Abstract
Introduction Mobile electronic devices have taken over and/or complemented the role of optical, non-optical, electronic, and computer-based resources in addressing functional limitations in people with low vision.
Objective Considering the wide and continuous use of mobile electronic devices in people’s lives, this study aimed to characterize smartphone and/or tablet applications that assume the function of Assistive Technology (AT) resources and are used in the daily lives of people with low vision.
Method The investigation was descriptive in nature, under a case study design. Twenty-eight people with low vision, all members of an existing WhatsApp group, participated in the study. Data collection took place individually in the virtual space of this application, through semi-structured interviews. The data were transcribed and analyzed based on grounded theory.
Results Participants reported using 50 applications that enabled the use of mobile devices as AT, as well as nine accessibility features that guaranteed users’ access to the mobile devices themselves.
Conclusion It was possible to identify the potential of these applications in solving difficulties faced by people with low vision, as well as to portray how this population has benefited from new possibilities in AT and in information and communication technologies across activities of navigation, food consumption and shopping, household tasks, recreation and socialization, contrast, communication, and work and academic tasks.
Keywords:
Self-Help Devices; Vision, Low; Mobile Devices
Introduction
Visual impairment is determined by changes in the visual system that cause the inability to “see” (blindness) or “see well” (low vision), that is, it refers to the total or partial impossibility of visual ability (World Health Organization, 2019). According to the eleventh version of the International Statistical Classification of Diseases and Related Health Problems (ICD-11), a person is considered blind when their corrected visual acuity in the better eye is less than 20/400 and/or their degree of constriction of the central visual field in the better eye is less than 10 degrees; and considered a person with low vision when their corrected visual acuity in the better eye is less than 20/70 and greater than or equal to 20/400 (World Health Organization, 2019).
It is understood that partial vision loss can cause changes in the lifestyle and occupational involvement of people with this condition (Ferroni & Gasparetto, 2012). Reading books, tickets, panels, and posters; identifying traffic signs, colors, and banknotes; recognizing people; and driving often become impossible (Fok et al., 2011). However, the use of Assistive Technology (AT) has enabled people with low vision to lead active and productive lives, with a degree of independence not experienced a few decades ago, which impacts their occupational performance (Geruschat & Dagnelie, 2017; Mello & Mancini, 2007). For this reason, it can be said that AT is related to aspects of the occupational therapy domain, “[…] which reside in knowledge about the relationship between the individual, their involvement in significant occupations and the social and environmental contexts in which they are inserted” (Teodoro et al., 2023, p. 3).
The concern of researchers and contemporary companies with the needs of access to technological products and their usability in overcoming everyday barriers has led to the creation/production of more devices every day within the perspective of universal design and assistive technology resources (Pache et al., 2020; Sausen & Frozza, 2022). Computers, smartphones, tablets and notebooks generally come equipped with accessibility features that seek to solve access problems for people with disabilities. Due to this concern, these resources have the potential to improve the daily lives of people with disabilities (Manduchi & Kurniawan, 2017).
Mobile electronic devices, in the case of people with low vision, have taken over and/or complemented the place of conventional AT resources in solving functional problems, which has been occasionally described in the literature (Cook & Polgar, 2015; Fok et al., 2011; Manduchi & Kurniawan, 2017; Thomas et al., 2015). Furthermore, the Convention on the Rights of Persons with Disabilities and the legislation that enacts its precepts establish the right of people with disabilities to participation and social inclusion, which can be supported by AT (Organização das Nações Unidas, 2006). Considering the wide and continuous use of mobile electronic devices in daily life, this article aims to characterize smartphone and/or tablet applications that serve as Assistive Technology (AT) resources and are used in the daily lives of people with low vision.
Methodology
The method adopted in this investigation was descriptive in nature, under a case study design. Case study research aims to promote an in-depth analysis of the issue investigated, within its context, in order to understand the problem from the perspective of the participants (Merriam, 1998; Simons, 2009; Stake, 2006; Yin, 2014). This research design focuses on understanding how specific groups of people face certain problems, adopting a holistic view of the situation (Merriam, 1998).
In this context, the case study constitutes the most appropriate design for the present investigation, which aims to understand a contemporary phenomenon within a bounded system, based on the perceptions of the research subjects. These characteristics corroborate the definition of Yin (1994, p. 13), who conceives a case study as an “[…] empirical investigation that investigates a contemporary phenomenon within its real-life context, especially when the boundaries between the phenomenon and the context are not clearly evident”.
The aspects mentioned in this definition are present in the object of study. The contemporary nature of the phenomenon in this research lies in understanding the use, role, and everyday application of mobile electronic devices as AT. This understanding came from the reports of users with low vision, collected through semi-structured interviews. The delimitation of the system, or social unit investigated, is manifested in the selection of participants within a social media environment, a WhatsApp group.
The research was submitted to the Human Research Ethics Committee of the Federal University of São Carlos (UFSCar), Brazil, and approved under CAAE 74755017.8.0000.5504. After approval, participants signaled their consent to the Free and Informed Consent Form by responding “I accept to participate in the research” in a voice message recorded in the WhatsApp application.
Twenty-eight people with low vision participated in the study, selected from the Stargardt group1 on WhatsApp, which comprised 104 members: people with visual impairments as well as their parents/guardians and spouses. The research included people with low vision, over 18 years of age, who used AT applications on smartphones or tablets. Of the 104 members of the Stargardt group, 28 gave consent, met the inclusion criteria, and completed their participation by responding to the semi-structured interview script.
Of the participants, 13 were women (46%) and 15 were men (54%), with an average age of 35 years (range 18 to 63 years). They were distributed across the Brazilian territory, with one resident abroad, representing ten states/locations: São Paulo (n=13), Minas Gerais (n=6), Paraná (n=2), Bahia (n=1), Rio de Janeiro (n=1), Rio Grande do Norte (n=1), Rio Grande do Sul (n=1), Santa Catarina (n=1), Tocantins (n=1), and New Jersey, United States (n=1).
Regarding level of education, 32% (n=9) had completed higher education, 25% (n=7) postgraduate studies, 18% (n=5) high school, 11% (n=3) incomplete higher education, 7% (n=2) a technical course, 3.5% (n=1) incomplete secondary education, and 3.5% (n=1) incomplete elementary education. The group's occupational status was categorized as: in professional activity, 46.4% (n=13); out of work or unemployed, 14.3% (n=4); retired, 17.8% (n=5); and students, 21.5% (n=6).
Regarding the cause of low vision, 96% of participants were affected by Stargardt disease, and of these, one participant had both Stargardt disease and retinitis pigmentosa. Stargardt disease is a progressive, hereditary, autosomal recessive retinal dystrophy, usually bilateral, which often begins in the first two decades of life and mainly affects central vision (Aragão et al., 2005). Furthermore, one participant (3.5%) declared strabismus and nystagmus as the cause of their low vision; however, these symptoms are not characterized as a cause, which leads us to conjecture that the participant did not know the pathology underlying their visual condition. In this universe, based on visual acuity, 32% (n=9) had moderate visual loss (<20/60 and ≥20/200); 46% (n=13), severe visual loss (<20/200 and ≥20/400); 18% (n=5), profound visual loss (<20/400 and ≥20/1200); and 3.5% (n=1) were unable to provide this information (Ferroni & Gasparetto, 2012). Regarding the approximate age at diagnosis, 46.4% (n=13) reported having been diagnosed before the age of 10, 32% (n=9) between 10 and 20 years, 18% (n=5) between 20 and 30 years, and 3.6% (n=1) over 30 years of age. Regarding visual field impairment, 82% (n=22) indicated a loss of central vision, and 18% (n=5) had both central and peripheral visual loss.
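The visual-loss bands used above can be expressed as a small classifier. The sketch below is illustrative only: the function and its name are our assumptions, and the bands are taken from the Snellen fractions quoted in the text, compared as decimal acuity values.

```python
def classify_visual_loss(snellen_denominator: int) -> str:
    """Classify visual loss from Snellen acuity 20/x in the better eye,
    using the bands cited in the text (Ferroni & Gasparetto, 2012)."""
    acuity = 20 / snellen_denominator  # decimal acuity, e.g. 20/200 -> 0.1
    if acuity >= 20 / 60:
        return "no significant loss"
    if acuity >= 20 / 200:             # <20/60 and >=20/200
        return "moderate visual loss"
    if acuity >= 20 / 400:             # <20/200 and >=20/400
        return "severe visual loss"
    if acuity >= 20 / 1200:            # <20/400 and >=20/1200
        return "profound visual loss"
    return "near-total visual loss"
```

For example, an acuity of 20/100 falls in the moderate band, while 20/300 falls in the severe band.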
Data collection was carried out through semi-structured interviews in the virtual space of the WhatsApp application. After screening against the inclusion criteria, participants were contacted individually for a private conversation between researcher and subject on the application, to schedule the interview.
At the scheduled times, interviews were carried out using the recorded voice message feature. With each participant, in a private conversation (outside the group), the main researcher (first author) conducted the interview by sending one question at a time as a recorded voice message, and the participants answered in the same way: after listening to a question, the participant responded via audio recording. When necessary, after listening to the participants' responses, additional questions were asked. Communication was synchronous, and the duration of the interviews varied from 59 minutes to three hours and nine minutes. When unforeseen events occurred, interviews were interrupted and resumed at a time more convenient to the participant, but always carried out synchronously.
The transcribed texts were organized into categories – applications used, functionality, positive points, negative points, and demands – which were analyzed according to grounded theory. This article focuses on discussing just one category: “applications used”. To organize and classify the applications identified in this category, functional and structural criteria present in conventional AT resources for people with visual impairments were used, resulting in the proposition of 11 categories of applications, detailed in the results.
In describing the results, the letter P was adopted to identify the research participants, followed by cardinal numerals (1, 2, 3...) indicating the order of the interviews; that is, participant P1 was the first to be interviewed, and so on. To systematize the transcription, the following signs commonly used in transcribing oral information were adopted: (+) for pauses; [...] for deleted sections; ( ) when part of the speech could not be understood and what was heard had to be assumed; UPPERCASE when syllables or words were pronounced with greater emphasis; and (( )) for comments inserted by the researcher.
Results and Discussion
Fifty application programs (apps) that enabled the use of mobile devices as AT were identified, along with nine accessibility features through which users were guaranteed access to the mobile devices. To organize the 50 application programs listed, four application categories were created according to the following inclusion criteria:
- AT applications (n=26): when the genesis of their creation and their purpose included meeting needs directly related to people with visual impairments. Example: when developing an electronic magnifier/video magnifier application, the manufacturers aimed to help people with reduced vision.
- Applications (used) as AT (n=10): when their design was not intended to assist people with visual impairments, but the applications' functions, used strategically, offered conditions for access to visual information. Example: the Google Translate app offers scanner and audio-reading tools for translated content. People with low vision, however, use them to access information in correspondence: they configure the application to translate “Portuguese to Portuguese”, scan the correspondence (invoices, bills, letters), submit it for translation processing, and ask the app to read the “translated” content aloud. These are, therefore, applications whose functions, used strategically by people with low vision, make it possible to access printed content, so that they act as AT.
- Applications that assume an AT function (n=7): this category contains apps that were not designed for people with visual impairments and whose functions are not related to solving problems faced by this public, but whose simple availability on devices with accessible interfaces makes it possible to take advantage of their functionalities. Example: the Clock app allows visually impaired people to access the time using magnifiers and/or screen readers. Before such mobile devices, a person would have had to purchase a talking or tactile watch.
- Applications that make content accessible and modify environmental conditions (n=7): this category includes applications that made content accessible, such as scanners and optical character recognition (OCR) programs, and applications that modified environmental conditions, such as the sharpness of digitized images and ambient lighting (the smartphone flashlight).
Figure 1 illustrates the categorization of the applications and accessibility features identified in this research. It is important to highlight that accessibility features are native, built-in applications that allow the device to be used and are generally employed in conjunction with other apps. All applications and accessibility features mentioned in this research can be considered AT, as they enhance visual functioning and enable people with low vision to perform their daily activities, contributing to their autonomy and participation in the most varied activities (Ferroni & Gasparetto, 2012).
Figure 1. Representation of the use of mobile devices as AT, highlighting the interrelationship between native applications (accessibility features) and the apps classified as AT, used as AT, that assume an AT function, and that make content accessible and modify environmental conditions. Source: Prepared by the authors, based on the applications and accessibility features listed by participants with low vision.
Regarding their function, the applications used on mobile electronic devices were generally equivalent to conventional resources. Due to this similarity, functional and structural criteria present in conventional resources were used to classify the aforementioned applications and accessibility features. In this way, 11 categories of applications were defined: 1) Electronic magnifiers/video magnifiers; 2) Text readers; 3) Scanners and OCR programs2; 4) Identifiers; 5) Keyboard view; 6) Banknote identifiers; 7) Orientation and mobility related to public transport; 8) Geolocators; 9) Digital and PDF bookshelves; 10) Utilities; and 11) Modification of materials and environmental conditions.
Electronic magnifiers/video magnifiers
The electronic magnifiers and video magnifiers category encompasses applications that, using the smartphone and tablet camera, capture and focus the image and provide the magnified image on the device screen. Thus, when pointing at a certain object or printed content, it allows you to enlarge and reduce the image captured on the device. The resources in this category have common characteristics described in conventional electronic resources: camera, lens and screen (Geruschat & Dagnelie, 2017).
As for the specifics, video magnifier applications have functions such as image freezing, lighting and changing contrast. Image freezing allows users with low vision to capture an image they want to view (usually texts), avoiding loss of focus when accessed directly through the camera. Furthermore, when freezing the image, it is possible to provide greater magnification, accumulating the application's magnification with the device's own accessibility features, such as zoom (screen enlargement). The lighting in some of the applications is carried out using the cell phone's flash, which illuminates the surface to be magnified by the camera, enabling better image quality. These features can also allow you to change the contrast of the enlarged text, placing a black background and white letters, or a blue background and yellow letters, among other options, depending on the specifics of the applications. Regarding this category, the following excerpts illustrate how it works:
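The magnification and contrast-inversion features described above can be sketched on a toy grayscale image. This is a minimal illustration in pure Python; the function names and the tiny pixel grid are hypothetical and do not reproduce any of the apps mentioned, which operate on live camera frames.

```python
def magnify(pixels, factor):
    """Nearest-neighbour magnification of a grayscale image
    (a list of rows of 0-255 values), as a video magnifier might
    apply to a frozen frame before the device's own zoom is added."""
    out = []
    for row in pixels:
        zoom_row = [p for p in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(zoom_row))
    return out

def invert_contrast(pixels):
    """Swap dark and light values, turning black-on-white text into
    the white-on-black presentation many users with low vision prefer."""
    return [[255 - p for p in row] for row in pixels]

frame = [[0, 255],
         [255, 0]]           # a tiny frozen "frame": 2x2 pixels
big = magnify(frame, 2)       # becomes a 4x4 image
neg = invert_contrast(big)    # inverted-contrast version
```

A real app would chain these steps with the camera capture and the freeze function, so that magnification from the app accumulates with the device's accessibility zoom, as Excerpt 3 describes.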
Excerpt 1: Super vision + is an application that works almost like an electronic magnifying glass. So it (+) focuses, right. We can focus, for example, on a tag, a label, a price in the supermarket (+), quick things (+). We have to focus, find the distance to focus clearly. It has a very large, very wide magnification. And there's the flash and there's the freezing. It freezes the image so we can read it, move the photo (+), it's as if it were a photo. Yes, move it as if it were a frozen photo, the frozen image. This also makes it easier when trying to read something that is a little difficult, that is wobbling a little. And it's very good, I really liked this app (P24).
Excerpt 2: [...] I hadn't even remembered, there's a magnifying glass that I downloaded. It's good, but I hardly use it anymore. I used it a lot when I downloaded it. Yeah, a really cool magnifying glass, cool, but I hardly use it anymore. It even changes the color of the text, but the big problem with using it is that it has to be quick. You can't read a large text. It even changes the colors, it adds contrast, it's cool (P16).
Excerpt 3: answering your question: yes, I only use these resources all the time. Let's go. The cell phone itself, I have an iPhone, the cell phone itself already has mechanisms that help in this process, to overcome this difficulty. On the iPhone, I use zoom, the one where you tap the screen twice with three fingers to enlarge. And I also use the camera itself, to take photos of a restaurant menu, an advertisement, both in short and from a longer distance. [Of] application programs I use a magnifying glass too, I think the name is super vision. It has an enlargement function, obviously, and a freezing function. I can freeze the image and then use my fingers as a pinch, that command to increase and decrease the screen. Its magnification is very good, I can't even say how much, I never use maximum magnification (P20).
Although there are specific applications that act as electronic magnifiers and video magnifiers, many subjects mentioned using the cell phone camera together with the screen magnification features (Excerpt 3). When photographing an image and then enlarging the photo, research participants reported being able to access information just as when using electronic magnifier applications. Similar strategies were observed by families and teachers of children and young people with low vision, who used the magnification functions of mobile devices to magnify text or images and access information more independently (Thomas et al., 2015). Therefore, due to its functional characteristics, the camera, when used for these purposes, could be classified by the researchers in this investigation as an AT application.
Text readers
Text readers generally work by transforming written content (accessible PDFs or digital books) into sound information (Alves Guimarães et al., 2023). Among the main specificities of the applications mentioned, the following possibilities were highlighted: changing voices and reading speed, reading in different languages, changing the text font, highlighting lines and words as they are read, highlighting excerpts and inserting comments in text and audio form, transferring excerpts of text from the internet to be read, and copying content from other applications. Each text reader application presents a different set of functional options, as illustrated in the following excerpts:
Excerpt 4: The first application I use is an application called @voice. This application is a....TTS, which is text to speech. He reads the text out loud, right? I know there are other applications that perform the same function, but this application, the voice it uses is from Android itself (++), right, which is my cell phone's platform. And then I can/I find a natural voice, a good voice, so I can manipulate the speed well, which is pleasant and at the same time agile so I can hear it more quickly. This @voice allows you to paste a long text into it, and it reads the entire text. So, with some speed and pause features. I use this app mostly to read long texts. [...]. Another possibility is that this @voice, you can copy any text from the internet, anywhere you can copy a text, you can use the share function to throw that text into the software and it automatically reads it too. It makes it much easier for me to read long texts, in fact it wouldn't be reading, it would be listening to the texts. They are long texts that I would have to strain my vision to read (P21).
Excerpt 5: Voice Dream is great! I've always really liked reading, so I download books, and there's the positive point of being able to change the speed of speech, change your voice, stop exactly where you left off/and then, when you return, it's there at the same point, being able to go back, being able to move freely through the text, right (+), so that you have the sequence and the correct understanding of what you are reading (P10).
These applications impact the reading habits of this population, providing immediate access to any content written in digital format. They are used more frequently in long readings, which minimizes the effects of visual strain.
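The adjustable-speed and resume-where-you-left-off features described in Excerpt 5 can be sketched as a small class. This is a schematic illustration under a simple words-per-minute model; the class and its interface are our assumptions and do not reproduce any specific app, and the speech synthesis itself is outside the sketch (read() just returns the words that would be spoken).

```python
class TextReader:
    """Minimal sketch of two text-reader features participants valued:
    adjustable speech speed and resuming exactly where one left off."""

    def __init__(self, text: str, words_per_minute: int = 160):
        self.words = text.split()
        self.wpm = words_per_minute   # user-adjustable speech speed
        self.position = 0             # persists between reading sessions

    def read(self, seconds: float):
        """Return the words 'spoken' in the given time, advancing the
        saved position so a later call resumes at the same point."""
        n = int(self.wpm / 60 * seconds)
        chunk = self.words[self.position:self.position + n]
        self.position += len(chunk)
        return chunk
```

Raising words_per_minute models the faster playback experienced users prefer, while the stored position models returning to a book "at the same point", as P10 describes.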
Zen et al. (2023) described a systematic mapping of the literature on AT that helps people with visual impairments access digital systems, finding that many advances have been described, including the use of screen readers. However, they indicate difficulties that can make the experience with information and communication technologies through screen readers poor: a) information overload, because screen readers read sequentially in a linear flow, generally from top left to right; b) to locate a desired element/section, users often need to jump through a large list of sequentially organized elements; c) use may be affected if the user is in places with noise or limited privacy; d) screen readers cannot describe the images or layout of a given interface; e) screen reader developers generally do not have visual impairments themselves, showing limited understanding of the needs of people with visual impairments. Despite the difficulties reported, the authors point out that many principles of screen readers are common, allowing generalization. Understanding the limits of AT use can help and direct habilitation and rehabilitation services, especially those related to reading tasks (Zen et al., 2023).
Scanners and OCR programs
Often, images captured by the camera are not compatible with text readers and, consequently, cannot be read. For these cases, there are solutions based on computer vision. This type of technology, also known as machine vision, seeks to allow people with disabilities to “see” what people without disabilities see. The basic approach is to use a form of artificial intelligence to analyze visual information from an image or video acquired by a camera and use software algorithms to infer important visual elements (Coughlan & Manduchi, 2017).
Camera-based access to visual information provided by computer vision is far from a solved problem, but progress over the last few decades has led to a variety of successful algorithms, including OCR and the recognition of objects (including faces), colors, barcodes, and banknotes (Coughlan & Manduchi, 2017).
At first, this software was only present on computers, but as these devices became more powerful and compact, they enabled mobile platforms, such as smartphones, that put the power of computer vision in the user's hands (Coughlan & Manduchi, 2017). This trend, described in the book Assistive Technology for blindness and low vision (Manduchi & Kurniawan, 2017), was also recorded in this work.
Participants transformed files that were incompatible with text readers, as well as camera captures (printed matter in image format), into accessible digital files with the help of scanning applications and OCR programs. Applications in this category digitize printed files; OCR recognizes the written content, and the images are converted into accessible digital files. After this transformation, it is possible to share the file with text reader applications, or to use the device's screen reader and/or screen magnifier, and thus gain access to printed information and the textual content of images.
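The scan-recognize-share pipeline just described can be outlined in a few lines. In this sketch, recognize_text is a hypothetical stand-in for a real OCR engine (it simply decodes bytes as text), and the sample invoice string is invented; the point is the flow from capture to an accessible file that reader apps can consume.

```python
def recognize_text(image_bytes: bytes) -> str:
    """Hypothetical stand-in for a real OCR engine: here it pretends
    the 'image' encodes UTF-8 text, standing in for character
    recognition on a photographed document."""
    return image_bytes.decode("utf-8")

def make_accessible(image_bytes: bytes) -> dict:
    """Turn a camera capture into an 'accessible file': the recognized
    text plus the kinds of AT that can now consume it."""
    text = recognize_text(image_bytes)
    return {
        "text": text,
        "readable_by": ["text reader", "screen reader", "screen magnifier"],
    }

# An invented bill, like the correspondence P22 describes scanning.
doc = make_accessible("Fatura: R$ 120,00 - vence 10/05".encode("utf-8"))
```

After this step, the recovered text can be shared with a text reader or enlarged by a screen magnifier, exactly the hand-off the participants describe.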
The usability of these resources is illustrated in the following excerpts:
Excerpt 6: The one above that I sent you ((the link)) is a text fairy, it basically reads the text/takes a photo and converts it to OCR. At least you can get the text, select and copy it, you know! In that other one, which is EYE-D [...], it doesn't allow it. It just takes the photo and reads it. So (++), it reads it and you can't repeat it. In the version I have, which is the free version. To repeat you have to take the photo again and wait to hear the text (P26).
Excerpt 7: Application program (+) application program (+) like this: directly for the difficulty of vision, like this (+) I don't use any. I use my cell phone to read, and it helps me a lot. And the application I use is like a scanner ((office lens)), which uses the cell phone camera, right; and, for example, when there is a document, a physical page of a book that I need to read, I take the photo with my cell phone, or even scan it on the printer, but it is easier, often, in the classroom, I take the photo with my cell phone, and then it generates a PDF, right. And then I use the zoom on my phone, for reading, right (P2).
Excerpt 8: There is an application that is Google Translate. It's precarious for me, it's not very good, but it's a great asset for me. Because, sometimes, I have difficulty, I live alone, and I have difficulty reading the correspondence that arrives, the bills, the amount of the bills, the payment date. So, through this application, Google Translate, I can scan, right (+), digitize the document, and I can, through it, see the values and dates ((the participant also uses the voice over that reads the textual information digitized by Google Translate)) (P22).
In this context, scanning applications and OCR programs allow access to information contained in printed materials, one of the biggest barriers to reading access faced by people with visual impairments. Additionally, digitized content can be directed to vision-enhancing applications, such as screen magnifiers, or visual replacement applications, such as text readers and screen readers.
Identifiers
In addition to applications that assist in reading texts (magnified or audio), the research subjects cited applications that identify objects, colors, barcodes, QR codes and texts contained in images, signs, pictures, business cards and scanned printed matter (Sausen & Frozza, 2022; Sonza et al., 2016). Generally, these applications are used to recognize food product packaging, accessing label information, such as: product name, nutritional value, expiration date, how to use, among others.
Applications that identify colors and objects can work through video calls to volunteers who provide visual assistance (Be My Eyes), or through a database of products and objects previously cataloged in the application (TapTapSee) (Sonza et al., 2016). Using the camera, the application tries to recognize the object by similarity or by reading QR codes and barcodes, and speaks the registered information through audio.
Excerpt 9: [...] at home I have already used Tap TapSee. It's an application that uses the camera and you point your cell phone at the object, right. For things you need, which is usually: (laughs) kitchen things, right (+), expiration date, preparation method, flavor, because it says it all. If you point your cell phone at a candy, it tells you the brand of candy, the flavor and even the color of the packaging. So, yes, I have already used it, because, when we are alone, the way to prepare a, oh, I'll give you an example, a polenta or a cake, things like that. Yes, I already used it (P23).
According to Coughlan & Manduchi (2017), camera-based access to information implicitly performs some type of sensory substitution. Visual data is “processed” by the computer vision algorithm, and the product of this processing is communicated to the user through their remaining senses. The authors point out that the appropriate positioning of the camera, so that the image is captured clearly and sharply, is a challenge when using these applications by people with visual impairments.
Some applications that enable access to information through the camera are banknote identifiers. Their function is to identify a banknote and then read its value aloud (in the language of the country of origin).
Excerpt 10: [...] the Blind wallet is used to identify banknotes or coins. Because, especially here in the United States, where the bills are the same color, and the coins don't have numbers, identification is very difficult. So, I use this app to help identify the note or coin (P15).
The main specificity of the apps in this category is the identification of banknotes from different countries; in the case of the application used by the participant who lived in the United States (P15), it also recognizes coins.
Keyboard view
Regarding writing, only three applications were mentioned that are generally related to enlarging keyboard icons and changing contrast. The following excerpt exemplifies this purpose:
Excerpt 11: Currently, I'm using a giant keyboard, it's not that big, the letters are so huge, but it's the best I've found at the moment. So, I need a keyboard that has larger letters so I can see and type, right? Because common or native Android keyboards, the keys are very small and I can't read them or put my finger in the right place. So one feature I need to use is the enlarged keyboard (P21).
It was found that users with greater visual acuity make use of these applications, which make it easier to see the keyboard, while the others use the accessibility features of mobile devices to carry out writing activities, such as typing by voice command and voice assistants. Dias & Vieira (2017) carried out bibliographical research to understand the process of teaching and learning reading and writing for people with visual impairments, describing the instruments and strategies used and the challenges faced with the use of technologies. The authors recognize the importance of computer technologies in expanding opportunities to perform reading and writing tasks, but argue that, in the teaching and learning process, they should be used as complements to the Braille system, as a way of giving it meaning (Dias & Vieira, 2017). The expansion of opportunities to obtain and record information provided by strategies such as enlarged keyboards and voice-command typing gives access to information technologies and assists the functional performance of daily activities, complementing other forms of recording.
Geolocators, orientation and mobility related to public transport
The applications that assist in orientation and mobility tasks were divided in this investigation into two categories: geolocation and orientation applications and mobility related to public transport.
In relation to public transport, the applications described by the participants aimed to provide information about bus stops and routes and to show the location of the bus in real time, allowing people with low vision to predict when the bus will arrive at the stop and reducing the chances of boarding the wrong bus due to lack of visual access to the itinerary sign. As for their specificities, some applications were aimed at the general population and only provided the bus's real-time location, giving users an estimated arrival time; more complex versions with specific tools allow people with visual impairments to access an exclusive area of the application. In this area, users can register their favorite stops and find the stop closest to their location. The available lines are displayed in the app and, upon selecting the desired line, the driver is notified that a visually impaired person is waiting. Additionally, the user's cell phone vibrates as the bus's arrival time approaches. This is illustrated in the following excerpts:
Excerpt 12: [...] I use Moovit, which is not an accessibility application for the visually impaired, but it is very important for me. It's for getting around here in São Paulo, which is a bus information app, when the bus is arriving, how long it's taking. For people who can't see bus signs, it's a big help (P10).
Excerpt 13: [...] there is another one that I use very frequently called SIU Mobile, which is an app for Belo Horizonte, for transport in Belo Horizonte, which I use/I catch buses with this app. In fact, I only catch the bus with this app, without this app I can't catch the bus. Because in this application you register the number of / its function is people with visual disabilities/ people with disabilities in Belo Horizonte who use public transport to take the bus with more autonomy. This is the function of the application. This application is available to the entire population with general information. And then there is a part in this application where you enter the beneficiary's number, the free pass number. And then you access an area that is restricted to people who benefit from the free pass, that is, people with disabilities. And then you register your favorite stops, or if you are in a place that is not your favorite stop, you (+) and (=) based on the location, right, from your cell phone's GPS, it sees where you are. Then it says you are close to such and such a stop. Then you will have to find out which stop you are close to. But this is usually not complicated. And then the lines are all there. Of course it is accessible with VoiceOver, etc., it must also be accessible with TalkBack, but anyway, it is accessible. And then you select the bus you want, and you let the driver know you're there. And then this app vibrates. Like, with five minutes left, it makes five vibrations, three and so on. From five, I think. And then the driver stops and calls you. So, it's an app that I use a lot, a lot, a lot to catch the bus (P8).
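The countdown behavior P8 describes, with the phone vibrating more times the closer the bus gets, can be sketched as a simple mapping from time-to-arrival to vibration pulses. The thresholds below are purely illustrative assumptions, not SIU Mobile's actual logic.

```python
# Illustrative sketch only: maps minutes until the bus arrives to a number
# of vibration pulses, mimicking the countdown P8 describes ("with five
# minutes left, it makes five vibrations, three and so on"). The cutoff at
# five minutes and the final single pulse are assumptions for illustration.

def vibration_pulses(minutes_to_arrival: int) -> int:
    """Number of pulses to emit for a given time-to-arrival, in minutes."""
    if minutes_to_arrival > 5:
        return 0  # too early: no alert yet
    return max(minutes_to_arrival, 1)  # 5, 4, 3 ... down to one final pulse

for m in (7, 5, 3, 0):
    print(m, "->", vibration_pulses(m))
# 7 -> 0, 5 -> 5, 3 -> 3, 0 -> 1
```

The design point the excerpt highlights is that the alert is non-visual and graded: the user does not need to watch the screen, only to count pulses.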
Even applications that are not specific for people with visual impairments helped them board the correct bus. Furthermore, people with low vision reported using geolocation applications to find their way around and move around safely. These resources were often used to confirm addresses (barrier to visual access to signs with street names) and to move around unfamiliar spaces (Excerpt 13). In all cases, compatibility with accessibility features was highlighted as essential.
Excerpt 14: This Via Optanave app, I use it the most. Especially when I need to know street names. Or the place where I have to go, the direction I have to take, and things like that. Even though, for example, I can manage well on the street, because I have lateral vision. Often, I ask/when I'm confused about a street name, I end up/learned how to use it, right? I had to learn the tricks, to know how to use them. Understanding that it doesn't always work 100%, but it gives me good guidance when I need it, knowing where I am, which street I'm on. [...]. Waze is when I'm in a car, I'm riding with someone, we need to get around, know where to go (+) and things like that. I also try to enlarge the screen, it doesn't always work, I think it's the same system as Uber, but it actually ends up helping a lot (P14).
The use of GPS applications and conventional accessible geolocation resources is also recorded by May & Casey (2017). According to the authors, navigation difficulties for blind people and those with low vision derive from the impossibility and/or insufficiency of seeing location information around them, such as signs and landmarks, making it difficult to determine the precise direction to move efficiently from one place to another. Thus, these users' orientation and mobility needs center on obtaining, interpreting and using navigation information, which is currently conveyed by accessible geolocators, including those present on mobile electronic devices.
Applications for orientation and mobility for people with visual impairments were also described in the research by Silveira & Dischinger (2017), who sought to understand the perceptions and experiences related to the use of public transport by blind people and people with low vision in different regions of Brazil, as well as their concerns regarding spatial accessibility, within the scope of an online discussion group. Based on the participants' reports, it was observed that the informants idealize an information system based on sound, using technologies mainly linked to GPS systems, whether dedicated equipment or smartphones, through applications. Furthermore, informants were concerned about being informed at every stage, from traveling on foot, to shelters and stations, and during trips (Silveira & Dischinger, 2017, p. 133). The authors consider, however, that enabling spatial accessibility for this population from the place of departure to the final destination, considering travel on foot and by public transport, requires resources such as tactile cues, audible traffic lights, tactile maps and models, tactile flooring, audio announcements at boarding points and inside vehicles informing the next stops and final destination, and digital signs with high contrast and large letters for people with low vision (Silveira & Dischinger, 2017).
Considering these aspects, it can be inferred that the use of geolocation applications and the use of public transport can be incorporated into orientation and mobility training programs, in order to complement traditionally established orientation and mobility practices and resources.
Digital and PDF bookshelf
Another category mentioned by participants was that of virtual bookshelves and PDF readers. According to them, these applications are increasingly accessible: they are compatible with the screen readers of mobile devices, allow sharing and changes to font and contrast, and some already offer audiobooks. Internet access, the existence of applications that allow the acquisition of digital books, and their compatibility with the accessibility features of mobile devices have transformed the reading habits of this population, as highlighted in this excerpt:
Excerpt 15: [...] it greatly increases accessibility to content, books, articles, news, something that a person who has low vision and does not have access to this type of technology does not have access to this type of content. And it would probably be very limited to find books in Braille, or even magnifying glasses and conventional book readers (P15).
The instant transformation made possible by magnifier, reader and scanning apps, as well as the possibility of purchasing digital books with just one click, changes the scenario of this population's reading habits, constituting a response to the decline in reading and writing practices evidenced in the literature (Monteiro & Carvalho, 2013).
Utilities
This category groups applications that allow people with low vision to carry out activities simply because they run on a device (smartphone or tablet) with various accessibility features. Among the examples cited are clocks, alarms, calculators and diaries, whose information becomes immediately accessible with magnifiers, screen readers and voice assistants. These apps replace adapted clocks and alarms, talking calculators and enlarged diaries/calendars.
Excerpt 16: I use the calculator to do my quick math. I normally use the calculator with a screen magnifier. Then I tap three times and the screen enlarges. I can increase it to the size I need (+) that I may need (P14).
Excerpt 17: There is also Ok Google, which is an Android function. It also gives me a lot of information, it's like Siri on the iPhone. With this Ok Google I can ask things. So, for example: Ok google what time is it? And he tells me: now it's one hour and fourteen minutes, for example. Did you understand! So, this makes it a little easier for me to not have to look at things, right. Sometimes I ask: I say Ok Google, remember (+), then it can set an alarm. It can do a search for me, there are several things that Ok Google does. I also use it to open applications. It has several features and is called Google Assistant, it is a native Android function (P21).
The possibility of recording appointments and writing down content of interest in the calendar and notes applications was also highlighted by participants, as, in addition to the possibility of using the device's adapted keyboard and/or the keyboard's microphone, the content recorded later could be consulted with the use of magnifiers and screen readers.
The recorder is used to record academic or work content, as follows:
Excerpt 18: There's an application that I'm using now that I didn't use in the past, I didn't have that. It wasn't possible to do it this way, which is the voice recorder. So, when I mainly studied, I had to pay a lot of attention. When it got a lot better, it was when I started dating my wife, when we studied Engineering, she wrote things down and I was able to pay attention. So, today, for that, there is the issue of the voice recorder. So, when I go to a lecture, or I go to a meeting, or I go somewhere/to compensate for what I can't see, I'm using the voice recorder a lot (P14).
Banking apps allow participants to check their balance and statement, make transfers and payments, among other activities, thanks to compatibility with accessibility features (Excerpt 21). Likewise, a guitar tuner application gives people with low vision audio or magnified access to the instrument's tuning values, previously impossible with conventional tuners. Thus, the simple compatibility of these applications with accessible interfaces guaranteed people with low vision autonomy in their intended activities.
Modify materials and environmental conditions
These are applications that improve the ambient brightness and sharpness of images captured by the camera, as these excerpts illustrate:
Excerpt 19: The flashlight, the flashlight, I use every time I'm in a situation where there's little light in the place. And as time went by, it got worse for me, you know. So, when I go into the cinema, when I'm on the street and there's little lighting where I'm walking, or when I go down a staircase and there's little lighting for me. So, I use it in these places (P14).
Excerpt 20: Well, I've already used Cymera, which is a photo application that people use to edit photos. And I use it when I'm going to take a photo, and the photo doesn't come out sharp. Or when I go to take a photo of some paper and it doesn't come out sharp. There is a function called sharpness, where you swipe your finger and the photo becomes extremely sharp (P6).
The flashlight app uses the phone's flash to simulate a flashlight. This function was used in environments with poor lighting, for example when going down or up stairs where lighting conditions were compromised. The Cymera application, another example, was used to increase the sharpness of scanned images, ensuring optical character recognition in OCR programs or better viewing through screen magnifiers.
In general, it is clear that the 50 applications and nine accessibility resources were used to assist tasks related to reading, writing, navigation, entertainment, aesthetic, domestic and work activities, among others. A considerable portion of the occupational activities exemplified by the participants in this research were identified in the research by Stelmack et al. (2003) as activities that generate demands for AT devices for people with low vision. The authors investigated the perceptions of 149 men and women, over 50 years old, about the needs of low vision devices. The results demonstrated that the most frequently reported uses of low vision devices were reading tasks at different distances (near, intermediate and far), television viewing, recognizing people, and finding items. In addition to determining the frequency of use of AT devices and the respective occupations for which they were intended, Stelmack et al. (2003), through a literature review, created a list of approximately 60 occupations for which AT resources were considered useful. The main categories of occupations identified were: travel/navigation activities (e.g., finding addresses; recognizing traffic signs; seeing cars at crossings); food and shopping (e.g., identifying foods; reading menus); domestic tasks (e.g., reading measuring tapes and rulers; using scales; trimming bushes; taking care of the house); self-care (e.g., applying makeup; combing one's hair; shaving; cutting, filing and polishing nails); recreation/socialization (e.g., watching films on television, theater or sporting events at a distance; watching television up close); communication (e.g., scanning printed matter; reading large print); and contrast (e.g., adjusting for changes in lighting conditions; reducing glare indoors and in natural environments) (Stelmack et al., 2003).
Although the aforementioned occupations for which AT resources are useful were described almost two decades ago, relating them to the experiences of the participants with low vision in this research supports the incorporation of mobile devices into the list of AT resources for the habilitation and rehabilitation of people with low vision, whether in reading, accessing information, navigation, work, entertainment or domestic activities.
Final Considerations
Ongoing technological advancement allows devices to include, built into their operating systems, or to offer through their virtual stores, applications that assist people with disabilities in accessing and carrying out various activities; as a result of these benefits, such devices have taken a privileged place among the AT resources used by people with low vision.
By setting out to characterize the profile of AT resources on smartphones and tablets, this research identified 50 applications and nine accessibility resources used by people with low vision in navigation activities, food consumption and shopping, household tasks, recreation and socialization, contrast, communication, and work and academic tasks. Applications and accessibility resources were generally used in combination, with countless possible arrangements that meet the most varied visual conditions and interests of users, allowing the execution of different tasks. Among the great advantages of these devices, the accessibility features that allow users with low vision to access and acquire AT applications stand out.
Based on this survey, it was therefore possible to identify the use of these applications in solving difficulties faced by people with low vision, as well as to portray how this population can benefit from possibilities in AT and in information and communication technologies, and which tasks these resources have supported.
The results described in this research can contribute to services in habilitation, rehabilitation, activities of daily living, instrumental activities of daily living, special education, orientation and mobility, and health, some of which are the domain of occupational therapy. Knowing how mobile devices are used in the daily lives of people with low vision can influence the expansion and complementation of practices by different professionals, given the understanding of the possibilities and limits of using mobile devices as AT. The adoption of mobile devices as AT and their incorporation into AT implementation processes can enable people with low vision to acquire, after the learning process, the ability to perform tasks similarly to people without disabilities.
The survey was carried out with a group that was heterogeneous in terms of age, visual performance and work and academic activities, but not in terms of the cause of visual impairment; more diversity in this respect could provide a different profile of applications and tasks. We therefore suggest, for future studies, participants with greater diversity regarding the causes of low vision, in addition to investment in dissemination and teaching programs to increase the use of these applications.
1. The Stargardt virtual group is the social unit researched: an organization of people with low vision and their families in a social media space, the WhatsApp application.
2. Acronym in English for Optical Character Recognition.
How to cite: Borges, W. F., & Mendes, E. G. (2024). Assistive technology and low vision: applications and accessibility resources in mobile devices. Cadernos Brasileiros de Terapia Ocupacional, 32, e3746. https://doi.org/10.1590/2526-8910.ctoAO288437462
References
- Alves Guimarães, U., Ribeiro, M. Q. B., Gonçalves, M. A., Silva, J. S., Costa, J. A. S., Abreu, R. C., & Rodrigues, E. S. (2023). Os desafios enfrentados pelos professores na inclusão de crianças com deficiência visual na sala de aula. Revista Científica Multidisciplinar, 4(11), e4114325.
- Aragão, R. E. M., Barreira, I. M. A., & Holanda Filha, J. G. (2005). Fundus flavimaculatus e neovascularização subretiniana: relato de caso. Arquivos Brasileiros de Oftalmologia, 68(2), 263-265.
- Cook, A. B., & Polgar, J. M. (2015). Assistive technologies: principles and practices. St. Louis: Elsevier-Mosby.
- Coughlan, J., & Manduchi, R. (2017). Camera-based access to visual information. In R. Manduchi & S. Kurniawan (Eds.), Assistive technology for blindness and low vision (1st ed., pp. 219-243). London: CRC Press.
- Dias, E. M., & Vieira, F. B. A. (2017). O processo de aprendizagem de pessoas cegas: um novo olhar para as estratégias utilizadas na leitura e escrita. Revista Educação Especial, 30(57), 175-188.
- Ferroni, M. C. C., & Gasparetto, M. E. R. F. (2012). Escolares com baixa visão: percepção sobre as dificuldades visuais, opinião sobre as relações com comunidade escolar e o uso de recursos de tecnologia assistiva nas atividades cotidianas. Revista Brasileira de Educação Especial, 18(2), 301-318.
- Fok, D., Polgar, J. M., Shaw, L. E., & Jutai, J. (2011). Low vision assistive technology device usage and importance in daily occupations. Work, 39(1), 37-48.
- Geruschat, D., & Dagnelie, G. (2017). Low vision: types of vision loss and common effects on activities of daily life. In R. Manduchi & S. Kurniawan (Eds.), Assistive technology for blindness and low vision (1st ed., pp. 59-79). London: CRC Press.
- Manduchi, R., & Kurniawan, S. (2017). Assistive technology for blindness and low vision. London: CRC Press.
- May, M., & Casey, K. (2017). Accessible global positioning systems. In R. Manduchi & S. Kurniawan (Eds.), Assistive technology for blindness and low vision (1st ed., pp. 81-103). London: CRC Press.
- Mello, M. A. F., & Mancini, M. C. (2007). Métodos e técnicas de avaliação nas áreas de desempenho ocupacional. In A. Cavalcanti & C. Galvão (Eds.), Terapia ocupacional: fundamentação e prática. Rio de Janeiro: Guanabara Koogan.
- Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco: Jossey-Bass Publishers.
- Monteiro, M. M. B., & Carvalho, K. M. M. (2013). Avaliação da autonomia em atividades de leitura e escrita de idosos com baixa visão em intervenção fonoaudiologia: resultados preliminares. Revista Brasileira de Geriatria e Gerontologia, 16(1), 29-40.
- Organização das Nações Unidas – ONU. (2006). Convenção sobre os Direitos das Pessoas com Deficiência. Nova York: ONU.
- Pache, M. C. B., Costa, A. B., Souza, S. R., & Negri, L. H. (2020). SpeakCode: uma ferramenta de acessibilidade para pessoas com deficiência visual. Revista Brasileira da Educação Profissional e Tecnológica, 1(18), e7934.
- Sausen, F., & Frozza, R. (2022). Aplicativo para auxiliar pessoas com deficiência visual no reconhecimento de cédulas de dinheiro em Real com a técnica de Redes Neurais Artificiais. Revista Brasileira de Computação Aplicada, 14(3), 1-16.
- Silveira, C. S., & Dischinger, M. (2017). Orientação e mobilidade de pessoas com deficiência visual no transporte público: discussões através de grupo focal nacional. Revista Projetar - Projeto e Percepção do Ambiente, 2(3), 124-134.
- Simons, H. (2009). Case study research in practice. Los Angeles: Sage.
- Stake, R. E. (2006). Multiple case study analysis. New York: Guilford.
- Stelmack, J. A., Rosenbloom, A. A., Brenneman, C. S., & Stelmack, T. R. (2003). Patients’ perceptions of the need for low vision devices. Journal of Visual Impairment & Blindness, 97(9), 521-535.
- Sonza, A. P., Salton, B. P., & Carniel, E. (2016). Tecnologia assistiva como agenda de inclusão de pessoas com deficiência visual. Benjamin Constant, 22, 21-39.
- Teodoro, M. A., Rodrigues, A. C. T., & Baleotti, L. R. (2023). Ensino de tecnologia assistiva nos cursos de graduação em terapia ocupacional do Estado de São Paulo. Cadernos Brasileiros de Terapia Ocupacional, 31, e3424.
- Thomas, R., Barker, L., Rubin, G., & Dahlmann-Noor, A. (2015). Assistive technology for children and young people with low vision. The Cochrane Library, 2015(6), 1-28.
- World Health Organization – WHO. (2019). ICD-11 for mortality and morbidity statistics. Version: 2019. Retrieved January 23, 2024, from https://icd.who.int/browse11/l-m/en
- Yin, R. K. (1994). Case study research design and methods: applied social research and methods series. Thousand Oaks: Sage Publications Inc.
- Yin, R. K. (2014). Case study research: design and methods. Los Angeles: Sage.
- Zen, E., Siedler, M. S., Costa, V. K., & Tavares, T. A. (2023). Assistive technology to help the interaction between visually impaired and computer systems: a systematic literature mapping. ISys - Brazilian Journal of Information Systems, 16(1), 6:1-6:27.
Edited by
- Section editor: Profa. Dra. Carolina Rebellato

Publication Dates
- Publication in this collection: 04 Nov 2024
- Date of issue: 2024

History
- Received: 23 Jan 2024
- Reviewed: 14 Feb 2024
- Reviewed: 19 May 2024
- Accepted: 28 July 2024


