
7 Rethinking Accessibility, Assistive Technology, and Universal Design: A Collection of Tools and Thoughts

The content in this chapter represents the culmination of my fall research and thinking about accessibility. It’s rough, but it’s a start. For a very short overview of the project within the broader context of the DFW, you might enjoy the one-minute video below.

[Embedded video: one-minute project overview, presented without voiceover]

Since the video has no voiceover, a decision that I hoped would highlight a few of the accessibility issues I touch on in the project, I’ve also provided a transcript.

 


Project Prologue

Two years ago, when I was teaching at UW-Bothell, I took a Canvas course on Universal Design. Subsequently, I began to revise PowerPoint presentations with an eye towards more visuals and less text; I incorporated additional film clips and songs into class, the former generally with captioning, the latter paired with lyrics. For course assignments, I made directions available in multiple formats and, despite my increasing environmental guilt, always ensured that hard copies were available. Nonetheless, as a teacher who had worked extensively with first-generation and international students, and as a woman who had identified since 2009 as hard-of-hearing, with two cochlear implant surgeries and hearing aids to remind her, I was increasingly aware that my attempts to design for “all” (quotations to emphasize the problematic term) were well-intentioned, often well-received, but not always successful. This quarter, my DFW and a final assignment for my Collection Development course have offered a rare opportunity to re-think this cluster of issues and tackle instructional design with a more inclusive, equitable, and critical eye.

DFW & Ideation Background: Embedded Disability Discussions

My fall quarter DFW at UW-Seattle with subject librarian Elliott Stevens has focused on library instruction in four areas of English Studies: information literacy; general research support; podcast creation; and digital narrative design. Accessibility has been a crucial component of this work, particularly in helping students easily discover relevant resources and technologies. But the podcast and digital story sessions have also emphasized accessibility in another sense: transcripts should ideally accompany any podcast; sound equalization matters greatly for listeners; digital narratives must be captioned; and the platform we’ve used for these narratives, WeVideo, though remarkable, stymies screen readers and students who cannot use a mouse. Thus, one of the explicit DFW goals was oriented towards investigating accessibility in a Disability Studies context, since, as Elliott pointed out, the Digital Humanities (DH) has not fully grappled with these issues, and in some cases digital projects and the critical writing about them have occluded investigations of accessibility.

I initially approached this final assignment as an opportunity “to build a small collection of accessibility and/or disability online sites and/or tools which ideally could be added to Verletta Kern’s ‘Open Access’ research guide” (aka, “lib guide”) (Email to Helene, Nov 2019). What I’ve done, though, is amass a small collection of assistive technologies which can be appended to any research guide and incorporated into library sessions with the following goals.

 

Learning Goals

1. Making able-bodied students more aware of the various struggles those of us with a range of disabilities—physical, cognitive, perceptual, etc.—may encounter while working alone and with others, especially, though not solely, when creating digital projects.

 

2. Ensuring that students who identify as disabled and/or neurodiverse, and those who might not yet be sure, can easily access tools that will make their time in the library a productive, collaborative experience rather than an isolating, infuriating one.

 

3. Helping librarians find better ways to connect students with resources; recognizing that ADA compliance and/or accessibility training doesn’t end conversations about accessibility, but, in fact, starts them; and constructing more inclusive, equitable teaching sessions.

 

Ultimately, these goals may help all users think more critically about Universal Design principles, since those principles are not always oriented towards “cross-disability and intersectional approach[es]” or discourses (A. Hamraie, 2018, p. 461). In the end, making a small assistive toolset accessible will hopefully inspire more nuanced, collaborative design for accessibility, bringing students, librarians, and other stakeholders into meaningful interactions that ultimately result in equally meaningful changes within and beyond libraries.

Brief Literature Review

Writers including X. Wu (2010) and G.H. Williams (2012) highlight some of the notable principles of Universal Design (UD), including its investment in exposing the limitations of ADA guidelines and compliance in built environments (Williams, pp. 204–205). Wu, in particular, enumerates the ways UD tenets ensure flexibility and recognition for multiple ways of learning in classrooms, while Williams is more concerned with the Digital Humanities’ inability to translate some of these basic tenets to platforms and projects and, consequently, offers suggestions to surmount these barriers.

E. Ellcessor’s (2018) perspective augments Williams’ by discussing the ways that disability is often overlooked in discussions about digital tools and media. The writer takes issue with UD doctrines: “There is no perfect design for all people, with and without disabilities, and some people will always require specific accommodations for specific tasks. Too strong a focus on universal design can obscure these needs and even render disability (and accommodations) invisible” (Ellcessor, 2018, p. 111). Recently, feminist, design, and critical disability scholar A. Hamraie (2018) has offered a critical look at UD, noting its shortcomings and offering a rallying call for more intersectional and cross-disciplinary design approaches. Hamraie’s discussion of the Vanderbilt digital mapping project, which seeks to create a shared vision of a more inclusive and equitable campus, offers an especially insightful and nuanced view of how UD, when pushed into more collaborative, diverse, and knowing realms, can become more effective.

Assistive technologies, then, within this small body of literature, seem to inhabit an intriguing space. On the one hand, S. Hendren (2018), who, in tandem with Williams, problematically declares that all technologies are assistive, implies that since cell phones and glasses may be categorized in this manner, hearing aids and screen readers are, by extension, equal to them. In this sense, assistive technologies are part and parcel of UD, as all four of these tools can benefit sighted and unsighted, hearing and hearing-impaired persons. On the other hand, Ellcessor’s stance is helpful in pointing out that these technologies are not, in fact, parallel to each other. To believe that all technology is assistive is to obscure the embodied aspects of disability: we need these additional tools, beyond our cell phones and glasses, to navigate an able-bodied world which remains resistant to more symbolic forms of communication, such as Braille and ASL (J. Gugenheimer, et al., 2017).

The UW Library Assistive Technologies Quandary

Many UW librarians and staff are witting and unwitting practitioners of basic and more critically-inflected UD principles. Yet how many subject librarians, reference librarians, and library staff (including graduate and undergraduate staff) at UW can, during an instructional session, swiftly show students how to access screen readers or captioning programs on the school’s PCs and Macs? How many can then fluidly demonstrate best practices for using these tools? The UW Access Technology Center (ATC) maintains that “most” Suzzallo Library computers and all Odegaard Library computers possess programs such as JAWS (a screen reader), NVDA (a screen reader), Dragon NaturallySpeaking (dictation and transcription), and ClaroRead (a program which assists those of us who cannot easily read text). A handful of other sites on UW’s campus additionally house computers with these assistive technologies, as does the ATC office itself. Nonetheless, testing on four Suzzallo computers and two Odegaard computers this week revealed that the ATC needs to update its directions for accessing these tools, explicitly indicate which platforms they’re available on, and clarify whether ClaroRead and Dragon are bundled (or whether Dragon has been discontinued).

There are additional barriers to discovering and utilizing these tools, aside from the aforementioned ones. For instance, not all UW students study at the campus’s two major libraries, among them graduate students enrolled in the Public Policy and Genome Sciences programs. Many graduates and undergraduates who do frequent these sites may prefer to use their own computers. Some students, for mobility-based and other reasons, may actively avoid the basement of Mary Gates Hall, where the ATC and Disability Resources for Students (DRS) offices are housed. Moreover, students may not reach out to the ATC or DRS offices to discover these tools because they are unsure whether they have a disability or are embarrassed to ask for help. Others who do identify as disabled and/or neurodiverse might have had difficult experiences with institutional accessibility offices in the past and, consequently, actively avoid them.

Ultimately, unless training for librarians and staff is made more transparent, we cannot assume that they know where these computers are located or how to use these technologies. Yet for so many reasons, librarians are ideally positioned to help students access them. These reflections, based on working at UW and teaching at several schools, further catalyzed my interest in building a small collection of assistive tools that students can access during library sessions, which may take place within any of the campus’s libraries or classrooms.

Tool Criteria and Analysis

Criteria for Assistive Technology/Tool Selection
  1. No to low cost ($10 or less).
  2. Easy to connect to and/or download (i.e. takes 10 minutes or less).
  3. Available on multiple platforms or possessing equivalents on multiple platforms.
  4. Accessible and assistive for library teaching sessions.
  5. Helpful for a range of users’ embodied experiences.

Securing tools that meet all five standards is impossible, so attempting to discover technologies that meet some of these criteria has been a saner goal. Fortuitously, Elliott informed me about Tota11y, a free accessibility visualization toolkit, and NVDA. The former is extremely helpful for users navigating and designing digital projects, as it alerts them to problematic contrast, headings, labels, and links. Nonetheless, its screen reader is not as accurate as NVDA, which instantly led me to the conclusion that more than one tool is always needed for testing. After Tota11y’s screen reader led me to despair that the StoryMap I created this summer was entirely inaccessible, NVDA reassured me that while the map images were unreadable, the text I’d written and the Google map points I’d embedded were readable.
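To make the kind of check Tota11y automates more concrete, here is a minimal sketch, in TypeScript, of the WCAG 2.x contrast-ratio math that contrast checkers like Tota11y rely on. The function names and sample colors are my own illustration, not Tota11y’s actual code.

```typescript
// Sketch of the WCAG 2.x contrast-ratio calculation; names are illustrative.
type RGB = [number, number, number]; // 0–255 per channel

// Relative luminance, per the WCAG 2.x definition.
function relativeLuminance([r, g, b]: RGB): number {
  const [R, G, B] = [r, g, b].map((v) => {
    const c = v / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio = (lighter luminance + 0.05) / (darker luminance + 0.05).
function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG AA asks for at least 4.5:1 for normal body text (3:1 for large text).
const ratio = contrastRatio([119, 119, 119], [255, 255, 255]); // mid-grey on white
console.log(ratio.toFixed(2), ratio >= 4.5 ? "passes AA" : "fails AA"); // ≈4.48, fails
```

Running a handful of color pairs through a function like this makes it easier to see why a tool such as Tota11y flags so many grey-on-white palettes.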

However, both tools pose platform quandaries: NVDA works only on Windows, and, thus far, it appears that Tota11y only installs on Google Chrome. In turn, these discoveries inspired me to find and activate the Mac accessibility toolset and spend time with VoiceOver, which, while not as nuanced as NVDA, nonetheless reads the textual portions of my StoryMap. All three tools have also revealed that the majority of my Pressbook is screen readable, and the testing process inspired me to double-check my alt-texts and begin composing captions that add to rather than detract from the alt-texts. My hope, then, is that Tota11y can open up more discussions during library sessions about inclusive design, whether or not it’s taking place on UW DH platforms, and that NVDA and VoiceOver prove useful to a range of users for accessibility and design purposes.
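As a rough illustration of the alt-text and caption check I carried out by hand, the sketch below walks a page looking for images with missing alt attributes and for captions that merely duplicate the alt text. It assumes a browser context and is not the code Tota11y or NVDA actually run.

```typescript
// Illustrative alt-text audit, assuming a browser DOM; not Tota11y's or NVDA's code.
function auditImages(doc: Document): void {
  doc.querySelectorAll<HTMLImageElement>("img").forEach((img) => {
    const alt = img.getAttribute("alt");
    if (alt === null) {
      console.warn("Missing alt attribute:", img.src);
    } else if (alt.trim() === "") {
      // An empty alt is legitimate only for purely decorative images.
      console.info("Decorative image (empty alt):", img.src);
    }

    // If the image sits inside a <figure>, compare its caption with the alt text:
    // a caption should add context, not repeat the alt word for word.
    const caption = img.closest("figure")?.querySelector("figcaption");
    if (caption && alt && caption.textContent?.trim() === alt.trim()) {
      console.warn("Caption duplicates alt text:", img.src);
    }
  });
}

auditImages(document);
```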

After testing these tools, I realized that the collection would be more balanced if I could find two apps. To this end, I began searching for assistive technology collections and happened upon Augsburg University’s site. The two apps I’ve found which seem to offer the most potential “help” for instructional session success are Ava, which provides live captioning, and Claro ScanPen, which provides screen-reading for photographs that include written text. Ava is not the only live captioning app available, and a brief web search shows that it’s not the top-rated app of its kind either. But for myself, Elliott, and my LIS 522 colleague who has hearing dyslexia, it’s a revelation.

Using Ava and only my phone’s internal mic, I’ve live captioned conversations in both quiet and noisy areas with unaccented West Coast English speakers and one ELL speaker. As long as the phone is placed close to the speakers, Ava’s captioning is about 90% accurate for unaccented West Coast English. Best of all, one can save the transcripts and review them. The ELL speaker (my husband’s cousin) did not fare as well, though Ava provided some hilariously imaginative translations during a brief conversation we had about AI (Artificial Intelligence): “Date to feed the dog,” “Baby boy 44 degrees,” “Whitney Houston sings to bride down the computer,” and “YouTube ass up.” As my forgiving ELL conversational partner put the matter, the machine can only transcribe what humans feed it. Thus, similar results might ensue with various regional US accents.

Despite this major issue and another I address in the annotations, Ava’s accessibility potential is inspiring: students with or without hearing and/or cognitive disabilities could turn the app on when the librarian wanders by during an instructional session, caption what the librarian has to say, and then re-read the transcript at home. In an ideal, technology glitch-free world, with additional low-cost cords, a Bluetooth mic, and advance planning, a student may be able to harness live captioning via Ava while the librarian is “teaching” the entire class.
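Ava’s own pipeline is proprietary, but the general technique it relies on, streaming interim speech-recognition results into on-screen captions, can be sketched with the browser’s Web Speech API. The sketch below is only an illustration of that loop, not how Ava works; browser support varies (Chrome exposes the API as webkitSpeechRecognition), and the captions element is an assumption.

```typescript
// Illustrative live-captioning loop using the Web Speech API; not Ava's code.
// Assumes a page with <div id="captions"></div> and a browser that supports
// SpeechRecognition (Chrome exposes it with a webkit prefix).
const SpeechRecognitionCtor =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionCtor();
recognition.lang = "en-US";        // accuracy drops sharply for accents the model handles poorly
recognition.continuous = true;     // keep listening rather than stopping after one phrase
recognition.interimResults = true; // show partial captions while the speaker is still talking

const captionBox = document.getElementById("captions") as HTMLElement;

recognition.onresult = (event: any) => {
  let transcript = "";
  for (let i = 0; i < event.results.length; i++) {
    transcript += event.results[i][0].transcript;
  }
  captionBox.textContent = transcript; // replace the caption line with the latest best guess
};

recognition.start();
```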

Claro ScanPen, in contrast, is not as miraculous a technology but may, nonetheless, serve assistive purposes. Though its reading of photographs may be hampered by image clarity, one can easily reprogram the speed and style of the screen-reading voice, a feature that’s worth its weight in gold, particularly for those who may also have hearing loss and those of us just learning to use a screen reader. Moreover, if librarians provide a handout that’s difficult to read, or the student is unable or unwilling to activate NVDA, VoiceOver, or another screen reader, taking pictures and clicking on the text will provide valuable support.

Finally, inspired by A. Hamraie’s (2018) immensely insightful article and my love of cartography, I decided that the collection could benefit from a map of accessible pathways on campus for navigating from the library or classroom to the next site. Consequently, I have included the UW Accessibility Map. This information-laden image offers only superficial mobility information. Nonetheless, it’s a starting point for more detailed, helpful design, and hopefully it can serve as a teaching tool for highlighting the ways in which we can augment our cross-disability thinking. Perhaps in the future, it can be re-coded so screen readers ingest more of its data. Perhaps, in solidarity with Hamraie’s discussion, we can add the following points to the UW map: buildings with headache-inspiring lights, places where harmful scents or smells may coalesce, overtly noisy/cacophonous built environments, underrepresented student groups’ meeting spaces, and all-gender bathrooms (the latter two to reinforce the vital point that students who identify as disabled and/or neurodiverse do not all identify as white and cisgender).
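To suggest what “re-coding” the map for screen readers might look like, here is a hypothetical sketch: if each entrance marker were rendered as its own SVG element, it could carry a role and an aria-label so that NVDA or VoiceOver would announce the building and entrance type instead of skipping the dot entirely. The data shape, element id, and coordinates below are invented for illustration; they are not part of the actual UW map.

```typescript
// Hypothetical sketch of screen-reader-friendly map markers; the data shape,
// element id, and coordinates are invented for illustration only.
interface EntrancePoint {
  building: string;
  kind: "assisted entrance" | "manual entrance" | "multi-level entrance";
  x: number; // map coordinates
  y: number;
}

function renderAccessibleMarker(svg: SVGSVGElement, point: EntrancePoint): void {
  const marker = document.createElementNS("http://www.w3.org/2000/svg", "circle");
  marker.setAttribute("cx", String(point.x));
  marker.setAttribute("cy", String(point.y));
  marker.setAttribute("r", "6");

  // Expose the marker to assistive technology instead of leaving it decorative.
  marker.setAttribute("role", "img");
  marker.setAttribute("aria-label", `${point.building}: ${point.kind}`);
  marker.setAttribute("tabindex", "0"); // reachable by keyboard, not just by mouse

  svg.appendChild(marker);
}

const mapSvg = document.querySelector<SVGSVGElement>("#campus-map");
if (mapSvg) {
  renderAccessibleMarker(mapSvg, {
    building: "Suzzallo Library",
    kind: "assisted entrance",
    x: 120,
    y: 240,
  });
}
```

The same pattern could carry the additional data layers mentioned above (lighting, noise, scent, all-gender bathrooms) simply by extending the label text.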

Gaps and Conclusions

This report is fairly incomplete and undoubtedly overlooks and/or simplifies significant points about the assistive technologies in this collection. In tandem, questions arose after I presented on this project, and, of course, new queries have plagued me over the past few days. Is there a single site on the web that Tota11y believes has sufficient contrast? How well does NVDA read PDFs, particularly in comparison to VoiceOver? How many ways are there to use the Claro ScanPen app? Since Ava has notable imperfections, which other no- to low-cost live captioning app might be more helpful to highlight in this collection? Is the Seattle Access Map a more user-friendly tool for navigating campus than the UW Accessibility Map, since the former can show how steep certain routes are? Due to the ever-present time limitations the quarter system imposes, I have answers to none of these questions, though this project has been nothing short of inspirational, pushing me to continue delving into assistive technologies and to become a passionate advocate for integrating them into library instruction sessions.

As a somewhat uncanny culmination to this project, earlier today (December 6, 2019) the UW Libraries Storytelling Fellows Program sent out an email, notifying WeVideo users that the institutional subscription will end on December 23rd. The email cites glaring examples of platform inaccessibility and indicates that when a report on these matters and accompanying recommendations were presented to WeVideo, staff were “unresponsive or dismissive of . . . requests for accessibility.” As of the moment, “the video digital storytelling workshops offered by the Libraries is on hiatus” as staff search for an accessible alternative. This commitment to accessibility is a commendable step in the proverbial right direction, and one can only hope that it ushers in a new era of equitable, inclusive library instruction and support at UW, informed by critical approaches to universal design and an unfailing commitment to collaborating and co-creating with a range of users.

References

Ellcessor, E. (2018). A Glitch in the Tower: Academia, Disability, and Digital Humanities. In The Routledge Companion to Media Studies and Digital Humanities. Edited by Jentery Sayers, pp. 108–116. New York: Routledge.

Gugenheimer, J., et al. (2017). The Impact of Assistive Technology on Communication Quality Between Deaf and Hearing Individuals, pp. 669–682. Portland, OR: Association for Computing Machinery.

Hamraie, A. (2018). Mapping Access: Digital Humanities, Disability Justice, and Sociospatial Practice. American Quarterly, 70(3), 455–482.

Hendren, S. (2018). All Technology is Assistive: Six Design Rules on Disability. In Making Things and Drawing Boundaries: Experiments in the Digital Humanities. Edited by Jentery Sayers, pp. 202–212. Minneapolis: University of Minnesota Press.

Williams, G. (2012). Disability, Universal Design, and the Digital Humanities. In Debates in the Digital Humanities. Edited by Matthew K. Gold, pp. 202–212. Minneapolis: University of Minnesota Press.

Wu, X. (2010). Universal Design for Learning: A Collaborative Framework for Designing Inclusive Curriculum. Inquiry in Education, 1(2), n.p.

Appendix A: Assistive Technology Annotations

 

Assistive Technologies: In Brief

Ava: Ava is an app which provides 5 hours of free live captioning. It can be used on Android or iOS devices and, thus, downloaded through the Google Play Store or the Apple App Store. The app’s captioning of unaccented English is roughly 90% accurate. Problematically, its accuracy declines with some accented English speakers; however, the app can be set to several other languages, including Russian, Chinese, Farsi, Spanish dialects from a range of countries, and Australian English. Assistively, users can save transcripts and review them later, and Ava also enables individuals to connect with other app users. Unfortunately, the app drains one’s phone battery quickly, and updates appear every 2–4 days. Furthermore, users receive daily emails from the app’s avatars, and its privacy policy, though transparent, nonetheless raises questions about data collection.

Claro ScanPen: Claro ScanPen is a free app which can be used on an Android device, iPad, iPod, or iPhone; like Ava, it can be downloaded through the Apple App Store or the Google Play Store. After the user takes a clear photograph of text and highlights it with a finger, Claro reads it aloud. Importantly, one can easily reprogram the voice speed, voice gender (“man” or “woman”), and voice pitch for personal preference; additionally, no internet connection is required to activate the voice reading. Unfortunately, if an image isn’t clear, the text cannot be fully read, and text appearing on artistic creations (such as posters) may not be accessible to the reader either.

NVDA (NonVisual Desktop Access): NVDA is a multiple-award-winning, free-of-charge, Windows-based screen reader. It downloads easily on a PC and, in this writer’s experience, is ready to use instantly. Very little text, from recent exploration, is unreadable, and it differentiates between links, headings, and written text in a mostly clear manner. Moreover, it has been internationally embraced and is thus available in several languages; perhaps best of all, users who can code are capable of enhancing the program (“Our Story”). The primary difficulty NVDA may raise is that it’s not entirely clear, once the program is downloaded, how to reprogram the voice speed and pitch.

Tota11y: Developed by the Khan Academy, Tota11y is a free accessibility visualization toolkit. It downloads instantly on Google Chrome, and users can then click on the Tota11y icon to test the visual accessibility of websites and web-based platforms, such as WeVideo. To the toolkit’s credit, it’s easy to use, and both the Tota11y site and the icon provide a succinct overview of effective and ineffective visual accessibility basics, including headers, link text, and contrast. Furthermore, its small screen-reading wand allows users to gain a general sense of text readability. Nonetheless, the wand is not fully accurate, so users will also need to employ a screen reader. Additionally, users who can’t code or who have no control over a specific platform or site could feel a bit helpless to amend instances of visual inaccessibility.

VoiceOver: Mac’s built-in screen reader is effective and quickly accessible from the Apple menu. Users will find a range of ways to modify the screen reader’s pitch, speed, and gender, as well as other features, including its braille verbosity and cursor thickness. However, there are a few website details and minor digital project/platform details which it does not read as extensively or in as much depth as NVDA (in this writer’s experience).

UW Accessibility Map: The University of Washington’s accessibility map can be found on the “Access Guide” tab of the UW Facilities page. Through a series of magenta, green, and blue dots, as well as three different line types, it marks assisted, manual, and multi-level entrances, along with stair-free paths and paths wheelchair users may navigate. Additionally, the map highlights elevators and accessible parking. Though the “Access Guide” page claims that the map shows “accessible restrooms,” these do not appear on the legend; they are only visible in the building information (most, but not all, buildings can be highlighted). Unfortunately, the map is not fully screen reader accessible; Tota11y’s screen-reading wand, NVDA, and VoiceOver pick up only the highlighted building points, which may be difficult to find without other contextual information for orientation. Finally, due to the lack of detail, it’s difficult to see how far elevators are from assisted entrances.

 

 

License


Academic Library Instruction Copyright © 2019 by rebwn is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.
