When we think of computer hardware, terms such as motherboards, RAM, and CPUs often come to mind. Yet people with disabilities such as autism spectrum disorder (ASD), dyslexia, and blindness might think of entirely different forms of hardware. The hardware associated with assistive technologies exists to compensate for neurological processing differences. By bridging the gap between the brain and the physical world, it makes commonplace tasks that would otherwise be difficult for disabled individuals easier, resulting in increased independence.
To learn more and share my research about assistive technologies, I attended a symposium at Landmark College in Putney, Vermont. The keynote speaker, Ben Foss, who invented the Intel Reader, discussed his recently published book on dyslexia (3) and the technologies that allowed him to comprehend written materials throughout his time as a law student at Stanford University. The other headline speaker I was able to see, Dr. Mark Hakkinen, discussed his research on assistive technologies for the blind. I presented my own work as a researcher in the Keene State College psychology department with Dr. Lawrence Welkowitz; we are in the process of developing an iPad application called SpeechMatch (6). The app teaches individuals with ASD how to match the patterns and rhythm of speech, because matching another person's speech during conversation is a deficit associated with autism.
During Mr. Foss's lecture he placed a heavy emphasis on the idea that written language is not the only way to communicate. He explained that we as a society develop biases toward certain ways of sharing information. Because of his dyslexia, this bias initially placed him at a disadvantage. Computers helped him overcome it by providing software that could read text aloud. While many already know of software that speaks written passages, more striking was the method in which he used it to read.
Ben explained how using speech recitation software differs from reading an analog medium such as a newspaper. Someone browsing the front page of the Times reads geographically through the sections, selecting only the section they want to learn more about. Someone who is blind and using speech recitation software, however, has to listen to the entire page until they find their desired article.
One might assume the computer should read text aloud at a normal pace, but Ben explained how people learn to listen to passages played back at much faster rates. He demonstrated this to the audience by initially presenting material at the fastest possible rate through his iPhone's reader; no one could understand it. Next he presented it at the slowest rate possible; no one could understand this either. Then he presented the material in progressive increments; this time, through every presentation, we were able to grasp the material even when the speed was again at the fastest possible rate.
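The incremental ramp Ben demonstrated can be sketched in code. The following Python sketch is my own illustration, not his software: it builds an evenly spaced schedule of speaking rates, each of which could be handed to a text-to-speech engine's rate setting on successive passes.

```python
def rate_schedule(slowest_wpm, fastest_wpm, steps):
    """Evenly spaced speaking rates (words per minute), slowest to fastest."""
    if steps < 2:
        return [fastest_wpm]
    span = fastest_wpm - slowest_wpm
    return [round(slowest_wpm + span * i / (steps - 1)) for i in range(steps)]

# A listener trains on the same passage once per rate, ending at full speed.
# The 80/500 wpm endpoints here are illustrative, not Ben's actual settings.
for wpm in rate_schedule(80, 500, 6):
    print(f"Present passage at {wpm} wpm")
    # With a TTS engine this is where the rate would be applied,
    # e.g. set the engine's rate property to `wpm` and speak the text.
```

Each pass stays comprehensible because the jump from the previous rate is small, which is exactly the effect the audience experienced.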
We take for granted the way our own brain naturally processes material. In many ways our sense organs are like hardware wired to relay information to the software of the mind, which produces thoughts, emotions, and actions. In this way our behavior has inputs and outputs, much like a computer.
Dr. Mark Hakkinen expanded on these ideas by teaching us about sonification, the process of using sound to present data. An example is an audio graphing calculator, which announces functions to the user as they do their math. He explained that 3D printing can also help the blind learn science-related concepts by providing models they can feel, such as a 3D-printed model of a molecule. These models are becoming increasingly popular and are available to individuals with access to 3D printers. Websites like diagramcenter.org (5) are developing standards for these assistive models so that they can be consistently downloaded. With time I feel these models will only grow more common.
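As a rough illustration of sonification (my own sketch, not the calculator Dr. Hakkinen described), the core mapping from data to pitch can be as simple as linear interpolation into an audible frequency range, so that rising data is heard as a rising tone:

```python
def sonify(values, f_low=220.0, f_high=880.0):
    """Map each data point to a pitch between f_low and f_high Hz.

    Higher values become higher pitches, so a blind user can hear the
    shape of a curve the way a sighted user would see it on a graph.
    The two-octave A3-A5 range here is an arbitrary illustrative choice.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid dividing by zero for flat data
    return [f_low + (v - lo) / span * (f_high - f_low) for v in values]

# A parabola y = x^2 sonified: the pitch falls toward the vertex at x = 0,
# then rises again, tracing the U shape audibly.
xs = [x / 10 for x in range(-10, 11)]
freqs = sonify([x * x for x in xs])
```

A real audio graphing tool would then synthesize these frequencies as short tones in sequence; the list itself is enough to show the idea.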
Some of the more cutting-edge technologies available to the blind are haptic; haptics refers to one's sense of touch. Dr. Hakkinen explained that while hardware such as the Novint Falcon force-feedback controller was initially developed for gaming, it can be used by the blind to develop a greater comprehension of scientific concepts in chemistry, where they otherwise could not visualize different states of matter. Another technology, piezoelectric motors, provides vibrotactile feedback: small motors vibrate when the user's finger touches an area of a tablet programmed to respond. This can be used to teach the blind geometry, because they are notified when they are touching the area of a shape or a line.
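A hypothetical vibrotactile hit test might look like the sketch below. The circle, coordinates, and `tol` tolerance are illustrative assumptions of mine; on a real tablet the `print` would instead drive the piezo motor while the finger traces the outline.

```python
import math

def near_circle_edge(x, y, cx, cy, r, tol=5.0):
    """True when a touch point lies within `tol` units of a circle's
    outline -- the condition that would trigger the vibration motor."""
    return abs(math.hypot(x - cx, y - cy) - r) <= tol

def handle_touch(x, y):
    # Hypothetical touch handler for a circle centered at (200, 200)
    # with radius 100, in screen units.
    if near_circle_edge(x, y, cx=200, cy=200, r=100):
        print("vibrate")  # on real hardware: fire the piezo motor
```

The same test generalizes to lines and polygons; the user learns the shape by following wherever the vibration persists.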
Electrostatic haptics are even more interesting; they use static electricity to allow an individual touching a tablet's screen to feel the texture of an image. The example Dr. Hakkinen used was a picture of a monkey on an Android tablet that felt like a monkey, its fur feeling different depending on where you touched the image. He explained that this technology is not being developed exclusively for the disabled but will have broader commercial applications in the future. Google Glass was also presented as a way to assist the deaf (4). It has the capability to caption what an individual is saying to the user, a technology being developed at Georgia Tech. Counterintuitively, a blind user of Glass could take advantage of QR codes in public locations instead of braille.
Dr. Hakkinen's key point was that the universality of a technology determines its success. In this sense, the more uses an application has, the more likely it is to be purchased. Content must also be easily transferable between platforms to increase the potential of its respective technology. He asserted that standards must be in place to ensure this success.
While presenting our application, I quickly realized that people are more likely to be interested in an app if they themselves can notice novel applications for it. For example, I met a language teacher who thought our app could be used to help students remember sentences in a foreign language. Another individual, interested in dramatic theater, thought it would be a good way to help actors remember the lines for their role in a production. Both of these people saw a use for our app that I had not anticipated. I would like to expand on Dr. Hakkinen's point by asserting that "universality" is in the eye of the beholder. I feel a potential consumer can be primed to comprehend universality through a physical experience with the technology. Our app shines brightest when someone attempts to use it for himself.
When SpeechMatch launches, the user is prompted to create a profile. Next they are presented with a series of pre-recorded .wav files that contain sentences in happy, sad, and neutral speech tones. The user is able to visualize the sample file as a line that takes an average measure of the sound file's characteristics, such as rhythm, loudness, and pitch. The goal is for the user to match this average pre-recorded line by copying the prosody of the prompting sentence, speaking into the iPad's microphone. The app then analyzes how well the user matched the sample and provides visual feedback, creating a representation of what the user said into the mic. This response also has an average measure of rhythm, loudness, and pitch. By comparing the average lines representing the two spoken sentences, the user is able to identify where they need to improve their prosody.
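SpeechMatch's actual analysis is more involved, but the core idea of scoring how closely a user's attempt tracks a sample can be sketched with a loudness-contour comparison. This is a simplification of mine, not the app's algorithm; the real app also weighs rhythm and pitch.

```python
import math

def frame_rms(samples, frame=1024):
    """Per-frame loudness (RMS) of a mono signal: one value per window,
    giving a coarse loudness contour over time."""
    contour = []
    for i in range(0, len(samples) - frame + 1, frame):
        chunk = samples[i:i + frame]
        contour.append(math.sqrt(sum(s * s for s in chunk) / frame))
    return contour

def match_score(sample, attempt, frame=1024):
    """Crude 0-1 similarity between the prompt's loudness contour and
    the user's attempt; 1.0 means the contours line up exactly."""
    a, b = frame_rms(sample, frame), frame_rms(attempt, frame)
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    err = sum(abs(x - y) for x, y in zip(a[:n], b[:n])) / n
    peak = max(max(a[:n]), max(b[:n])) or 1.0
    return max(0.0, 1.0 - err / peak)
```

Plotting the two contours side by side is essentially the visual feedback described above: wherever the lines diverge is where the user's prosody needs work.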
Accordingly, people were curious whether they would be allowed to modify the app to fit their needs. Both the language teacher and the thespian wanted access to SpeechMatch's program files to insert their own audio data. In that form the app would no longer be only for individuals with ASD, but a novel platform for presenting information.
I am curious as to why the majority of content on the web and in app stores is not open source. Not opening hardware and software to the public is detrimental to human progress. Furthermore, it is paternalistic, assuming the public is incapable of improving technology. In the same way that assistive technology is supposed to foster independence, should not all technologies be open for modification?
I was lucky enough to try out a virtual reality headset called the Oculus Rift. For me it simulated the inside of an Italian villa quite realistically. Yet this hardware is even more amazing now that there is an open-source developer kit available for individuals to create new simulations (1, 8). This could enable us to teach concepts via simulation. What if we could train surgeons how to operate, visualize concepts in physics, or recreate eyewitness accounts of historical events? How would the general population's view of the world change if such technology were readily available?
In the future I feel it will be essential that the public be allowed full access to the development processes of technology. We need to promote accessibility not just for individuals with disabilities but for healthy individuals as well. People need better outlets from which they can learn of emergent technologies, so they can determine practical uses for themselves as opposed to being instructed in how they should "properly" use them. The less restriction placed on use, the better.
Additionally, information about new technologies currently becomes available only to the privileged, who are wealthy and educated. Instead of limiting the spread of knowledge about emerging technologies to private symposiums, Google conferences, and labs, we should find ways of increasing public awareness and computer literacy. According to Moore's law, technology expands at an exponential rate, yet evolution has not enabled us to upgrade our neurological abilities as quickly. In the way assistive technologies bridge gaps between the brain and the external world, so too must we bridge the gap between our abilities and the power of technology.
References
1) Dingman, H. (2014, September 19). Oculus open-sources original Rift developer kit's firmware, schematics, and mechanics. PCWorld. Retrieved October 4, 2014, from http://www.pcworld.com/article/2686562/oculus-open-sources-original-rift-developer-kits-firmware-schematics-and-mechanics.html
3) Foss, B. (2014). The dyslexia empowerment plan: A blueprint for renewing your child's confidence and love of learning. New York: Random House.
4) Georgia Institute of Technology. (2014, October 2). Researchers create software for Google Glass that provides captions for hard-of-hearing users. Retrieved October 4, 2014, from http://www.news.gatech.edu/2014/10/02/researchers-create-software-google-glass-provides-captions-hard-hearing-users
5) Making it easier, cheaper, and faster to create and use accessible digital images. (n.d.). DIAGRAM Center. Retrieved October 4, 2014, from http://diagramcenter.org/
6) Neilsen, E. (2014, June 9). Keene State College professor making strides in autism research. SentinelSource.com. Retrieved October 4, 2014, from http://www.sentinelsource.com/news/local/keene-state-college-professor-making-strides-in-autism-research/article_31fd5102-1c75-5a02-b3fd-2ce05492194d.html
8) Patel, N. (n.d.). OculusVR/RiftDK1. GitHub. Retrieved October 4, 2014, from https://github.com/OculusVR/RiftDK1