John Sprogis
24 min read · May 30, 2019
Want to see the future?

Artificial Intelligence and Special Needs

Artificial Intelligence could be a game changer for people with special needs

The field of technology is continuously developing, and computing power is now part of many aspects of human life. Scientists and researchers can now create computer systems that perform tasks that typically require human intelligence, such as speech recognition, visual perception, translation between languages, and decision-making. This area of technology is called Artificial Intelligence (AI).

Artificial intelligence is intelligence shown by machines, as opposed to the natural intelligence (NI) that humans and other animals exhibit. Artificial intelligence is a branch of computer science that refers to the development of computers that can do things that are normally done by humans, particularly things that are associated with humans functioning intelligently. In other words, artificial intelligence is used when a machine imitates “cognitive” tasks associated with human minds, such as problem solving and learning.

AI can exist either in software form with "intellectual" capacity, such as a virtual avatar, or in a physical form, such as a humanoid robot.

History of Artificial Intelligence

The study of "formal" or mechanical reasoning began with mathematicians and philosophers in ancient times. The study of mathematical logic influenced Alan Turing's theory of computation, which showed that a machine can simulate any possible act of mathematical deduction by shuffling symbols as simple as "0" and "1." This insight underlies the Church-Turing thesis, the hypothesis that digital computers are capable of simulating any process of formal reasoning.

With concurrent breakthroughs in neurobiology, cybernetics, and information theory, researchers theorized that it might be possible to build an electronic brain. Turing suggested that a machine can be considered "intelligent" if a human cannot distinguish its responses from those of another human. In 1943, Walter Pitts and Warren Sturgis McCulloch presented a formal design for Turing-complete "artificial neurons," now widely considered the first work on AI.
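As a rough illustration of the idea (a minimal sketch, not the authors' original 1943 notation), a McCulloch-Pitts neuron fires only when the weighted sum of its binary inputs reaches a threshold, and wiring a few such units together already yields logic gates:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: output 1 iff the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Logical AND: both binary inputs must be active to reach the threshold of 2.
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

# Logical OR: a single active input is enough to reach the threshold of 1.
def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)
```

Networks of such units can, in principle, compute anything a digital computer can, which is what made them Turing-complete.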

In 1956, the field of AI research was founded at a workshop at Dartmouth College. It was attended by Arthur Samuel, Marvin Minsky, John McCarthy, Herbert Simon, and Allen Newell, who became the founders of AI research. Along with their students, they developed programs that could converse in English, prove logical theorems, learn checkers strategies, and solve word problems in algebra.

After several waves of optimism, the field of AI suffered disappointment and loss of funding, but there were also periods of renewed funding, new approaches, and success.

At present, AI techniques continue to advance in the wake of concurrent developments in computing power, theoretical understanding, and the availability of large amounts of data. AI has become a valuable part of the field of technology, helping to solve many problems in computer science.

The Power of Artificial Intelligence

AI has been a part of human life for many years. The technology is responsible for photo-library searches for a certain person or location, suggested responses in Gmail, and auto-captions on Facebook. It makes services like speech-to-text possible, and it keeps improving at understanding different voices.

It brought innovations such as Alexa, Cortana, Siri, and other voice-recognition services, which are increasingly popular. In 2017, Amazon reported record holiday-season sales of its Alexa-enabled devices, selling millions of them. According to Google, since October 2017 the company has been selling one Google Home (its smart speaker and voice assistant) every second.

The trend continues into 2018. AI has also made its way into pop culture, which shows how important it has become in the daily life of people. It makes voice assistant-enabled services so easy to use. Some people are also seeing the potential of AI in culture and entertainment.

AI provides users with exceptional levels of convenience and connectivity, and it can also save energy. There are smart devices that use AI to detect whether appliances should stay on, adjusting their settings or turning them off as necessary.

Smart technology also uses AI to provide safety. Smart locks determine who can enter the house when the owner is not at home. Smart smoke alarms are capable of pointing people to the spot of a potential fire. Devices like smart security cameras can use voice technology and AI to help monitor the house remotely.

Whether in robot, speaker, or vehicle form, AI has started to revolutionize people's lives, including accessibility for individuals with special needs and the aging population. Caring for people with disabilities and the elderly requires dedication and time. However, because of the connectivity and capabilities of AI, providing support for them will not always require proximity.

Artificial Intelligence in Special Educational Needs

AI in Education

In the educational field, there is a rising trend toward improving the accessibility of education. Many studies currently deal with the application of computers in education, with the aim of developing environments that support the learning process in various settings. Decades ago, computer scientists and researchers began studying the application of artificial-intelligence techniques in education.

In the years since Artificial Intelligence in Education (AIE) developed into a distinct academic discipline, the nature of the underlying technology has changed. AI techniques in the educational sector are seen as a way to increase positive interactive experiences and create effective learning environments for students. AI applications in education include intelligent tutoring, autonomous agents, natural language processing, knowledge representation, and more.

AI in Special Education

The value of artificial intelligence in education has been recognized for many years. One research group within AI in education deals with the connection between AI and special education. Little by little, AI applications have been used to enhance the lives of people with special educational needs.

The subject of "Special Educational Needs" covers numerous difficulties that can cause problems throughout the learning process. Many terms have been coined in this area, and experts in the discipline have not completely come to an agreement; terms such as "learning disabilities" and "learning difficulties" are also commonly used. Children with special educational needs do not all progress at the same rate, and every child is unique, with different needs and strengths. Because of this, it is necessary to understand the environments in which they interact, in order to assess and develop technology that effectively supports learning across multiple subjects, contexts, and periods of time.

The latest developments in AI and special education allow collaborative interactive environments to be built that make life easier for people with special educational needs and those around them. The work presented here concerns assessment and intervention tools for several learning disabilities. These can be used by special educators, teachers, therapists, psychologists, and even parents.

Because learning difficulties are often subtle, comorbid in their symptoms, and extremely similar to one another, AI assessment tools can improve the ability of parents and teachers to evaluate a child. They can use these tools to examine the child's academic level and, if necessary, make the right choice of involving a specialist whenever a difficulty appears. Children with special educational needs should have AI training interventions as part of their education, as such interventions can combine the child's freedom of action with more precise guidance and monitoring.

Artificial Intelligence in Special Education for Physical or Sensory Impairments

Students with more complex or long-term physical impairments should receive educational services that uphold their well-being and independence and help them lead the most fulfilling life possible. Most sensory and physical impairments are diagnosed by doctors within the first years of a child's life. This is why artificial-intelligence applications that involve teachers and parents are typically aimed at training students rather than diagnosing their needs.

In 2003, Voula C. Georgopoulos, Chrysostomos D. Stylios, and Georgia A. Malandraki presented a fuzzy cognitive map, a soft-computing model, as an intelligent method for the differential diagnosis of Specific Language Impairment (SLI). A fuzzy cognitive map combines fuzzy logic and cognitive mapping, using concepts linked by weighted causal relationships to model and describe complex systems. SLI is difficult to diagnose because its signs and symptoms resemble those of other disorders; with the help of this tool, specialists can differentiate SLI from autism and dyslexia. The model was tested on four clinical cases and implemented successfully.
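To give a sense of the mechanism (a toy sketch, not the authors' actual diagnostic model), a fuzzy cognitive map can be simulated by repeatedly spreading activation along weighted causal edges and squashing each concept's value into [0, 1] until the state settles. The weight matrix below is made up for illustration:

```python
import math

def sigmoid(x, lam=1.0):
    """Squashing function that keeps concept activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(activations, weights):
    """One FCM update: each concept combines its own value with the
    weighted influence of every other concept (weights[j][i] = effect of j on i)."""
    n = len(activations)
    new = []
    for i in range(n):
        influence = sum(weights[j][i] * activations[j] for j in range(n) if j != i)
        new.append(sigmoid(activations[i] + influence))
    return new

def fcm_run(activations, weights, steps=20, tol=1e-4):
    """Iterate until the activations stop changing (or a step limit is hit)."""
    for _ in range(steps):
        nxt = fcm_step(activations, weights)
        if max(abs(a - b) for a, b in zip(nxt, activations)) < tol:
            return nxt
        activations = nxt
    return activations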

Also in 2003, Maria Doina Schipor, O. A. Schipor, and Stefan Gheorghe Pentiuc used a fuzzy expert system to design a Computer-Based Speech Therapy (CBST) system to help students with hearing and speech impairments.

This clinical tool aims to propose optimal therapeutic actions for each student based on the information collected. They created an improved CBST system named LOGOMON (Logopedics Monitor). The fuzzy expert system was based on forward chaining, following the classical expert-system architecture. The expert system stores each child's precise progress and evolution and adapts the exercises to their current level.
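The forward-chaining idea can be sketched in a few lines: keep firing any rule whose conditions are all satisfied by the known facts, adding its conclusion, until nothing new can be derived. The rules below are hypothetical stand-ins, not LOGOMON's actual knowledge base:

```python
def forward_chain(facts, rules):
    """Forward chaining: repeatedly fire any rule whose antecedents are all
    known, adding its consequent, until no rule produces a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)
                changed = True
    return facts

# Hypothetical therapy-selection rules, purely for illustration.
RULES = [
    ({"mispronounces_r", "age_over_5"}, "target_phoneme_r"),
    ({"target_phoneme_r"}, "exercise_rolling_r"),
]
```

Feeding the child's recorded progress in as facts would let such a system chain through its rules to propose the next exercise, which is the role the expert system plays in LOGOMON.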

The system was validated in a three-month experiment involving two groups of children from the Regional Speech Therapy Center (RSTC) in Suceava, Romania. Both groups used LOGOMON, but for the first group the expert system was deactivated, so every therapeutic decision was made by the speech therapist. The second group used LOGOMON with its inference facilities enabled, so both the expert system and the speech therapist provided therapeutic decisions. The results for the two groups did not differ significantly, but using the expert system had advantages, such as explainable results, predictability, and more therapy time.

In 2008, John Pavlopoulos, Athanasios S. Drigas, John Vrettaros, and George Vouros developed an assessment system for learners using a neural-network approach optimized with genetic programming. The method aims to evaluate a user's answers to single- and multiple-answer questions within an e-learning environment.

It evaluates five areas of learning: reading, writing, spelling/vocabulary, grammar/sentence structure, and letter recognition and alphabetical order. Using a Genetic Programming Neural Network (GPNN) as an e-learning method is effective for learners who have difficulties in these areas, but it can also be appropriate for people with sensory or physical impairments. The approach successfully evaluated users' answers, and generalizing the system could later lead to an intelligent e-tutor.

Artificial Intelligence Innovations for People with Speech Impairments

Voiceitt — AI Speech Recognition Technology for People with Speech Impairments

Voice controls and speech recognition were created for standard speech. They are transforming industries and human-computer interfaces, and voice systems are being added to business applications, industrial robotics, medical devices, and vehicles. The problem is that they leave out people with speech impairments and, sometimes, people with strong accents.

In 2017, a startup based in Tel Aviv was developing an AI-powered speech-recognition technology called Voiceitt to translate not easily intelligible speech into clear words. It was co-founded by Danny Weissberg and Stas Tiomkin. Voiceitt's core mobile application, a hands-free voice-recognition app for real-time communication, is now in closed beta testing. The technology is designed to integrate into smart speakers, smart homes, and Augmentative and Alternative Communication (AAC) devices.

When the user opens the Voiceitt app, it asks them to compose short, useful sentences, such as "Turn off the lights" or "I'm thirsty," and then read them out loud. The app records these phrases and starts to learn the speaker's particular pronunciation. If the user cannot type words into the app, a caregiver can do it for them.

After a brief training period, the Voiceitt app turns the user's statements into normalized speech. The output is produced instantly, in the form of text or audio messages, which voice-controlled devices and apps can easily understand. Voiceitt also aims to help people with speech impairments communicate face-to-face with other people.

Weissberg said that the idea for Voiceitt came about when his grandmother had a stroke that left her with a speech impairment. People do not put much value on speaking as a form of communication, he said, but losing it is terrible and one of the most challenging aspects of stroke recovery. He did not think about starting a company right away, but he began talking with occupational and speech therapists, studying everything he could, and finding out whether there was a market need.

The company's pilot customers are schools, hospitals, and people who have speech impairments due to health conditions such as autism, Down syndrome, cerebral palsy, and Parkinson's disease, or who are recovering from a stroke or a traumatic brain injury.

Weissberg also said that Voiceitt can be an extension of accessibility to speech recognition for anybody, including companies like Amazon, Google, IBM, Microsoft, or Apple. He would love Voiceitt to be a major player in the industry and work with every major platform.

AI App Accessibility Tools for the Blind

Aipoly — AI App for the Blind

A major focus for innovation that has gained attention recently is how technological inventions can help people with disabilities. Among the inventions that aim to make life more accessible for disabled people is an app called Aipoly, which uses AI to "show" blind people their surroundings.

For centuries, technologies have been created to help people with disabilities. Some examples are hearing aids for the deaf (popular in the 1800s as ear trumpets before developing into cochlear implants in the 1900s) and Braille for the blind (invented by Louis Braille in the 1820s). It is not surprising that human inventions target individuals with different capabilities. However, the industry's current tools, such as smartphones, 3D printing, and AI systems, are taking the relationship between technological innovation and the needs of disabled people to new heights.

Aipoly launched in 2015 and gained attention after winning an Innovation Award at the 2017 Consumer Technology Association Awards. People have been talking about it for years because its premise is simple but built on extremely complex technology: after downloading the app, the user points their smartphone at something, and the app identifies the object.

Although this may sound like an easy task, it requires an AI-powered system that uses the same machine-learning technologies as robotics and androids. Human beings naturally recognize objects through visual, aural, and other means; people easily learn how new things look, remember their names, and categorize things that look similar. Creating the same capability in computers is extremely difficult. Recognition technology is now common in platforms like Facebook's photo-tagging service, which tries to pick out faces from photos.
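One of the simplest ways a machine can "categorize things that look similar" is nearest-neighbor matching over feature vectors, a toy stand-in for the deep networks apps like Aipoly actually use. The feature vectors and labels below are made up for illustration:

```python
import math

def nearest_label(query, examples):
    """Classify a feature vector by the label of its closest stored example
    (Euclidean distance): similar-looking things get the same name."""
    best_label, best_dist = None, float("inf")
    for vector, label in examples:
        dist = math.dist(query, vector)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy "image features" (imagine compact color/shape descriptors).
EXAMPLES = [
    ((0.9, 0.1, 0.2), "apple"),
    ((0.2, 0.8, 0.1), "leaf"),
    ((0.8, 0.7, 0.1), "banana"),
]
```

Real systems replace the hand-made vectors with features learned by a convolutional neural network, but the core step of mapping a new image to the most similar known category is the same.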

The Aipoly app, however, aims much higher: it wants to identify objects the moment the smartphone "sees" them and tell the user what they are, wherever they are.

This can be really tricky: things can look different under different lighting, when their position changes, or when they move; many objects that look different share the same classification; and picking objects out of visually confusing backgrounds can be difficult even for people with clear vision. Aipoly therefore uses various recognition technologies focused on identifying people, foods, household objects, store packaging, kinds of plants, and more. These systems also let the user add custom objects, so the app can recognize particular things or products that are relevant to them.

Aipoly was created by Alberto Rizzoli and Marita Cheng during their time at Singularity University, a benefit corporation in Silicon Valley. They introduced it in 2015 with a massive "convolutional neural network" intended to identify the objects in each picture and how they relate to one another. Since then, the app has become more accurate, as users can tell it when it makes a mistake, which helps improve its database.

Systems like this are increasingly popular. Fast Company reported in 2016 that this kind of technology is on the rise in handheld devices: both Aipoly and recent iPhone updates use visual-recognition technology to identify people, places, and more in the pictures taken by the phone. For audio, this kind of technology has been around for some time; an app called Shazam can identify songs and may also pull up the lyrics. However, AI technology built specifically for the blind could be life-changing, as it helps them navigate their surroundings without help from other people.

However, the app still has limits. For example, many blind people already navigate well using touch, so, although not yet completely explored, its real benefit to them may be helping them differentiate things that feel the same. Moreover, the app may not react immediately, so the user has to wait briefly for it to recognize and distinguish the object.

OrCam MyEye — AI App for the Visually Impaired

Recognizing the potential transformative effects of AI for the visually impaired, OrCam, a company based in Israel, aims to use AI technology to improve the lives of blind and visually impaired people. MyEye, OrCam's flagship product, was invented by Ziv Aviram and Amnon Shashua, co-founders of Mobileye, a company that spearheads the development of vision technology that helps drivers avoid collisions. Using advanced AI technology, MyEye helps visually impaired people read words and text, identify faces, differentiate between products and brands, and identify money by its denomination.

According to Gene Gurevich, Director of Policy and Business Development, OrCam aims to improve the lives of blind people, visually impaired people, and those with a reading disability through advanced artificial-vision technology. MyEye is a small wearable device that can read printed text from any source, such as newspapers, books, food labels, street signs, and restaurant menus, as well as digital text on computers, TV screens, and smartphones.

The facial-recognition system identifies the faces of people previously recorded in the device's memory; when a recognized person is in MyEye's field of vision, their name is announced. MyEye has two parts: an audio piece with a small camera that can be attached to almost any eyeglass frame, and a base unit about the size of a smartphone that functions as the "brain" of MyEye. The user can "read" text by clicking MyEye's trigger button or by simply pointing at the text with a finger; it is the only assistive-technology tool available that can be activated by a pointing gesture. The camera captures the information, the base unit processes it, and the device discreetly relays it to the user through audio.

Using MyEye, people who have difficulty reading can now read books or their own email at any time and in any place. The facial-recognition technology enables visually impaired and blind people to be more comfortable and active in social settings: they will know when a family member, friend, or coworker is nearby because MyEye can identify them whenever they are in view. The device offers a degree of independence that the user had lost or never had.

Gurevich also said that the reception of the product has been great. Based on user feedback, the functionality of OrCam MyEye is extremely empowering, and many users say the device has changed their lives. The positive feedback the company has received includes a mother who can now read a book to her children, an elderly person who can now read their mail without depending on others, and a veteran who can work more efficiently in their workplace. According to Gurevich, these inspiring stories motivate the company to work harder to improve the product and reach more people.

Currently, there are thousands of satisfied users in the United States and all over the world. MyEye is also available in multiple languages, such as English, German, Italian, Spanish, Hebrew, and French.

AI is advancing rapidly, and many people use the technology to deal with various challenges. Along with other technological inventions, AI is likely to provide major benefits to society. OrCam is focused on creating more solutions that help people lead more productive lives, including those with disabilities and other special needs.

AI Inventions for Accessibility for People with Sensory or Physical Impairments

One of the unique characteristics of cognitive systems and artificial intelligence is that they can engage with human beings and the rest of the world in human-like ways. Breakthroughs in machine learning are quickly improving the ability of cognitive systems to hear, see, and interact with humans through gesture and natural language. These advances also make them more capable of supporting the aging population and people with disabilities.

Waymo — Google’s Self-driving Car

This car can serve as a means of safe, independent transportation for the visually impaired. Moreover, some people have mental or physical disabilities that prevent them from driving a motor vehicle, so this type of car can benefit them as well.

Google's self-driving car project was originally led by Sebastian Thrun, co-inventor of Google Street View, former professor at Stanford University, and former director of the Stanford Artificial Intelligence Laboratory (SAIL). Thrun had previously led the Stanford team whose robotic vehicle, Stanley, won the 2005 DARPA Grand Challenge. At Google, he and a team of engineers built self-driving cars that combine artificial intelligence with information from Google Street View, gathered through numerous cameras and sensors mounted on the car.

In May 2014, Google revealed a new prototype of its self-driving vehicle, which had no steering wheel, brake pedal, or gas pedal and was 100% autonomous. In December of the same year, Google unveiled a fully functioning prototype.

Tests of the self-driving car on public roads began in 2015, including the world's first fully driverless ride on public roads, with Steve Mahan, a blind man, as the sole passenger. Mahan said that 95% of his vision is gone and that he is well past legally blind. The ride took place in Austin, Texas, arranged by Waymo's principal engineer, Nathaniel Fairfield; the car had no test driver and no police escort.

The unit changed its name to Waymo and became a separate division in December 2016. The name Waymo came from the mission of the company: “a new way forward in mobility.”

Google's Waymo is just one of the AI-powered self-driving cars that people with special needs can benefit from. Such cars can help visually impaired people and others with disabilities get around: AI can determine the passenger's location and take them safely to where they need to be.

DynaVox EyeMax System

The Dynavox EyeMax system gives stroke survivors, people with cerebral palsy or paralysis, and others with limited or no dexterity (fine motor skills) the ability to use their eyes to participate in spoken communication. It uses a sophisticated eye-tracking system that lets users input words through an on-screen keyboard; the EyeMax's text-to-speech mechanism then turns those words into spoken language.

Apart from the on-screen keyboard, the device also includes language software called InterACCt, which provides a large set of pre-defined words and phrases. These can be picked from lists, pictures, and scenes, which makes the EyeMax accessible to mentally disabled people and young children who may not be able to comprehend written language.
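Eye-typing interfaces like this one commonly trigger a key press through dwell-time selection: a key is "pressed" once the gaze has rested on it long enough. Here is a minimal sketch of that trigger logic with made-up timing values; Dynavox's actual implementation is not public in this detail:

```python
def dwell_select(gaze_samples, dwell_ms=800, sample_ms=100):
    """Turn a stream of per-sample gaze targets (key names) into typed keys.
    A key is selected once the gaze rests on it for `dwell_ms` milliseconds."""
    needed = dwell_ms // sample_ms   # consecutive samples required to select
    current, run, typed = None, 0, []
    for key in gaze_samples:
        if key == current:
            run += 1
        else:
            current, run = key, 1
        if run == needed:
            typed.append(key)
            run = 0                  # a repeat requires a fresh full dwell
    return typed
```

The dwell threshold is the key design trade-off: too short and stray glances type unwanted letters (the "Midas touch" problem), too long and typing becomes exhausting.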

Artificial Intelligence for People with Autism Spectrum Disorder

Autism Spectrum Disorder (ASD) is a range of conditions classified as neurodevelopmental disorders. People diagnosed with ASD exhibit two types of symptoms: difficulties with language, communication, and social skills; and restricted, repetitive patterns of behavior, activities, or interests.

Children with ASD can find it extremely challenging to focus on their learning material and interact with others. However, with many years of research, AI tools appear to be helpful for them to overcome these challenges.

TecO — AI Robot for ASD

The most common problems for children with ASD are expressing their emotions and maintaining eye contact. It is also difficult to establish a relationship with them, which is often the biggest barrier to treating their symptoms. To deal with these problems, researchers at Tecnologico de Monterrey in Nuevo Leon, Mexico, have created a robot with artificial intelligence called TecO. It stands 50 cm tall, and its face and body resemble a cartoon bear. TecO records signals from the child and translates them into information that a psychologist or neurologist evaluates later.

The robot aims to make the child notice its presence and make eye contact without increasing the child's anxiety level. Because the robot is emotionless, static, and predictable, an autistic child finds it easier to establish a relationship with it. TecO observes the child, and if the child begins to lose interest, the robot can move or make a sound to win their attention back. Cameras in the robot record how many times the child makes eye contact, providing a quantitative measure of the child's development. According to the researchers, improvements can be seen in just a few months.
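Counting eye-contact events from such camera data can be as simple as counting sufficiently long runs of "looking at the robot" frames; this is a toy sketch of the measurement idea, not TecO's actual pipeline:

```python
def count_contact_events(frames, min_frames=3):
    """Count eye-contact events in a per-frame boolean stream.
    A run of at least `min_frames` consecutive True frames counts as one
    event, so single-frame detector blips are ignored."""
    events, run = 0, 0
    for looking in frames:
        if looking:
            run += 1
            if run == min_frames:   # the run just became long enough
                events += 1
        else:
            run = 0                 # gaze broke off; start a new run
    return events
```

Tracking this count session after session is what turns the robot's observations into a quantitative progress measure.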

Bandit, a similar robot, can move and alter its motion in response to the child's behavior. Another, named Darwin-OP2, can dance and play soccer. Both of these AI robots are in development. Such robots make it possible to record every session with an autistic child and track their progress. Robots are not famous for being friendly, but there is growing evidence that children with ASD respond more naturally to robots than to humans, as robotic machines are more predictable than people.

In the future, these humanoid robots are expected to function as both playmates and therapists and to help children with ASD become more involved in social activities. Beyond that, robotic machines open the possibility of constant data collection, which is necessary to track a child's development, and of repeated social engagements, which help ensure that learned social skills last.

Autism Glass

Another common challenge autistic people face is identifying social cues. Because of this, a Stanford University graduate student named Catalin Voss is building a device based on Google Glass (smart glasses) equipped with AI software that helps children with ASD learn to read people's facial emotions. Voss named the invention Autism Glass, and in 2016 it won the $15,000 Lemelson-MIT Student Prize in the "Cure it!" category, a contest open to top undergraduate and graduate student inventors in health technology.

When a person wears the Autism Glass, the AI software reads the facial expression of whoever is in front of them. A heads-up display in the smart glasses projects the matching emotion under that person's face, so the wearer just has to read it to identify how the other person feels.
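Choosing what to project on the display typically comes down to taking the emotion classifier's top-scoring label, and showing nothing when confidence is low so the wearer is not fed a guess. This is a hypothetical sketch of that last step, not the actual Autism Glass code:

```python
def display_label(scores, min_confidence=0.5):
    """Given classifier scores per emotion (e.g. from a facial-expression
    model), return the label to project on the heads-up display, or None
    when no emotion is confident enough to show."""
    label = max(scores, key=scores.get)          # top-scoring emotion
    return label if scores[label] >= min_confidence else None
```

In practice a system like this would also smooth the label over several video frames so the display does not flicker between emotions.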

Voss is originally from Heidelberg, Germany, and had already invented and sold several apps before coming to the US at the age of 15. As a freshman at Stanford, he worked with his classmate Jonathan Yan to create Sension, a facial-recognition technology that later became part of the Autism Glass.

At present, Voss has paired the glasses with a smartphone app that records the emotions and responses the child sees. A therapist or parent can later go over the recording with the child to evaluate it and help build their emotional intelligence. Autism Glass lets a person learn to identify emotions and attempt to act on them accordingly. The tool aims to lessen the anxiety that people with ASD typically experience in social interactions.

According to Scientific American, Voss wants future versions of the AI software to be smart enough to recognize a wider variety of cues.

Voss initially launched Sension in 2013, and the biggest challenge he faced was that children using Autism Glass learned only to comprehend emotions, not how to respond to them, so therapists or parents needed to intervene. Eventually, Voss resolved this by providing children with sample conversations they could learn from.

This is an ongoing issue that others in the field are also trying to resolve. Ned Sahin, a cognitive scientist and the founder of Brain Power LLC, has designed his own Google Glass-based AI software to improve the social skills of people with ASD.

If the trend continues, autism therapy in the future could involve something as simple as giving children smart glasses through which to view the world.

Autism spectrum disorder can be linked to intellectual disability. Nevertheless, approximately 40% of autistic children have average or above-average intellectual capacities, and some people with ASD excel in math, visual skills, and music. It is therefore important to know the skills these children have, to help them develop those skills further, and to give them the chance to live as independently as possible.

AI-Powered Apps for ASD

A company named Identifor has recently developed a pair of apps. One is Champion, a new AI-powered virtual assistant featuring an avatar named Abby, which helps autistic people organize their lives when family or relatives cannot be with them. Abby uses AI to observe the user's habits and keep their school, work, and social life on track.

The second app, Identifor Games, provides a variety of games and helps recognize the players' skills. The results of the games can be used to identify children's interests and abilities, serving as a first step toward focusing their attention on more systematic progress.

AI tools offer promising benefits. If the tools described above continue to be developed alongside new AI-powered tools, they can help address various symptoms of ASD and other educational needs. The data these AI tools collect during sessions with children will also give the scientific community a large amount of valuable data for research into autistic behaviors and developmental disorders.

When AI tools are used together with regular therapy, they can support parents, teachers, and psychologists in helping autistic children become more comfortable in their environment: expressing their emotions, interacting with the people around them, and discovering the talents they possess.

Wearable AI Social Coach to Lessen Challenges of Autism Spectrum Disorder

In many cases, wearable devices are dismissed as simply a gimmick. However, wearables have proven useful for predicting illness, tracking activity, and even giving advice on how to stay healthy, and many inventors want to expand their role in people's daily lives. Wearables can now also be used to help people overcome social challenges: in 2017, a team of researchers from MIT built a new AI smartwatch that aims to be a "social coach."

This invention adds to society's growing awareness of ASD. The aim is not only to treat such disorders but also to create commercial products and social infrastructure for people with ASD and other social disorders. The device is meant to help people who have difficulty making conversation navigate it with ease, providing assistance in real time. The team conducted a study whose results suggest that using technology to identify the tone of a conversation in real time is almost within reach, a promising prospect for people experiencing anxiety, symptoms of ASD, and other conditions.

Tuka Alhanai and Mohammad Ghassemi developed the system and co-wrote the related study. They have also created an algorithm called Maven, which matches people who share similar interests.

Alhanai said that after a conversation, the user can rewind it and review their mood throughout. Ghassemi and Alhanai partnered with the Samsung Strategy and Innovation Center to make the device possible. The work focused mostly on using both audio data and physical feedback to train the wearable AI to recognize and analyze when a conversation's tone deviates.

In the study, participants told either a happy or a sad story while the AI device, attached to an experimental Samsung Simband (a fitness tracker), measured the wearer's physical responses, such as blood pressure, temperature, and heart rate. It also recorded text and audio transcripts to capture elements like pitch, vocabulary, and energy. From this data, the device identifies the tone of the conversation and the emotional state of the wearer and the people around them with 83% accuracy.

The researchers created two algorithms to evaluate the data collected from 31 conversations: one classifies each conversation as happy or sad overall, and the other assigns a "sentiment score" of positive, negative, or neutral to every five-second segment of the conversation.
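The segment-level scoring described above can be sketched in a few lines. This is a toy illustration, not the MIT team's actual system: the feature values, thresholds, and the idea of a single per-second "affect" number are invented for demonstration, whereas the real device derives its features from audio, text, and physiological signals and learns its classifier from labeled data.

```python
# Toy sketch of five-second sentiment scoring (hypothetical values throughout).

def segment(stream, window=5):
    """Split a per-second feature stream into five-second segments."""
    return [stream[i:i + window] for i in range(0, len(stream), window)]

def label(score, pos=0.25, neg=-0.25):
    """Map a segment's mean score to the study's three sentiment classes."""
    if score > pos:
        return "positive"
    if score < neg:
        return "negative"
    return "neutral"

# Hypothetical 15-second conversation: upbeat start, flat middle, downbeat end.
stream = [0.4, 0.3, 0.5, 0.2, 0.6,
          -0.1, 0.0, -0.2, 0.1, 0.0,
          -0.5, -0.4, -0.6, -0.3, -0.5]

labels = [label(sum(seg) / len(seg)) for seg in segment(stream)]
print(labels)  # ['positive', 'neutral', 'negative']
```

Averaging within fixed windows and thresholding the result is the simplest possible scorer; it stands in here for the neural classifier the researchers actually trained.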

According to Alhanai, their new method of feeding information into the AI's neural network, organizing that information at different layers of the network rather than supplying everything at the network's base as conventional methods do, turned out to be more effective. For example, the emotion conveyed in the text transcript is more abstract than the raw accelerometer data, so the two are handled at different levels.

The wearable social coach is not yet commercially available, but Ghassemi and Alhanai said their goal is to make the device compatible with more popular devices such as the Apple Watch.

Humanizing Technology

Asperger’s syndrome and other autism spectrum disorders are neurological disorders that affect language, socialization, and sensory sensitivity.

People with ASD often struggle to interpret non-verbal cues and behaviors, such as body language, facial expressions, and eye contact. Autistic people experience emotions differently from people without ASD and can have difficulty understanding emotions and abstract concepts, which in turn creates difficulties in social interaction and communication.

The AI social coach can help autistic people and others with social impairments cope with their condition by assisting them in identifying social cues.

However, for other users, this wearable AI device may have a different effect.

The world is more connected now than ever, and social interactions are becoming increasingly digitized. On the surface, wearables and other smart devices promote and facilitate social interaction, but they may also have a negative effect on social relationships.

Dependence is one issue that may arise, as the wearer may get into the habit of consulting the AI social coach whenever they interact with someone. Even so, the potential boost to users' social capabilities is the device's biggest benefit.

Conclusion

Artificial intelligence technology has a lot to offer in accessibility for many kinds of disabilities and special needs. Many of these innovations come as miniature or handheld devices, facial and voice recognition services, mobility devices, and so on. Many inventors are determined to help people with special needs interact with these AI devices to improve their daily lives.

AI-powered devices are becoming much easier to use, which helps them reach wider audiences as more and more customers, especially those with special needs, prefer products with accessible design. These devices are allowing people with special needs to take control of their lives and giving them the independence they want.

Developing new kinds of AI accessibility tools is also important to keep people with special needs from being left behind as the world relies increasingly on communication through mobile devices and computers.