
Ok Google: How Are People with Down Syndrome Helping Improve Voice Assistants?

Getting Voice Assistants to Understand Voice Patterns of Those with Down Syndrome

Voice assistants such as Alexa and Siri have become one of the most widely used applications of artificial intelligence. But when a voice assistant cannot understand a spoken command because of differences in speech, it is more of a hassle than a help. That is a problem people with Down syndrome face every day. Fortunately, help is on the way: an effort is now under way to teach voice assistants to understand the speech patterns of people with Down syndrome.

Google is partnering with the Canadian Down Syndrome Society (CDSS) – a nonprofit organization providing information, advocacy, and education about Down syndrome – to collect voice samples from adults with this disability and use them to train its speech recognition algorithm to better decipher their unique speech patterns.

Speech Impairments in Those with Down Syndrome

According to the CDSS, speech is often altered in people with Down syndrome because of variances in their facial-skeletal and muscular systems, which can cause verbal apraxia. Fortunately, in a small pilot program, Google and the CDSS found enough similarities in the speech of people with Down syndrome to effectively train voice assistant technology to recognize their speech patterns.

“With the help of the Canadian Down Syndrome Society, we were able to sample a small group to test whether there were enough patterns in the speech of people with Down syndrome for our algorithm to learn and adapt,” said Julie Cattiau, product manager at Google. “It’s exciting to see the success of that test and move into the next phase of collecting voice samples that represent the vocal diversity of the community. The more people who participate, the more likely it is that Google will eventually be able to improve speech recognition for everyone.”

Goals and Mission of Project Understood

Participants in the effort to get voice assistants to understand the voice patterns of individuals with Down syndrome, known as “Project Understood,” are asked to record themselves speaking 1,700 simple phrases, such as “strawberry jam is sweet.” Google will then use this data to improve its speech recognition models.
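Google has not published the training pipeline behind Project Understood, but the general technique it describes, fine-tuning a pretrained speech recognition model on new pairs of recordings and transcripts, can be sketched. The example below is a minimal illustration using the open-source wav2vec 2.0 model from Hugging Face as a stand-in; the model name, the placeholder audio, and the training loop are assumptions for illustration, not Google's actual system.

```python
# A minimal sketch, NOT Google's pipeline: Project Understood's training code
# is not public. This shows the general idea of fine-tuning a pretrained
# speech recognizer on (recording, transcript) pairs, using the open-source
# wav2vec 2.0 model as a stand-in.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Hypothetical participant data: 16 kHz waveforms paired with the phrase read.
# In Project Understood, each participant records roughly 1,700 such phrases.
samples = [
    (torch.randn(16000 * 2), "STRAWBERRY JAM IS SWEET"),  # placeholder audio
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()

for waveform, transcript in samples:
    # Convert raw audio to model inputs and the transcript to label IDs.
    inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
    labels = processor(text=transcript, return_tensors="pt").input_ids
    # The model computes a CTC loss against the reference transcript.
    loss = model(inputs.input_values, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In a real system, recordings from hundreds of speakers would be batched, cleaned, and validated before training, but the loop above captures the core adaptation step the project relies on.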

Given the increasing role of technology in everyday life, the CDSS hopes to make a significant impact by collecting voice samples from at least 500 people with Down syndrome.

“For most people, voice technology simply makes life a little easier,” said Laura LaChance of the CDSS. “For people with Down syndrome, it has the potential for creating greater independence. From daily reminders to keeping contact with loved ones and accessing directions, voice technology can help facilitate infinite access to tools and learnings that could lead to enriched lives.”

Project Understood is associated with Google’s Project Euphonia, a broad effort announced earlier this year to train computers to better understand and transcribe the words of those with speech impairments.


Source: Disability Scoop