An artificial intelligence (AI) system called “Watch, Attend, and Spell”—developed by researchers at Oxford University in collaboration with Google’s DeepMind AI unit—is able to lip-read up to 50% of silent speech, according to a BBC article.
Researchers helped train the AI system with subtitled clips from BBC programs like “Breakfast,” “Newsnight,” and “Question Time,” and the system now has 17,500 words stored in its vocabulary, according to the article.
DeepMind is an AI company whose mission is to develop algorithms that can solve complex problems.
Source: BBC, Oxford University
Image: Vladgrin | Dreamstime.com
My relative is on a trach and cannot speak, but he tries valiantly to mouth words to us. It is so frustrating not to be able to read his lips and understand what he is communicating. This sounds like it may be a great solution.
Is there some way for us to try this on a computer to assist him?
I am a home health caregiver, and my patient suffers from multiple sclerosis to the point that he has lost vocal speech and the use of his limbs and hand movement. I was looking for a computer program that could read his lips and put what he is attempting to communicate on the screen. What you have described looks promising. I would be interested in trying something like what you apparently have developed.