Communication between deaf and hard-of-hearing individuals and healthcare practitioners can be difficult and the stakes are particularly high when vital medical information is miscommunicated or not understood. In an article in Medical Device Network, writer Abi Millar profiles SignLab Amsterdam, a University of Amsterdam research lab developing a machine translation tool to convert speech into sign language.
“If the doctor is sitting behind the desk and has their laptop, or maybe a phone or a tablet, they can type a question or a statement that they want to show to the patient,” said Lyke Esselink, a researcher at SignLab who was quoted in the article. “They press play, and then on the screen is an animation relaying that question or statement.” An animated avatar on the screen can sign basic phrases, though more complex sentences may be in the tool’s future.
“There are some applications already available in which people translate text to sign language, but the sentences need to be recorded in their entirety,” said Esselink in the article. “Even in a closed domain, like a hospital, there are endless variations on sentences, and recording them one by one is really not feasible. So, what we’re trying to do is dynamically generate the sentences. This is largely unexplored territory.”
According to the article, the lab’s software first converts the text into a “gloss,” a list of the appropriate signs in text format. From the gloss, the software then generates instructions that tell the avatar how to perform each sign, incorporating the hand gestures, facial expressions, and orientation that make up that sign.
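The two-stage pipeline described above (text → gloss → animation instructions) can be sketched roughly as follows. This is a minimal illustrative toy, not SignLab’s actual system: the lexicon entries, gloss names, and instruction fields are all invented for demonstration.

```python
# Hypothetical sketch of a text-to-gloss-to-animation pipeline like the one
# the article describes. All glosses and animation parameters below are
# illustrative inventions, not SignLab's actual data or API.

from dataclasses import dataclass


@dataclass
class SignInstruction:
    """Parameters the avatar needs to perform one sign."""
    gloss: str
    handshape: str
    facial_expression: str
    orientation: str


# Toy lexicon mapping words to sign entries (invented values).
LEXICON = {
    "where": SignInstruction("WHERE", "index-finger", "furrowed-brow", "palm-up"),
    "pain": SignInstruction("PAIN", "clawed-hands", "grimace", "palms-in"),
}


def text_to_gloss(text: str) -> list[str]:
    """Stage 1: convert input text to an ordered list of glosses."""
    words = text.lower().strip("?.!").split()
    return [LEXICON[w].gloss for w in words if w in LEXICON]


def gloss_to_instructions(glosses: list[str]) -> list[SignInstruction]:
    """Stage 2: look up the animation instructions for each gloss."""
    by_gloss = {entry.gloss: entry for entry in LEXICON.values()}
    return [by_gloss[g] for g in glosses]


if __name__ == "__main__":
    glosses = text_to_gloss("Where is the pain?")
    print(glosses)  # -> ['WHERE', 'PAIN']
    for instr in gloss_to_instructions(glosses):
        print(instr)
```

In a real system, stage 1 would involve genuine machine translation (sign languages have their own grammar and word order, so gloss generation is far more than word lookup), and stage 2 would drive 3D animation software rather than print parameters.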
Though the tool is not yet ready for use in a clinical setting—and it is not intended to replace interpreters—it could serve as a supplementary communication tool in the future.
Source: Medical Device Network