Summary: The University of Surrey has launched SignGPT, a project aimed at using generative AI to bridge communication gaps between sign language and spoken languages.

Takeaways:

  1. SignGPT is a five-year AI project aimed at automating translations between sign language and spoken language, offering equal access to information for the Deaf community.
  2. The project involves a unique collaboration between top universities, Deaf linguists, and the Deaf community, ensuring technology is co-created with lived experience and linguistic expertise.
  3. The initiative will produce the world’s largest sign language dataset and open-source tools to enhance communication, set new standards for inclusivity, and promote ethical AI development.

A new project led by the University of Surrey aims to build a large language model (LLM) that meets the needs of the Deaf community by translating between signed and spoken language.

SignGPT: AI to Improve Communication for the Deaf and Hard of Hearing 

SignGPT: Building Generative Predictive Transformers for Sign Language has been awarded £8.45 million (approximately $10.4 million) from the UK Engineering and Physical Sciences Research Council (EPSRC). The five-year project will build tools that automatically translate spoken language into photo-realistic sign language, and video of sign language into spoken language – a complex translation problem that is yet to be solved. 

Surrey will work alongside the University of Oxford, the Deafness Cognition and Language Research Centre at University College London, key Deaf stakeholders, and the Deaf community. 

“Large language models such as those behind ChatGPT and Gemini are transforming many aspects of our personal and working lives – and that transformation is happening at a blistering pace. Our project, SignGPT, is not about replacing humans, but it is about ensuring the Deaf community is not left behind in this revolution,” says Professor Richard Bowden, Principal Investigator of the project from the University of Surrey’s Institute for People-Centred AI. “By creating technology that serves the community, we’re enabling equal access to information, working towards seamless communication between the Deaf and hearing world, and demonstrating that AI can be a tool for inclusivity and empowerment. SignGPT isn’t just about accessibility for Deaf people – it’s about setting a standard for how innovation can address inequities, strengthen human connection, and build a more inclusive society. In a world shaped by rapid technological change, projects like this show that AI’s potential is greatest when it uplifts everyone.” 

Accessibility for Those Who Communicate through Sign Language

“This project is a unique collaboration between vision scientists and sign linguists with Deaf and hearing researchers working together towards our common goals,” says Professor Bencie Woll, sign linguist, co-investigator of the project, and founder of the Deafness Cognition and Language Research Centre at UCL.   

Globally, there are around 70 million Deaf or hard-of-hearing individuals, many of whom use sign language as their primary form of communication. For many, written/spoken languages serve as a second or third language, and proficiency in these languages can vary. There is no universal sign language: sign languages are natural human languages created over centuries by Deaf communities and are not derived from spoken languages. 

Their underlying rules and structures remain a rich area of linguistic study. Each sign language has its own unique grammar and lexicon, relying on both manual gestures (hands) and non-manual expressions (body and face), along with spatial elements, to convey meaning. 

“I am pleased that this important grant will empower the Deaf community to have further equal access by harnessing AI and large language models,” says Mark Wheatley, CEO of the Royal Association for Deaf People (RAD). “We will ensure that the University of Surrey, Oxford University, and the Deafness Cognition and Language Research Centre at UCL, alongside Deaf-led stakeholders such as RAD, take a people-centred approach to ensuring ethical responsibility and the accuracy of translations so that we, the Deaf community, can use them for everyday purposes.” 

“So much work in sign language technology is undertaken by researchers with no understanding of how sign languages work, nor any lived experience of deafness themselves,” says Professor Kearsy Cormier, one of the co-investigators on the project from University College London. “This project will allow real co-creation/co-development of this technology with Deaf and hearing researchers in linguistics and deaf studies working alongside computer vision specialists – with each group learning from each other – and, importantly, building capacity amongst Deaf researchers so they may lead this field in the future.” 

More About SignGPT

SignGPT’s research team will produce the largest sign language dataset in the world and use it to build a sign language LLM that can provide the breadth of application to the Deaf community that current LLMs provide for written/spoken languages. In doing so, the project will also generate tools for data annotation that will be released for use by the wider community. 

The project already has Deaf members within both the research team and wider partners, but it is hoping to recruit more staff for whom British Sign Language is their primary language.  

Automatically translating between sign languages and spoken languages remains a highly complex, unsolved challenge. SignGPT will produce open-source toolkits for linguistic use and web-based demonstrations for accessible knowledge exchange, and will run outreach programs alongside collaborative workshops.

Featured image: Professor Richard Bowden. Photo: University of Surrey