Google has announced SignGemma, a new artificial intelligence (AI) model that can translate sign language into spoken-language text. The model, which will be part of the Gemma series, is currently being tested by the Mountain View-based tech giant and is expected to launch later this year. Like other Gemma models, SignGemma will be open source and available to both individuals and companies. It was first shown during the Google I/O 2025 keynote, and it is designed to help people with speech and hearing impairments communicate effectively, even with those who do not understand sign language.
SignGemma can track hand movements and facial expressions
In a post on X (formerly known as Twitter), Google DeepMind's official account announced the AI model and shared some details about its release timeline. However, this is not the first time we have seen SignGemma. It was also briefly demonstrated at the Google I/O event by Gus Martins, Gemma Product Manager at DeepMind.
We're excited to announce SignGemma, our most capable model for translating sign language into spoken text. 🧏
This open model is coming to the Gemma model family later this year, opening up new possibilities for inclusive technology.
Share your early feedback and interest… pic.twitter.com/nhl9g5y8ta
— Google DeepMind (@GoogleDeepMind) May 27, 2025
During the demonstration, Martins highlighted that the AI model can translate sign language into text in real time, enabling face-to-face communication. The model has been trained on datasets spanning different sign languages; however, it performs best when translating American Sign Language (ASL) into English.
According to MultiLingual, since it is an open-source model that runs on-device, SignGemma can work without an Internet connection. This makes it suitable for use in areas with limited connectivity. It is said to be built on the Gemini Nano framework and to use a vision adapter to track and analyse hand movements, hand shapes, and facial expressions. Besides making it available to developers, Google could also integrate the model into its existing AI tools, such as Gemini Live.
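To make the described architecture concrete, here is a minimal, purely illustrative sketch of what a sign-to-text pipeline of this shape could look like: a vision-adapter stage collapses per-frame hand and face signals into gesture tokens, and a decoder stage maps those tokens to words. Every name and rule below (`Frame`, `vision_adapter`, the toy lexicon) is a hypothetical stand-in, not SignGemma's actual API or internals.

```python
# Illustrative sketch of an offline sign-to-text pipeline.
# All names and logic are assumptions for explanation only.

from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    """One video frame reduced to simple signals a vision adapter might track."""
    hand_shape: str         # e.g. "flat", "fist"
    facial_expression: str  # e.g. "neutral", "raised_brows"


def vision_adapter(frames: List[Frame]) -> List[str]:
    """Collapse raw frames into discrete gesture tokens (a toy stand-in for
    tracking hand movements, hand shapes, and facial expressions)."""
    tokens: List[str] = []
    for frame in frames:
        token = f"{frame.hand_shape}/{frame.facial_expression}"
        if not tokens or tokens[-1] != token:  # de-duplicate held gestures
            tokens.append(token)
    return tokens


# Toy gesture-to-word lexicon standing in for the language-model decoder.
LEXICON = {
    "flat/neutral": "hello",
    "fist/raised_brows": "question",
}


def translate(frames: List[Frame]) -> str:
    """Run the whole pipeline locally: frames -> gesture tokens -> text."""
    tokens = vision_adapter(frames)
    words = [LEXICON.get(token, "<unk>") for token in tokens]
    return " ".join(words)


if __name__ == "__main__":
    clip = [
        Frame("flat", "neutral"),
        Frame("flat", "neutral"),       # held sign, collapsed by the adapter
        Frame("fist", "raised_brows"),
    ]
    print(translate(clip))  # -> hello question
```

Because every stage is a local function with no network calls, the sketch also illustrates why an on-device model of this kind can run entirely offline.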
DeepMind called it "our most capable model for translating sign language into spoken text." The accessibility-focused large language model is currently in early testing, and the tech giant is inviting individuals to try it out and share their feedback.