What’s New in Google Accessibility | Episode 9 | American Sign Language
Google·2025-07-16 14:03

Accessibility Innovations

- Google is releasing SignGemma, an open model for sign language understanding, focusing on American Sign Language (ASL) and English, with plans to extend translation of other sign languages into spoken-language text [1][2]
- Android expands Gemini integration in the TalkBack screen reader, providing AI-generated descriptions of images and of the entire screen and supporting conversational questions and responses [4]
- Expressive Captions on Android now capture the intensity and nuance of speech, including emphasis and sounds such as whispering or yawning [5][6]
- Pixel's Magnifier app introduces live search, which highlights matches on the screen and vibrates when something is found, aiding blind and low-vision users [6][7]
- Project Astra Visual Interpreter, in collaboration with Aira, is being tested to provide real-time descriptions of surroundings for blind and low-vision users, supervised by live Aira agents [8][9][10]

Chrome and Chromebook Updates

- Chrome now supports Optical Character Recognition (OCR) for scanned PDFs, allowing screen readers to interact with them [11][12]
- Chromebooks now offer the ability to turn off the touchpad and to flash the screen for new notifications [12]
- New Chromebook features cater to users with limited dexterity and/or tremors, including Bounce Keys, Slow Keys, and Mouse Keys [13]

Workspace Enhancements

- Workspace now allows users to embed interactive Google Calendars into websites, with screen-reader compatibility, improved spacing, and a responsive layout; a minimal embed sketch follows below [14]
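As a rough illustration of the Workspace item above, the TypeScript sketch below inserts a standard Google Calendar embed iframe into a page. The calendar ID, time zone, view mode, and iframe title are placeholder assumptions, not values from the announcement, and the exact set of query parameters the embed URL accepts may differ.

```typescript
// Minimal sketch: embed a public Google Calendar via the standard embed iframe.
// The calendar ID and time zone are placeholders; substitute your own values.
function embedGoogleCalendar(
  container: HTMLElement,
  calendarId: string,                      // e.g. "your-id@group.calendar.google.com"
  timeZone: string = "America/New_York",
): HTMLIFrameElement {
  const params = new URLSearchParams({
    src: calendarId,  // which calendar to render
    ctz: timeZone,    // time zone used to display event times
    mode: "MONTH",    // assumed view parameter (MONTH / WEEK / AGENDA)
  });

  const iframe = document.createElement("iframe");
  iframe.src = `https://calendar.google.com/calendar/embed?${params.toString()}`;
  iframe.title = "Team events calendar";   // accessible name announced by screen readers
  iframe.style.border = "0";
  iframe.width = "100%";                   // let the surrounding layout stay responsive
  iframe.height = "600";

  container.appendChild(iframe);
  return iframe;
}

// Usage (hypothetical container element and calendar ID):
// embedGoogleCalendar(document.getElementById("calendar")!, "abc123@group.calendar.google.com");
```

Giving the iframe an explicit title and a percentage width mirrors the accessibility points in the announcement: the title provides a label for screen readers, and the fluid width keeps the embed usable in a responsive layout.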