New accessibility tech on GAAD 2023: Apple, Google, Microsoft and more

For GAAD 2023, a roundup of accessibility tech from Apple, Google, Microsoft, and more.

Video transcript

CHERLYNN LOW: This Thursday, May 18, is the 12th annual Global Accessibility Awareness Day, or GAAD, and, as has become customary in the last few years, major tech companies are taking the chance to share their accessibility-minded products. From Apple and Google to Webex and Adobe, the industry's biggest players have launched new features to make their products easier to use. Here is a quick roundup of this week's GAAD news.

[MUSIC PLAYING]

First up, Apple. The company had a huge set of updates to share, which makes sense since it typically announces most of its accessibility news at this time each year. The first of these is a new tool called Assistive Access, which is basically a simplified interface for iOS. It features larger icons and fewer things to distract and confuse anyone who may have a cognitive disability or just doesn't want to feel overwhelmed by the typical iOS interface.

It includes a customized experience for phone and FaceTime, which have been combined into a single calls app, as well as messages, camera, photos, and music. It uses high-contrast buttons and large-text labels, as well as tools that help trusted supporters tailor the experience for the individual they're supporting. For example, in messages, if you prefer communicating visually, you can select an option that turns on an emoji-only keyboard so your communications with people can consist entirely of emoji.

Or you can choose to only use video messages to communicate, and you can send recorded video messages through the Messages app. If icons and graphics are not your thing, you can choose to move away from the grid-based visual interface to a row-based text layout. You can also select which apps should be front and center on this simplified interface, whether it be photos, messages, calls, or any other third-party app. Of course, Apple's first-party apps are more customized to work with this layout more intuitively.

Apple has launched a new feature on iPhone, iPad, and Mac called Live Speech. This allows you to type what you want to say and have the machine read it out for you. This works during phone and FaceTime calls, as well as in in-person conversations. And you can also save commonly used phrases to chime in quickly.

Whether it's your most frequent coffee order, for example, or a catchphrase that you use frequently with your friends and family, you can have it set up so that with a touch of a button, that phrase is spoken out loud quickly. For people who are, you know, at risk of losing their ability to speak, whether you've got a recent diagnosis of ALS or other conditions that can progressively impact your voice, there is a new Apple feature, called Personal Voice, that can let you preserve your own speaking voice or a voice that sounds like you.

You can create your own personal voice by reading along with randomized text prompts for about 15 minutes or so on iPhone or iPad. Apple is also introducing a new feature called Point and Speak within the Detection Mode tool. As you might recall, this is a tool in Magnifier which allows people who are blind or have low vision to use their phone's camera to point at things and have objects like doors and signs identified and read out to them.

With Point and Speak, users will now be able to point at things that have labels on them. For example, on a microwave oven with labeled buttons, if you point at what's on the item, the words can be read out to you.

- Cook time. Pizza. Power level. Add 30 seconds.

So Point and Speak combines input from the camera app, the LiDAR scanner, and uses on-device machine learning to understand where you're pointing as you move your finger across a keypad. And it's built into the Magnifier app on iPhone and iPad.

That's the major GAAD news from Apple this year, but there is still a slew of smaller updates that the company shared this week. For example, those who use Made for iPhone hearing devices can now pair them directly to Mac and customize them for their hearing comfort. Voice Control now gets phonetic suggestions for text editing so that people who type with their voice, for example, can choose the right word out of several that might sound alike, like "peak of a mountain," "peek at a word," or "pique your interest."

Over to Google, where the company just had a whole developer conference last week, but this week has some accessibility-minded updates to share. In continuing with the theme of "AI everywhere," Google is introducing a new visual Q&A, question and answer, feature in the Lookout app. VQ&A is a feature in the image mode in Lookout, where you can, again, use your phone to identify things around you, but this time, with VQ&A, you can actually follow up with more questions.

Once it tells you what the image contains or what's happening within the image, you can actually ask follow-up questions such as, "is it a sunny day?" or, "what is that breed of dog?" and the system will answer it for you. Google told me this was the result of a collaboration between the accessibility team and the people at DeepMind, using some good AI here to generate answers from the pictures given.

So one of the big questions around accessibility tools like captioning and image descriptions is that it's often hard to judge how much information to give. You don't want to overwhelm a visually impaired person by giving them too much description or detail in an image, but you also don't want to give them too little. And with VQ&A, you allow the user to determine how much information they want out of that situation.

Now, VQ&A won't be launching today. It is just an announcement this week. It will be available in the fall. Now, Google's already used AI for a lot of its assistive products such as Live Transcribe or Live Caption, where you can get, you know, subtitles for anything playing on your phone. The news this week is that Live Caption is expanding its availability to cover languages like French, German, and Italian later this year.

Finally, a tool in Maps currently allows people to flag, label, or share information about whether a place is accessible. Being announced this week is the fact that that feature will soon be launching globally to everyone. The exact timing, we don't know. Just soon.

Also this week, Adobe announced that it's going to use AI to generate some accessibility tags for its PDF documents. Now, if you were not aware, PDFs actually have built-in metadata about the structure of the document, like, what's a paragraph, what's a header, for example. These are built in there for assistive devices, like screen readers, to help people who are low-vision or blind jump through parts of the document.

This new feature that Adobe is rolling out, which will be available as an API or in Acrobat Pro and Reader, should make things simpler and therefore encourage more tags to be contained within PDF documents, making everything, hopefully, accessible for all. So Adobe's PDF accessibility Auto-Tag API will automate this tagging process, and it is based on its Sensei-powered software that will also indicate the correct reading order for assistive technology.

The more intriguing component of this is that, Adobe says, this AI can quickly go through stockpiles of old documents that lack the proper structure right now. Speaking of stockpiles of old documents, the company is also launching a PDF accessibility checker, which should enable large organizations to quickly and efficiently evaluate the accessibility of existing PDFs at scale.

Now, we live in a world where video-conferencing is an integral, large, inescapable part of our lives, but it does leave some people with disabilities finding it hard to communicate with their teammates. For example, people with different speech impediments can have a hard time being understood on calls or video calls.

This week, Cisco, the parent company of Webex videoconferencing tools, has announced that it is teaming up with Voiceitt, which is a speech-recognition company, to use Voiceitt's technology in Webex meetings. Now, Voiceitt has been working to identify and help people with different speech impairments have their voices recognized by the voice assistants that are prevalent everywhere today.

With this new update, Webex will be able to use Voiceitt's AI to recognize and transcribe what people with non-standard speech are saying on calls, and it builds on Webex's existing live translation feature. So Voiceitt's AI works by using its machine learning technology to familiarize itself with a person's speech patterns to better understand what they want to communicate.

With the partnership, Webex meetings will have a chat bar pop up with the live transcriptions. Cisco says Voiceitt should be available for download from the Webex App Hub for meetings starting in June and plans to make it available across the entire Webex meetings platform by the end of the year.

Samsung also had a small accessibility-minded update for us by way of the Galaxy Buds 2 Pro. Coming out in the next few weeks is an improvement to the ambient sound feature in those devices. After the software update goes out, users will find two additional levels on top of the original three for ambient sound volume settings.

With the additional levels of amplification, people now have up to five settings to choose from when deciding how much ambient noise to let into the ear. This can allow people who are hard-of-hearing to hear their environments more clearly or fine-tune the settings for each ear. There will also be an adaptive ambient sound feature that allows for more custom tuning of the clarity of the ambient sounds you let through.

Microsoft has traditionally been one of the leaders in the assistive technology space, specifically in the world of gaming, and today Xbox has also made some small announcements for GAAD 2023. In addition to expanding the accessibility support pages to give you more information on what's available in the assistive tech side of things across PC and console, there are also new accessibility settings on the Xbox app for PC.

This will allow users to disable background images and animations, which can help players cut down on visual components that can cause disruption, confusion, or irritation. Xbox is also announcing a slew of global partnerships intended to highlight some other accessibility features this week during GAAD 2023.

Many other companies are keen to jump on the GAAD 2023 train, but not all of them actually have news or new features to share. Netflix, for example, is releasing a sizzle reel of a collection of its accessibility-minded features and developments over the past year. Just in case you forgot, Netflix also did stuff for people with disabilities.

For example, over the last year, the company focused on increasing the total number of films and series that support audio descriptions and subtitles for the deaf and hard-of-hearing. It introduced the ability to customize subtitles shown on TV, as well as updated its "celebrating disability with dimensions" collection of series or films that feature characters or stories about people living with disabilities.

Netflix closes out its press release by acknowledging that, while it's made great strides in accessibility, there's always more work to be done. And honestly, I feel the same. While it is nice to see so many companies take the opportunity to release and highlight their accessibility-minded features, it's important to note that inclusive design should not and cannot be a once-a-year effort.

Generative AI might be a hot topic this year, and it can be tempting to use it to come up with solutions to accessibility-minded issues, so it's nice to see that companies aren't just throwing generative AI willy-nilly at an issue that requires a more thoughtful, user-first approach.

AI can be the answer for people who are stumped at writing an email or want help coming up with prompts for an essay, but it might not be the right solution for, say, a visually impaired person navigating the internet looking for correct, precise, accurate descriptions to images, or a person who's deaf or hard-of-hearing who just wants clear, coherent captions in the videos they're seeing.

And yes, it is a little bit of a marketing and PR game, but at the end of the day, some of these announcements will actually improve the lives of people with disabilities who need some of these products. So I call that a net win. For more coverage of the world of technology as well as where accessibility fits in, make sure you subscribe to Engadget. And until next time, breathe.

[MUSIC PLAYING]