With WWDC just a few weeks away, Apple on Tuesday unveiled a preview of a wide range of new accessibility software features for iPhone, iPad, and Mac. The features, designed to make Apple devices easier to use for people with cognitive, vision, hearing, speech, and mobility impairments, as well as those at risk of losing their ability to speak, will roll out later in 2023. The company announced them now to coincide with Global Accessibility Awareness Day, which falls this year on Thursday, May 18th.
“Today, we’re excited to share incredible new features that build on our long history of making technology accessible,” said Apple CEO Tim Cook, “so that everyone has the opportunity to create, communicate, and do what they love.”
The first new feature aims to make iPhone and iPad easier to use for people with cognitive impairments. Assistive Access works at the individual application level, offering a streamlined, simplified interface to reduce cognitive demands. Apple describes the process as distilling apps and experiences down to their essential features.
For example, the Phone and FaceTime apps have been combined into a single Calls app designed for easier use, while Messages offers a video messaging option and the ability to use an emoji-only keyboard. Similarly, there are distilled versions of the Camera, Photos, and Music apps. Apple hasn’t indicated whether third-party developers will be able to create Assistive Access versions of their apps, but it seems likely they will be encouraged to do so.
Finally, Assistive Access lets users customize the device’s interface at the OS level, choosing between a traditional grid-based layout and a row-based layout. The choice is reminiscent of the home screen view on the Apple Watch, which can display apps in either a honeycomb grid or an alphabetical list, the latter being easier for many people to use.
Live Speech is a text-to-speech feature for those who cannot speak or have difficulty speaking. It can be used during face-to-face conversations when the device is at hand, and it also lets iPhone, iPad, and Mac users type responses to be spoken aloud during phone calls and FaceTime calls.
This might seem likely to slow down conversations for those who can’t type quickly, but Apple says users will be able to save frequently used phrases for quick replies.
Related to Live Speech, Personal Voice is for those who do not currently have speech difficulties but are at risk of losing their ability to speak in the future. The idea is that you spend 15 minutes reading text prompts aloud on your iPhone or iPad, which then uses that audio and on-device machine learning to create a digital voice that sounds like your own. Then, if speaking later becomes impossible for any reason, you can use Live Speech to make calls and send messages in a voice resembling your own. Apple says the data will be kept private and secure to guard against voice imitation.
These seem to us to be the top three announcements in today’s press release, but there are many smaller announcements worth mentioning.
For example, Magnifier’s detection mode has gained a new Point and Speak feature. A user who is blind or has low vision can point at the buttons on a household appliance and have the iPhone read the labels aloud.
Text size will be easier to adjust in Mac applications. Those who are sensitive to fast animations will be able to automatically pause GIFs in Messages and Safari. In addition, people who are deaf or hard of hearing will be able to pair their “Made for iPhone” hearing devices directly with their Mac computers.
However, some of the announcements go beyond software features. Apple is expanding its SignTime service, which offers sign language interpreters for Apple Store customers and those who contact Apple Support, to more countries. Apple Stores will host sessions introducing customers to accessibility features, while selections of shows, movies, and podcasts that focus on accessibility or were created by people in the disability community will be featured in the Apple TV app and Apple’s other content services.