Apple’s new accessibility features, including Assistive Access, Live Speech, and more, will arrive later this year.
  • May 16, 2023
  • Thomas Waner

Apple has announced a range of upcoming features aimed at enhancing cognitive, vision, and speech accessibility on its devices. The tools, set to arrive on iPhone, iPad, and Mac later this year, draw on feedback from disabled communities and strengthen Apple’s position as a leader in mainstream tech accessibility.

One of the forthcoming features is Assistive Access, specifically designed for individuals with cognitive disabilities. This functionality streamlines the interface of iOS and iPadOS, with a particular focus on simplifying communication with loved ones, sharing photos, and enjoying music. Notably, the Phone and FaceTime apps are merged into a unified experience. Additionally, the design employs large icons, enhanced contrast, and clearer text labels to create a more user-friendly display. These visual elements can be customized based on individual preferences, and the chosen settings will carry over to any compatible app.

With Assistive Access on iPhone, users can choose between a more visual, grid-based layout for their Home Screen and apps, or a row-based layout for those who prefer text. | Apple
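Apple has not published developer APIs for Assistive Access at announcement time, but the grid-versus-row choice is easy to picture in SwiftUI. The toy sketch below is an illustration, not Apple’s code; the `useGridLayout` flag is a hypothetical stand-in for the user’s chosen preference.

```swift
import SwiftUI

// Toy illustration of Assistive Access's two layout styles, not Apple's code.
// `useGridLayout` is a hypothetical stand-in for the user's saved preference.
struct SimplifiedHome: View {
    let apps = ["Calls", "Messages", "Photos", "Music"]
    var useGridLayout = true

    var body: some View {
        if useGridLayout {
            // Visual, grid-based layout with large icons and clear labels.
            LazyVGrid(columns: [GridItem(.adaptive(minimum: 150))]) {
                ForEach(apps, id: \.self) { name in
                    Label(name, systemImage: "app.fill")
                        .font(.largeTitle)
                }
            }
        } else {
            // Row-based layout for users who prefer text.
            List(apps, id: \.self) { name in
                Text(name).font(.title)
            }
        }
    }
}
```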

As part of the existing Magnifier tool, Apple already lets blind and low-vision users use their devices’ cameras to identify nearby objects, people, or signs. Now Apple is introducing Point and Speak, which uses the device’s camera and LiDAR scanner to help visually impaired people interact with physical objects that carry multiple text labels. For example, when a low-vision user wants to heat up food in a microwave, Point and Speak can differentiate between the “popcorn,” “pizza,” and “power level” buttons, reading the identified text aloud. Point and Speak will support English, French, Italian, German, Spanish, Portuguese, Chinese, Cantonese, Korean, Japanese, and Ukrainian.
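Apple has not detailed how Point and Speak works internally, but the building blocks it describes, on-device text recognition plus spoken output, map onto public frameworks. As a rough sketch of the technique (not Apple’s implementation, and with a hypothetical `speakLabels` function), here is how Vision’s text recognition can feed AVFoundation’s speech synthesizer:

```swift
import Vision
import AVFoundation

// Sketch of the technique Point and Speak describes: recognize text in a
// camera frame with Vision, then read it aloud. Not Apple's implementation.
let synthesizer = AVSpeechSynthesizer()

func speakLabels(in image: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // Take the top candidate for each detected label, e.g. "popcorn".
            guard let candidate = observation.topCandidates(1).first else { continue }
            synthesizer.speak(AVSpeechUtterance(string: candidate.string))
        }
    }
    request.recognitionLevel = .accurate
    try? VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```

A production feature would also need the LiDAR-assisted pointing step the article mentions; this sketch covers only the recognize-and-speak half.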

Among the most notable features is Personal Voice, which generates a synthesized voice that sounds like the user rather than Siri. The feature is designed for people at risk of losing their ability to speak due to conditions such as ALS. To create a Personal Voice, users spend roughly fifteen minutes reading randomly chosen text prompts aloud into their device’s microphone. The audio is then processed with machine learning, locally on the user’s iPhone, iPad, or Mac, to generate the personalized voice. The functionality is reminiscent of Acapela’s “my own voice” service, which works in conjunction with other assistive devices.
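Apple had not published developer documentation for Personal Voice at announcement time. The Swift sketch below is a hedged illustration based on the authorization call and voice trait that later appear in the iOS 17 SDK (`requestPersonalVoiceAuthorization` and the `isPersonalVoice` trait); treat those names as assumptions rather than confirmed API.

```swift
import AVFoundation

// Hedged sketch: speaking with a user's Personal Voice, assuming the
// authorization API and voice trait from the iOS 17 SDK.
let synthesizer = AVSpeechSynthesizer() // keep a strong reference while speaking

func speakWithPersonalVoice(_ text: String) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }
        // A Personal Voice appears alongside system voices, flagged by a trait.
        let voice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = voice // falls back to the default voice if nil
        synthesizer.speak(utterance)
    }
}
```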
