Tag Archives: accessibility

Apple Introduces New Features For Cognitive Accessibility



Apple today previewed software features for cognitive, vision, hearing, and mobility accessibility, along with innovative tools for individuals who are nonspeaking or at risk of losing their ability to speak. These updates draw on advances in hardware and software, include on-device machine learning to ensure user privacy, and expand on Apple’s long-standing commitment to making products for everyone.

Apple works in deep collaboration with community groups representing a broad spectrum of users with disabilities to develop accessibility features that make a real impact on people’s lives.

Coming later this year, users with cognitive disabilities can use iPhone and iPad with greater ease and independence with Assistive Access; nonspeaking individuals can type to speak during calls and conversations with Live Speech; and those at risk of losing their ability to speak can use Personal Voice to create a synthesized voice that sounds like them for connecting with family and friends.

For users who are blind or have low vision, Detection Mode in Magnifier offers Point and Speak, which identifies text users point toward and reads it out loud to help them interact with physical objects such as household appliances.

Assistive Access Supports Users with Cognitive Disabilities

Assistive Access uses innovations in design to distill apps and experiences to their essential features in order to lighten cognitive load. The feature reflects feedback from people with cognitive disabilities and their trusted supporters, focusing on the activities they enjoy that are foundational to iPhone and iPad: connecting with loved ones, capturing and enjoying photos, and listening to music.

Assistive Access includes a customized experience for Phone and FaceTime, which have been combined into a single Calls app, as well as Messages, Camera, Photos, and Music. The feature offers a distinct interface with high contrast buttons and large text labels, as well as tools to help trusted supporters tailor the experience for the individual they support.

For example, for users who prefer communicating visually, Messages includes an emoji-only keyboard and the option to record a video message to share with loved ones. Users and trusted supporters can also choose between a more visual, grid-based layout for their Home Screen and apps, or a row-based layout for users who prefer text.

Live Speech and Personal Voice Advance Speech Accessibility

With Live Speech on iPhone, iPad, and Mac, users can type what they want to say and have it be spoken out loud during phone and FaceTime calls, as well as in in-person conversations with family, friends, and colleagues. Live Speech has been designed to support millions of people globally who are unable to speak or who have lost their speech over time.

For users at risk of losing their ability to speak – such as those with a recent diagnosis of ALS (amyotrophic lateral sclerosis) or other conditions that can progressively impact speaking ability – Personal Voice is a simple and secure way to create a voice that sounds like them.

Users can create a Personal Voice by reading along with a randomized set of text prompts to record 15 minutes of audio on iPhone or iPad. This speech accessibility feature uses on-device machine learning to keep users’ information private and secure, and integrates seamlessly with Live Speech so users can speak with their Personal Voice when connecting with loved ones.

Detection Mode in Magnifier Introduces Point and Speak for Users Who Are Blind or Have Low Vision

Point and Speak in Magnifier makes it easier for users with vision disabilities to interact with physical objects that have several text labels. For example, while using a household appliance – such as a microwave – Point and Speak combines input from the camera and the LiDAR Scanner with on-device machine learning to announce the text on each button as users move their finger across the keypad.

Point and Speak is built into the Magnifier app on iPhone and iPad, works great with VoiceOver, and can be used with other Magnifier features such as People Detection, Door Detection, and Image Descriptions to help users navigate their physical environment.

For users with low vision, Text Size is now easier to adjust across Mac apps such as Finder, Messages, Mail, Calendar, and Notes.

Personally, I think these features will be incredibly helpful for people who have cognitive disabilities, those who are unable to speak, and those who are blind or have low vision. Apple is doing a great job with accessibility!


Microsoft’s Adaptive Accessories Have A Release Date



Microsoft has announced that its range of Adaptive Accessories will be available to purchase starting October 25th in select markets, The Verge reported. The Adaptive Accessories were first announced in May and are designed to address common issues that can prevent people from getting the most out of their PC, especially if they have difficulty using a traditional mouse and keyboard.

According to The Verge, the wireless system includes a programmable button, an adaptive mouse, and the Microsoft Adaptive Hub, which connects up to four Microsoft Adaptive Buttons to as many as three devices.

The mouse is a small, square-shaped puck that can clip into a palm rest with a removable tail and thumb support. The mouse and button can be customized using a range of modular components, enabling users to find the best fit for their usability requirements. For example, the adaptive buttons let users add eight programmable inputs to their computer and can be used as a joystick or D-pad.

Back in May, Microsoft provided some explanation about the Microsoft adaptive accessories:

The new Microsoft adaptive accessories provide a highly adaptable, easy-to-use system. Each piece is designed in partnership with the disability community to empower people who may have difficulty using a traditional mouse and keyboard to create their ideal setup, increase productivity, and use their favorite apps more effectively. A traditional mouse and keyboard may pose obstacles for someone with limited mobility.

These adaptive accessories can perform a variety of functions, thereby alleviating a pain point for those who find it challenging to get the most out of their PC. The system comes in three main pieces – the hub, the buttons, and the mouse – that can be configured to work best for your specific needs.

Right now, it is unclear what the price of the Microsoft Adaptive Accessories will be. However, The Verge reported that the mouse and button support 3D-printed accessories for a fully personalized experience, and both Business and Education customers will be able to 3D-print adaptive grips from Shapeways for the Microsoft Business Pen and Microsoft Classroom Pen 2. Community designers have previously made free printable files available for other accessibility accessories, such as the Xbox Adaptive Controller.

I think it is wonderful that Microsoft is making games more accessible to people who require adaptive tools in order to get the most out of the video games they play. This could literally be a “game changer” for people who struggle with keyboards and/or mice, and who could benefit from using a joystick or D-pad instead. I say this as a person with disabilities that can cause me to stop playing a video game due to pain in my hands.


Microsoft Edge Now Provides Auto-Generated Image Labels



Accessibility is extremely important. Microsoft appears to understand that. The company announced that Microsoft Edge will now provide auto-generated alt text for images that do not include it. Auto-generated alt text helps users of assistive technology, such as screen readers, discover the meaning or intent of images on the web.

Many people who are blind or have low vision experience the web primarily through a screen reader: an assistive technology that reads the content of each page aloud. Screen readers depend on image labels (alternative text, or “alt text”) that allow them to describe visual content – like images and charts – so the user can understand the full context of the page.

Microsoft points out that alt text is critical to making the web accessible, yet it is often overlooked. According to Microsoft, their data suggests that more than half of the images processed by screen readers are missing alt text.
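To make that statistic concrete, here is a minimal sketch of how one might audit a page for missing alt text. It uses only Python’s standard-library HTML parser; the sample markup is invented for illustration, and a real audit would also treat empty `alt=""` (decorative) images separately.

```python
from html.parser import HTMLParser

# Illustrative sketch: count <img> tags that lack an alt attribute,
# the kind of gap Microsoft's data describes.
class AltTextAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.total = 0    # all <img> tags seen
        self.missing = 0  # <img> tags with no alt attribute at all

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total += 1
            if "alt" not in dict(attrs):
                self.missing += 1

# Hypothetical page: one labeled image, two unlabeled ones.
page = '<img src="a.png" alt="A dog"><img src="b.png"><img src="c.png">'
auditor = AltTextAuditor()
auditor.feed(page)
print(auditor.missing, "of", auditor.total, "images lack alt text")  # 2 of 3
```

Running a check like this on your own pages is an easy way to find the images a screen reader would announce with no description at all.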

To make this easier on everyone, Microsoft Edge will now use automatic image descriptions. When a screen reader finds an image without a label, that image can be automatically processed by machine learning (ML) algorithms to describe the image in words or capture the text it contains. Microsoft notes that the algorithms are not perfect, and the quality of the descriptions will vary, but for users of screen readers, having some description for an image is often better than no context at all.

After the user has granted permission, Microsoft Edge will send unlabeled images to Azure Cognitive Services’ Computer Vision API for processing. The Vision API can analyze images and create descriptive summaries in 5 languages and recognize text inside of images in over 120 languages.
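For a rough sense of what such a request looks like, here is a hedged sketch of building a call to the Computer Vision Describe Image REST endpoint. The endpoint path and header names follow Azure’s public REST documentation; `ENDPOINT` and `KEY` are placeholders for values from your own Azure resource, and the image URL is invented. This is only an illustration of the service Edge talks to, not Edge’s internal code.

```python
import json

# Placeholders: supplied by your own Azure Cognitive Services resource.
ENDPOINT = "https://example.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def build_describe_request(image_url: str, language: str = "en"):
    """Return (url, headers, body) for a Describe Image REST call."""
    url = f"{ENDPOINT}/vision/v3.2/describe?maxCandidates=1&language={language}"
    headers = {
        "Ocp-Apim-Subscription-Key": KEY,   # authenticates the request
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url})   # point the API at a public image
    return url, headers, body

url, headers, body = build_describe_request("https://example.com/photo.jpg")
# A real client would POST this (e.g. with urllib.request); the JSON
# response carries the caption under description.captions[0].text.
```

The response’s caption text is essentially what a screen reader would then announce in place of the missing alt text.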

There are some exceptions. Certain image types will not be sent to the auto-image caption service, nor provided to the screen reader:

  • Images that are marked as “decorative” by the website author. Decorative images don’t contribute to the content or meaning of the website.
  • Images smaller than 50 x 50 pixels (icon size and smaller).
  • Excessively large images.
  • Images categorized by the Vision API as pornographic, gory, or sexually suggestive.

If you prefer to add the alt text yourself, you can do that instead of relying on the Computer Vision API. The feature can also be turned off via the AccessibilityImageLabelsEnabled policy.

Another really cool thing is that all Microsoft Edge customers on Windows, Mac, and Linux can use Microsoft’s built-in alt-text service. However, the feature is not currently available in Microsoft Edge on Android and iOS.

People who don’t use screen readers may not understand why it is so important to provide a description for images that you post on your website or on social media. It only takes a few seconds to write an informative description, and it will bring more context to the images read to a person who uses a screen reader.


Twitter has Added Auto-Captions to Videos



Twitter has added video captions. It posted a tweet on December 14, 2021, that said: “Where are video captions when you need them? They’re here now automatically on videos uploaded starting today.”

The same tweet explains that for Android and iOS, auto-captions will show on muted Tweet videos; keep them on when unmuted via your device’s accessibility settings. Those using Twitter on the web can use the “CC” button to turn auto-captions on or off.

TechCrunch reported that Twitter’s auto-captions will make videos more accessible for deaf and hard-of-hearing users. According to TechCrunch, auto-captions will be available on web, iOS, and Android in over 30 languages, including English, Spanish, Japanese, Arabic, Thai, Chinese, Hindi, and many more.

You can see the entire list of languages that can be used in auto-captions in a blog post on Twitter’s Help Center.

The same blog post points out that if your Tweets are protected, only your followers can view your videos in your Tweets. “Please note that your followers may download or re-share links to videos that you share in protected Tweets. Links to videos shared on Twitter are not protected. Anyone with the link will be able to view the content.” Twitter suggests that if you don’t want anyone to see your videos on Twitter, that you delete the Tweets containing those videos.

Twitter responded to a tweet in which someone asked if there will be an option to translate auto-captions to English. Twitter responded in a tweet: “Auto-captions will appear in the language of the device used to upload the video originally. Translation isn’t available just yet.”

Overall, auto-captions are a step in the right direction. It makes sense to add them to videos so that people who are deaf or hard of hearing can access those videos. This feature comes after Twitter added the ability to add alt text to photos and screenshots so that people who are blind can have their screen readers describe images to them.

It should be noted that videos that have already been posted to Twitter will not have auto-captions added to them. That is something Twitter needs to fix.


Facebook and Twitter are Making Images More Accessible



Image by Redd Angelo from StockSnap

It has been said that adding an image to your post on social media is a good way to get more people to look at it. However, people who are blind or visually impaired might not be able to see those photos. Facebook and Twitter have made changes that are designed to make images more accessible.

Facebook published a blog post that explains the change it is making. “With more than 39 million people who are blind, and over 246 million who have a severe visual impairment, many people feel excluded from the conversation around photos on Facebook. We want to build technology that helps the blind community experience Facebook the same way that others enjoy it.”

Facebook has introduced something called automatic alternative text. It generates a description of a photo using advancements in photo recognition technology. People who use screen readers on iOS devices will hear a list of items a photo may contain as they swipe past photos on Facebook. The change is a big one. Facebook states that before, the screen reader would describe a photo as “photo”. Now, the screen reader might say something like “image may contain three people, smiling, outdoors.”

This change was made possible due to Facebook’s object recognition technology. Facebook has launched automatic alt text on iOS screen readers set to English, and plans to add this functionality to other languages and platforms soon.

This follows a change made by Twitter that was designed to improve accessibility. As of March 29, 2016, people who use Twitter’s iOS and Android apps can add descriptions (also known as alternative text) to images in Tweets.

Users can enable that feature by using the compose image descriptions option in the Twitter app’s accessibility settings. The next time you add an image to a Tweet, each thumbnail in the composer will have an add description button. Tap it to see the image, and then add a description (of up to 140 characters). Doing so will help people who use screen readers to “see” your photo.