Accessibility is extremely important, and Microsoft appears to understand that. The company announced that Microsoft Edge will now provide auto-generated alt text for images that do not include it. Auto-generated alt text helps users of assistive technology, such as screen readers, discover the meaning or intent of images on the web.
Many people who are blind or have low vision experience the web primarily through a screen reader, an assistive technology that reads the content of each page aloud. Screen readers depend on image labels (alternative text, or “alt text”) to describe visual content, like images and charts, so the user can understand the full context of the page.
Microsoft points out that alt text is critical to making the web accessible, yet it is often overlooked. The company’s data suggests that more than half of the images processed by screen readers are missing alt text.
To make this easier on everyone, Microsoft Edge will now use automatic image descriptions. When a screen reader finds an image without a label, that image can be automatically processed by machine learning (ML) algorithms to describe the image in words or capture the text it contains. Microsoft notes that the algorithms are not perfect, and the quality of the descriptions will vary, but for users of screen readers, having some description for an image is often better than no context at all.
After the user has granted permission, Microsoft Edge sends unlabeled images to Azure Cognitive Services’ Computer Vision API for processing. The Vision API can analyze images and create descriptive summaries in five languages, and it can recognize text inside images in more than 120 languages.
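Edge’s internal integration with the service is not documented, but the same image-description operation is available through Computer Vision’s public REST API. The sketch below shows how such a request might be assembled; the endpoint value, `maxCandidates` setting, and language choice are illustrative assumptions, not Edge’s actual request format.

```python
import urllib.parse

def build_describe_request(endpoint: str, image_url: str, language: str = "en"):
    """Build the URL and JSON body for Computer Vision's v3.2 'describe'
    operation. Endpoint and parameter choices here are assumptions --
    Edge's own request pipeline is internal to the browser."""
    params = urllib.parse.urlencode({"maxCandidates": 1, "language": language})
    request_url = f"{endpoint}/vision/v3.2/describe?{params}"
    body = {"url": image_url}  # the API also accepts raw image bytes instead
    return request_url, body

url, body = build_describe_request(
    "https://example.cognitiveservices.azure.com",
    "https://example.com/photo.jpg",
)
```

An actual call would POST the body as JSON with an `Ocp-Apim-Subscription-Key` header; the response contains candidate captions with confidence scores.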
There are some exceptions. Certain image types will not be sent to the captioning service or described to the screen reader:
- Images marked as “decorative” by the website author (decorative images don’t contribute to the content or meaning of the page)
- Images smaller than 50 × 50 pixels (icon-sized and smaller)
- Excessively large images
- Images the Vision API categorizes as pornographic, gory, or sexually suggestive
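Taken together, these exclusions amount to a simple eligibility check. Here is a minimal sketch of that logic; the flag names and the cutoff for “excessively large” images are assumptions (only the 50 × 50 floor is stated), not Microsoft’s implementation.

```python
def eligible_for_auto_caption(
    width: int,
    height: int,
    is_decorative: bool,
    flagged_by_vision_api: bool,
    max_dimension: int = 10_000,  # "excessively large" cutoff is an assumed value
) -> bool:
    """Return True if an unlabeled image would be sent for auto-captioning."""
    if is_decorative:                        # author marked the image decorative
        return False
    if width < 50 or height < 50:            # icon-sized and smaller are skipped
        return False
    if width > max_dimension or height > max_dimension:
        return False
    if flagged_by_vision_api:                # pornographic, gory, or suggestive
        return False
    return True
```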
If you prefer to add alt text yourself, you can do that instead of relying on the Computer Vision API. Administrators can also turn the feature off entirely with the AccessibilityImageLabelsEnabled policy.
Another really cool thing is that all Microsoft Edge customers on Windows, Mac, and Linux can use Microsoft’s built-in alt-text service. However, the feature is not currently available in Microsoft Edge on Android and iOS.
People who don’t use screen readers may not understand why it is so important to provide a description for images that you post on your website or on social media. It only takes a few seconds to write an informative description, and it will bring more context to the images read to a person who uses a screen reader.