Facebook improves image descriptions for people with visual impairments. To improve its image recognition, Facebook trained an AI on Instagram photos and their hashtags; people with visual impairments are expected to benefit as well.
The social network Facebook has fed an AI billions of public Instagram photos along with their hashtags. From this, the AI learned to automatically recognize around ten times more concepts in images than it could before, Facebook explained. Thanks to the improved image recognition, people with visual impairments can now receive more information about images.
According to Facebook, the system can also automatically make statements about the spatial position and relative size of elements in a photo. For example, it can tell which people are the focus of a picture and which appear around the edges.
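To illustrate the idea, here is a minimal sketch of how a center-versus-edge classification could be derived from object detections with normalized bounding boxes. All names and the threshold are illustrative assumptions, not Facebook's actual API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected object, with a bounding box in normalized [0, 1] image coordinates."""
    label: str
    x0: float
    y0: float
    x1: float
    y1: float

    @property
    def center(self):
        return ((self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2)

    @property
    def area(self):
        return (self.x1 - self.x0) * (self.y1 - self.y0)

def describe_layout(detections, central_radius=0.25):
    """Split detections into 'in focus' (near the image center) and 'at the edge'."""
    central, peripheral = [], []
    for d in detections:
        cx, cy = d.center
        # Distance of the object's center from the image center (0.5, 0.5).
        dist = ((cx - 0.5) ** 2 + (cy - 0.5) ** 2) ** 0.5
        (central if dist <= central_radius else peripheral).append(d)
    # Larger objects tend to be more salient, so sort each group by relative size.
    central.sort(key=lambda d: d.area, reverse=True)
    peripheral.sort(key=lambda d: d.area, reverse=True)
    return central, peripheral
```

A real system would use learned saliency rather than a fixed radius, but the same inputs (labels, positions, sizes) suffice for the kind of description the article mentions.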
Benefits for screen reader users
Facebook users who rely on screen reader software as an alternative to graphical output can now have the automatically generated information about photos read out to them. By default, Facebook restricts this to the information the AI considers essential. On request, however, the user can receive more detailed information about the image: for example, how many elements the AI has recognized, which elements are in which part of the image, and into which categories the individual elements fall (e.g. people or activities).
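The two-tier output described above (a short essential summary by default, fuller detail on request) can be sketched as two functions over the recognized labels. The phrasing and category names here are illustrative assumptions, not Facebook's actual wording.

```python
from collections import Counter

def summarize(labels, max_items=3):
    """Essential one-line summary: the few most frequent recognized concepts."""
    counts = Counter(labels)
    top = [label for label, _ in counts.most_common(max_items)]
    return "Image may contain: " + ", ".join(top)

def detailed(labels_by_category):
    """On request: total element count plus a per-category breakdown
    (e.g. people or activities)."""
    total = sum(len(v) for v in labels_by_category.values())
    lines = [f"{total} elements recognized"]
    for category, labels in labels_by_category.items():
        lines.append(f"{category}: {', '.join(labels)}")
    return "\n".join(lines)
```

A screen reader would announce the `summarize` line first and expose `detailed` behind an explicit user action.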
Instagram: A ready-made photo collection
Before Facebook let its AI learn from Instagram images and their hashtags, neural networks were trained on images that had been manually annotated by people for this purpose. That meant a lot of time and effort, which can be saved by using photos already marked with hashtags. According to the blog post, photos from all regions of the world were included in the learning process, and hashtags were translated across many different languages. The system can now recognize more concepts, such as activities, sights, different types of food, and selfies.
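The core idea, using hashtags as noisy, multilingual labels instead of manual annotations, can be sketched with a simple normalization step that maps raw hashtags in any language to canonical training concepts. The mapping table and hashtag names below are made up for illustration; the real vocabulary is vastly larger.

```python
# Toy mapping from multilingual hashtags to canonical concepts.
# Entries are illustrative, not Facebook's actual data.
CANONICAL = {
    "#hund": "dog", "#dog": "dog", "#chien": "dog",
    "#selfie": "selfie",
    "#essen": "food", "#food": "food",
}

def hashtags_to_labels(hashtags):
    """Map raw hashtags (any supported language) to a deduplicated label set.

    Unknown hashtags are simply dropped: hashtag supervision is inherently
    noisy, and the training signal comes from the tags that do match.
    """
    labels = set()
    for tag in hashtags:
        concept = CANONICAL.get(tag.lower())
        if concept is not None:
            labels.add(concept)
    return sorted(labels)
```

Because one photo can carry several hashtags, the resulting label sets feed a multi-label classifier rather than a single-class one, which is what lets the model pick up many concepts per image without hand labeling.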