Every day, we encounter countless images online. The Internet houses more than 750 billion images, and many of them play important roles: they help us connect with friends and family, appraise items before buying them, experience sights from places we may never visit, and enjoy a more visually pleasing web.
Regrettably, access to online images is not universally equitable. People with unstable internet connections and those with visual impairments may find it challenging to enjoy these images, creating an accessibility issue.
One solution is alternative text: a text descriptor for various content types, including images. Web designers can incorporate alt text into HTML code to describe the appearance and function of an image. Assistive technologies, like screen readers, can then translate the alt text into speech or braille, making the image accessible.
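For readers unfamiliar with the markup, here is a minimal sketch of how alt text appears in HTML; the file names and descriptions are hypothetical, chosen only for illustration:

```html
<!-- A hypothetical product photo with descriptive alt text.
     A screen reader announces the alt text in place of the image. -->
<img src="red-canvas-sneakers.jpg"
     alt="Pair of red canvas sneakers with white laces, shown from the side">

<!-- A purely decorative image gets an empty alt attribute
     so assistive technologies skip over it. -->
<img src="divider-flourish.png" alt="">
```

When the alt attribute is missing altogether, some screen readers fall back to reading the image's file name aloud, which is rarely helpful to the listener.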
Ideally, every online image would be labeled with alt text, but the reality falls short. A 2022 study of one million website homepages found that 23.2 percent were missing alt text for images. Social media networks are even less accessible: a 2019 study found that only 0.1 percent of 1.09 million tweets included alt text.
The scarcity of alt text comes down largely to convenience and cost. Writing alt text takes time, and paying people to write it is expensive. For instance, at just 10 seconds per image, manually labeling the roughly 250 billion images on Facebook would take about 2.5 trillion seconds, or nearly 700 million hours of paid work.
However, there is a more efficient solution: artificial intelligence (AI). AI can significantly reduce the time and cost of generating alt text. The New York Times reports that both Microsoft and Google have developed AI features for alt text generation, and Facebook debuted its own AI-driven alt text tool in 2016.
AI-generated alt text, while a step forward in making online images more accessible, is not without its flaws. The accuracy and relevance of AI-generated descriptions are major challenges. AI often struggles to correctly label the objects in an image, let alone identify which of those objects are the most relevant to the image's purpose.
There are also concerns about bias in AI-generated alt text. In 2021, for instance, Facebook's algorithm incorrectly labeled a video featuring Black men as "about Primates," a mistake that may have been rooted in biased data. The data sets used to train AI often underrepresent marginalized and minority groups, including women and people of color, leaving AI systems poorly equipped to identify these groups accurately in images.
AI-generated alt text offers an important lesson about the limits of AI. It represents real progress toward accessibility, yet it can also reinforce bias, especially when models are trained on data sets that lack diversity and representation. For organizations venturing into AI, this dilemma is a crucial reminder: be cognizant of these limitations and work actively toward AI solutions that are both accurate and free from bias.
Image by Patrick Fore on Unsplash
Synaptiq is an AI and data science consultancy based in Portland, Oregon. We collaborate with our clients to develop human-centered products and solutions. We uphold a strong commitment to ethics and innovation.
Contact us if you have a problem to solve, a process to refine, or a question to ask.
You can learn more about our story through our past projects, blog, or podcast.