Author: Melanie Mudge
ICYMI, the International Press Telecommunications Council’s (IPTC) newest photo metadata standards went into effect in October 2021 and include two new properties: Alt Text (Accessibility) and Extended Description (Accessibility). This is HUGE news both for anyone who shares digital content on the internet and for people living with visual disabilities, but if you’re like most folks in 2021, you’ve never heard of the IPTC, photo metadata, or internet accessibility, and thus it all sounds confusing. Never fear! We at Scribely are here to help you understand the ins and outs of these new standards, as well as how they affect all of us, whether we realize it or not.
How Photo Metadata Works
Let’s start at the beginning. For anyone unfamiliar with the jargon, metadata is just a fancy way of referring to data about data—in the case of photo metadata, the data about what’s in an image. For example, every time you take a photo with your smartphone, tablet, or digital camera, it’s embedded with details about when and where it was taken (if location is enabled), what type of device it was taken with, the size of the file, and even things like the aperture, shutter speed, and ISO. And because this data is embedded in the photo file itself, if you email the original image to someone else, it travels with the photo so they can see these details, too. (Just be aware that some messaging apps and social platforms strip or compress metadata on upload.)
Metadata for an image can be stored internally (i.e., embedded in the file itself, as shown here) or externally (separate from the file in a database like a Digital Asset Management system—more on these below).
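To make the internal-versus-external distinction concrete, here is a small Python sketch. The field names and the JSON "sidecar" file are our own illustration of external storage, not an actual DAM or IPTC format:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical metadata record for one image (field names are illustrative).
metadata = {
    "title": "Golden retriever running on a beach",
    "creator": "Jane Photographer",
    "copyright": "© 2021 Jane Photographer",
}

# External storage: a "sidecar" file kept alongside the image,
# the way a DAM or database keeps metadata separate from the asset.
with tempfile.TemporaryDirectory() as tmp:
    sidecar = Path(tmp) / "beach-dog.json"
    sidecar.write_text(json.dumps(metadata), encoding="utf-8")
    # Anyone who has the sidecar can read the details back...
    recovered = json.loads(sidecar.read_text(encoding="utf-8"))

# ...but if only the image file is shared, external metadata gets left behind.
# Embedded (internal) metadata avoids that by living inside the image file itself.
```

This is why embedded metadata matters so much for accessibility: it cannot be separated from the image by accident.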
In addition, photo metadata typically covers three categories:
- Descriptive. Title, description, keywords—anything that describes what’s depicted in the asset.
- Rights. The name of the creator, copyright, licensing details, etc.
- Administrative. When and where it was created, usage guidelines, etc.
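The three categories above can be sketched as a simple grouping. Note that the property names below are paraphrased for readability, not the exact IPTC field identifiers:

```python
# Illustrative grouping of photo metadata properties by category.
# Property names are paraphrased, not exact IPTC field identifiers.
PHOTO_METADATA_CATEGORIES = {
    "descriptive": ["title", "description", "keywords",
                    "alt_text", "extended_description"],
    "rights": ["creator", "copyright_notice", "licensor", "usage_terms"],
    "administrative": ["date_created", "location_created", "instructions"],
}

def category_of(prop):
    """Return which category a given property falls into, or None."""
    for category, props in PHOTO_METADATA_CATEGORIES.items():
        if prop in props:
            return category
    return None
```

Notice that the two new accessibility properties sit in the descriptive bucket, alongside titles and keywords.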
As you can see, the metadata that’s automatically included when we take photos with our smartphones is purely administrative because most of the images will only ever be for our personal use. (It’s worth mentioning that with iOS 14 and higher, it’s possible on iPhone to add captions to photos that then map to IPTC’s Title property, but currently iPhones don’t support IPTC in its entirety.)
But the other two categories—descriptive and rights—are really important for assets used professionally. Details like copyright information and licensing/permissions can be included so that it’s clear who the owner is and how the asset is allowed to be used. But more importantly—and here’s where the new IPTC accessibility fields come in—descriptive properties like titles and alternative (alt) text can now also be added.
How Photo Metadata Makes the Internet a Better Place
Most (if not all) of us know what it’s like to shop online: type in your search, scroll through thumbnail after thumbnail until you like what you see, click on that image to get more information, look at the detailed product images, read the description and maybe some reviews, and eventually decide whether you want to purchase that item. But imagine trying to do all of that on a version of the internet that excludes all product images. How would that change your experience? Would you be able to get all the information you need to make a decision? How would you even know which product to click on in the first place?
This, essentially, is how blind and visually impaired folks, as well as people with information processing differences like dyslexia, experience the internet. They rely on text-to-speech technology like screen readers to help them experience what’s on a webpage by listening to all of the text and image content as it is read out loud to them. But this technology is only as good as what’s on a page. If the page contains lots of graphics, buttons, and images that by nature have no text, those items must be skipped over because there isn’t any text for the screen reader to read! Thus all the information that sighted people glean with their eyes is completely lost...UNLESS a website utilizes image description properties, which include Alt Text and Extended Description.
Though similar, the two have slightly different purposes. Alt Text is a short description (typically 250 characters or fewer) of the purpose and meaning of an image. As a bare minimum for accessibility, Alt Text should be added to all images no matter where or how they’re used. Extended Descriptions have no character limit and can be used when the alt text and surrounding text on a page do not or cannot sufficiently describe the image (infographics are a great example). An Extended Description is thus a continuation of the alt text, diving further into the details that convey the purpose and meaning of the image.
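As a rough illustration of that division of labor, here is a hypothetical helper that applies the ~250-character guideline mentioned above. The threshold and the logic are our own sketch, not part of the IPTC specification:

```python
# The "typically 250 characters or fewer" guideline; a convention, not a spec rule.
ALT_TEXT_SOFT_LIMIT = 250

def plan_image_description(full_description):
    """Sketch: a short summary goes in Alt Text; if the full description
    doesn't fit within the soft limit, the detail belongs in an
    Extended Description as well (think infographics)."""
    fits = len(full_description) <= ALT_TEXT_SOFT_LIMIT
    return {
        "alt_text": full_description if fits
                    else full_description[:ALT_TEXT_SOFT_LIMIT],
        "needs_extended_description": not fits,
    }
```

In real workflows a human writer decides where to split; a check like this can only flag descriptions that clearly exceed the budget.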
Thanks to the IPTC’s new standards, both properties can now be added to the HTML code of a website. In addition, if the software or program a photo is added to supports IPTC, the properties can be embedded into the image file so they travel with the image wherever it goes. The image description properties then carry over, ready to be added to the HTML of the next place the image appears. Let us illustrate.
The photo below is from the free image repository Unsplash.
Using ImageSnippets, we wrote and embedded an Alt Text (Accessibility) description for the photo, then using the handy “Get IPTC Photo Metadata Tool,” we were able to view that description simply by uploading the image, as shown in the screenshot below. Now, because it’s embedded in the file, if we then upload it to, say, the Scribely website, that alt text will be there, ready to be read anytime a person who utilizes a screen reader visits the page. As you might imagine, this is a real game-changer for folks living with disabilities!
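On the website side, the embedded description ultimately lands in the img element’s alt attribute. Here is a minimal Python sketch of generating that markup from metadata fields; the aria-describedby pattern shown for Extended Descriptions is one common approach, not an IPTC requirement:

```python
from html import escape

def img_tag(src, alt_text, extended_description_id=None):
    """Build an <img> tag whose alt attribute carries the embedded
    Alt Text (Accessibility) value. If an Extended Description is
    published elsewhere on the page, point to it with aria-describedby.
    html.escape guards against quotes inside the description text."""
    attrs = f'src="{escape(src, quote=True)}" alt="{escape(alt_text, quote=True)}"'
    if extended_description_id:
        attrs += f' aria-describedby="{extended_description_id}"'
    return f"<img {attrs}>"
```

For example, `img_tag("chart.png", "Bar chart of 2021 sales by region", "chart-longdesc")` yields a tag that a screen reader can announce, then follow to the full description.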
Side note: It’s worth a brief mention that Alt Text is also a game-changer for SEO. Search engines like Google and Bing don’t actually see; they simply read code. So if an image-heavy site doesn’t use Alt Text, search engines can’t read or index those images, and therefore they won’t include those images and/or pages in search results. Check out Scribely’s guide for writing alt attributes for a more in-depth discussion of these ramifications.
What Adding Alt Text and Extended Descriptions to Metadata Standards Means for You
Besides increasing the accessibility of the Internet, adding Alt Text and Extended Descriptions to photo metadata standards means less work for you overall. We’ll explain.
Since the beginning of the Internet, accessibility has unfortunately been an afterthought. Most companies and brands didn’t consider it as they were making content; they simply churned out massive amounts of content as quickly as possible, then worried about accessibility after the fact (or, in some cases, not until lawsuits under the Americans with Disabilities Act forced them to care). Since accessibility was never part of the workflow, entire petabytes (1 petabyte = 1,000 terabytes) of content were created that remain inaccessible to large portions of the human population. So how do we make all this data accessible? By going back and retrofitting it—a daunting (and extremely costly) amount of work.
But whatever we create moving forward doesn’t have to be that way. It can be accessible from the very start. This is known as born accessible and can be applied to any digital content. For something to be born accessible, the content creator or publisher needs to build the applicable accessibility features into their content as they publish it. For images specifically, the photographer or photo agency who produces and publishes the image would provide a baseline alt text description (and extended description if necessary) in the metadata. Then, whenever that image is licensed or used by a publisher or individual, the baseline alt text can be quickly and easily adapted to each new context. Thus, it starts a chain reaction that makes image accessibility achievable at human scale.
Boom. No more dumping small fortunes and thousands of hours of manpower into fixing this problem. A small investment into training and updating workflows at the beginning means your content is accessible, search-engine optimized, and scalable up front.
For example, if your graphic designer creates a logo, they simply add the Alt Text when they upload it to your Digital Asset Management (DAM) software and voila! The description is there for everyone always, because it’s now part of the metadata stored in your centralized data system/repository. From there, you can:
- Auto-populate the HTML alt attribute when you publish to your website or digital products;
- Export the image with image descriptions embedded in the image file; and
- Send the image and corresponding metadata to a Content Management System (CMS) integrated with your DAM.
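The first bullet above, auto-populating the alt attribute at publish time, might look roughly like this, assuming a DAM that exposes asset records as dictionaries (the record shape and field names here are hypothetical):

```python
def alt_for_publish(asset):
    """Pull the accessibility description out of a (hypothetical) DAM
    asset record at publish time, falling back to the title so no image
    ships with an empty description by accident. A human still writes
    the text once, upstream in the DAM."""
    for field in ("alt_text_accessibility", "title"):
        value = asset.get(field, "").strip()
        if value:
            return value
    return ""  # caller should treat an empty result as "needs review"
```

The point of the sketch is the workflow, not the code: the description is written once, stored centrally, and then reused automatically everywhere the asset is published.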
Making These Standards Become Truly Standard
All of this sounds wonderful, right? But just because these properties are now part of the metadata standards doesn’t mean they will instantly be everywhere. In order for them to become universal, the companies who provide the software, websites, and DAMs we rely on need to adopt them (major kudos to Tandem Vault for being the first DAM to implement the new standard!). And that only happens when their users contact them asking for the feature to be added (hint hint).
Once that happens, we need to start utilizing them! Whether you’re a freelance photographer working by yourself or one of hundreds of employees at a large company, start adding Alt Text to any new assets you create or use (and encourage others to do so as well). And, even more importantly, start thinking about how to go back and add Alt Text to your entire library. As daunting as it sounds, it will not only help disabled people, it will also make your brand more visible!
If you’re interested in harnessing the power of alt text for your brand/company but not sure where to start, talk to the experts at Scribely about how they can help!