Scribely’s Chief Product Officer, Erin Coleman, comes to Scribely from Google, where she led cross-functional research investments impacting business strategy, product development, and service design. Her work has spanned venture firms, financial services, and emerging technologies. Erin’s studies focused on writing as design, and she is certified in mindful business administration as well as product development and digital wellness. She lives in Maine and New York City.
Read on to learn why Erin values image descriptions and why she’s excited about the future.
What makes you passionate about image description work?
Image descriptions are an undervalued piece of data that powers the capabilities of visual assets. Quality image descriptions allow you to see with words, and they turn images into natural language processing opportunities.
As a function, image descriptions reveal the image through words, through language. When an image is described in text, it takes on a different form, offering new utility and functionality.
The internet is becoming more visual. Today, however, image description data like alt text is scarce, so images fall short of their full digital potential and create barriers to access for both humans and computers. I'm passionate about creating workflows to change this.
Why are you excited about the future of image descriptions?
Descriptive metadata allows images and visual content to be effective in different ways, and to serve different purposes, than they can in pixel or vector form. Image descriptions turn an image into a text-based data input. At the text level, an image can be explained, indexed, and consumed through language processing, making it interpretable to humans, algorithms, and AI.
In a text-based format, an image can (as the sketch after this list illustrates):
- Reach a wider audience through assistive technology
- Improve search accuracy and relevancy through content and context indexing
- Assist AI through natural language object detection and recognition
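To make the first two points concrete, here is a minimal sketch in TypeScript; the ImageAsset type and function names are illustrative assumptions, not Scribely's implementation. A single description string doubles as alt text for assistive technology and as a text index for search.

```typescript
// Minimal sketch: one description string serves assistive technology and search.

interface ImageAsset {
  src: string;
  description: string; // human-written image description
}

// Render the image so screen readers can announce the description as alt text.
function renderAccessibleImage(asset: ImageAsset): HTMLImageElement {
  const img = document.createElement("img");
  img.src = asset.src;
  img.alt = asset.description;
  return img;
}

// Reuse the same description text as a simple keyword index for search.
function searchImages(assets: ImageAsset[], query: string): ImageAsset[] {
  const q = query.toLowerCase();
  return assets.filter((asset) => asset.description.toLowerCase().includes(q));
}
```

Once the description exists as data, the same text can feed search engines, analytics, or language models without touching the pixels.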
If images have well-managed, high-quality image descriptions:
- Assistive technology users gain access to the content and contextual meaning of images.
- Analyst teams can integrate descriptive datasets into their systems to analyze images and gain insights into trends, customer preferences, and market demand.
- Product development teams can use image descriptions to power AI design assistants for image-to-design concept generation.
- Personalized experiences become possible by using the descriptions of images a user has liked, shared, or saved to make recommendations based on visual preferences.
- Machine learning and AI become more capable when accessing image descriptions for image classification, object detection, and image generation.
What is one tip you'd share with a new client trying to tackle image description solutions, like alt text, for their company?
Set up a dedicated workflow. Image description creation and data management need domain knowledge, operational support, and defined processes. For high-quality outcomes, an engaged team should own the end-to-end process, programmatically managing the workflow from creation to deployment and continuously optimizing for quality.
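As a rough illustration of what an end-to-end process can look like, the TypeScript sketch below models hypothetical workflow stages; the stage names and types are assumptions, not a prescribed process.

```typescript
// Hypothetical sketch of an end-to-end image description workflow.
// Stage names and types are illustrative, not a prescribed process.

type WorkflowStage = "creation" | "review" | "deployment" | "quality_feedback";

interface WorkflowItem {
  imageId: string;
  description: string;
  stage: WorkflowStage;
}

// Advance an item through the workflow; quality feedback loops back to creation.
function advance(item: WorkflowItem): WorkflowItem {
  const next: Record<WorkflowStage, WorkflowStage> = {
    creation: "review",
    review: "deployment",
    deployment: "quality_feedback",
    quality_feedback: "creation", // feed learnings back into new descriptions
  };
  return { ...item, stage: next[item.stage] };
}
```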
How do you see new technologies, like AI, impacting image description metadata?
Tools like generative AI can create efficiency and scale if properly managed within an image description workflow. When using generative AI for image description work, you have to understand which part of the workflow the AI is affecting and manage that impact so the AI contributes to a quality outcome.
When using AI in your image description work:
- Know how to train AI to write in your brand's voice, context, and style to meet the needs and expectations of your audience.
- If AI is writing, make sure review and assessment are part of your workflow (see the sketch after this list).
- If AI is aggregating images, maintain influence over how the AI organizes and structures your image data.
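As an illustration of that review step, here is a hypothetical TypeScript sketch; the DescriptionRecord type and reviewDraft function are assumptions, not part of any specific tool. An AI draft only becomes deployable alt text after a reviewer approves or edits it.

```typescript
// Hypothetical sketch of a human-in-the-loop review step for AI-drafted
// image descriptions; only approved text is deployed as alt text.

type ReviewStatus = "draft" | "approved" | "needs_revision";

interface DescriptionRecord {
  imageId: string;
  aiDraft: string;      // description text proposed by a generative model
  finalText?: string;   // text cleared for deployment
  status: ReviewStatus;
}

// A reviewer checks the draft against brand voice and factual accuracy,
// optionally edits it, and marks it approved or sends it back.
function reviewDraft(
  record: DescriptionRecord,
  approved: boolean,
  editedText?: string
): DescriptionRecord {
  if (approved) {
    return { ...record, finalText: editedText ?? record.aiDraft, status: "approved" };
  }
  return { ...record, status: "needs_revision" };
}
```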
Find Erin on LinkedIn here.