As marketers use AI to develop images, understanding how to do so effectively (and legally) is a key challenge. To better understand how AI is changing image development and best practices, I sought insight from Grant Farhall, Chief Product Officer at Getty Images, a leading visual content creator and marketplace with three brands: Getty Images, iStock and Unsplash.
Getty Images
As background on Getty Images, Farhall points out that “Getty Images is in the business of visual storytelling. As a leading visual content creator and marketplace, our brands deliver impactful visuals to help any brand, business or organization communicate more effectively with their target audience and inspire that audience to take action. For nearly 30 years, we’ve covered global events … and focused important conversations on the images we capture around the world, enabling fast and accurate visual reporting of the events that drive the news cycle. We also maintain one of the largest and best private archives in the world, filled with hundreds of millions of unique visual assets dating back to the dawn of photography.”
How AI is changing the development and use of images
Farhall suggests that compelling visual content is critical for marketers to connect brands with their audiences and that “Generative AI offers another opportunity to create those visuals in specific and appropriate contexts. However, the basic essence of the creative process remains unchanged; talented individuals, equipped with the right tools, are ultimately responsible for bringing new ideas to life. Generative AI is another instrument to help them channel their uniquely human creativity, like a new brush and canvas in their hands.
While there are changes, much actually remains the same. To connect with audiences, brands must navigate an increasingly visually cluttered landscape, and they must do so efficiently and at scale. Generative AI is one of many tools marketers can use to achieve this, and in some exciting and innovative ways, but these new opportunities come with potential challenges.”
The benefits offered by AI during image development
Farhall acknowledges that AI presents significant benefits. “Generative AI allows users to create images that would be too difficult or impossible to capture by traditional means. And there have been wonderful, visually stunning examples of this. But there are also many examples of low-quality images that are derivative of pre-existing ideas, including blatant copies of images created without AI. After all, a ‘quality’ image generated by AI is one that helps you reach and communicate with your audience and is trained on quality, ‘clean’ data that is fully permissioned and carries no risk of infringing IP. Consumers shouldn’t have to choose between creating quality AI visuals and legal certainty; they should seek both.
To that end, the advent of AI image generation requires us to think differently about how to sustain a thriving future for creators. AI is an exciting tool with a growing number of use cases, but the authenticity, diversity, creativity and quality of human-generated work are inimitable and necessary to support effective AI models. In training our generative AI model, Getty Images ensures that creators who have contributed to the dataset are compensated for their work on an ongoing basis. AI services that have built their products on scraped data put the rights of artists and IP holders at risk. The potential erosion of these rights has immediate and long-term implications for the wider creative economy; without these rights, we limit the opportunity for people to conceive new ideas and be properly rewarded for them.”
Challenges presented by AI during image development
With the rise of AI, Farhall points out that “we now live in a world where we can’t always be sure whether the photos and videos we come across are real or not. This has serious implications for brands as they seek to build and sustain trust with their customers, especially where authenticity lies at the core of brand identity. Brands need to be careful about when and how they use AI and the level of transparency they provide around it. Artificial intelligence is not new, but it has never been more accessible, and everyone is struggling with the right way to use it.
Furthermore, image generators and other AI tools pose the greatest risk to businesses when they are not commercially safe and are not built on a clean underlying model. A commercially safe AI tool is one that allows marketers to freely use the content they create to market a product without any downstream legal risk. This means the client has a license to use the image commercially and legal indemnification to protect them.
In the context of image generation, commercially safe AI tools are those that have not been trained on any copyrighted material or known likenesses, so users cannot face legal exposure for copyright infringement.
Most AI content generators can’t claim that. Many are trained on scraped data, which has been pulled from the internet without legal consent, or on synthetic data, which is generated by other AI tools that may themselves have been trained on copyrighted material. That’s why brands need to be especially careful and demand full transparency from any third-party AI companies they work with. Training on public domain datasets may sound fine on the surface, but users should seek full clarity on what those datasets contain and the risks they pose. That said, training on a clean dataset is not the only factor in commercial safety. Legal indemnification is central to the commercial safety equation and should extend to the generation, download and use of content produced by a given tool. Otherwise, you still have holes in your proverbial ship.
When working with an AI vendor to use a third-party tool or AI-generated assets, businesses must hold those vendors to the highest standards. This requires a thorough assessment of their commercial safety. Marketers should ask vendors what data the model was trained on and how; what usage rights apply; what legal indemnification is provided; and what a user would have to do with an image to lose legal coverage. With a commercially safe tool, they have the confidence to experiment with AI and unlock creativity without worrying about potential legal issues down the line.
At Getty Images, we’ve created an AI tool that exists in a world without known brands and likenesses—it’s impossible for users to infringe on intellectual property because the AI isn’t trained on any existing IP. We’ve built it only on our licensed, pre-shot creative library – what most people know as our “stock” library – and automatically include indemnification and ongoing worldwide usage rights for visuals created or modified using our AI services. You don’t need to ask for the asset to be reviewed or cleared because we know the content is legally safe.”
Best practices for CMOs when using AI to develop images
Farhall suggests that the starting point is to use “commercially safe” AI tools, as they “can help brands create at a higher level in line with their unique needs, but they are not a substitute for authentic, real-life imagery. CMOs and their marketing teams need to determine if AI is the right tool for the job based on the audience they are trying to reach and the message they want to convey. Understanding when AI-generated content is the right fit – and, perhaps most importantly, when it isn’t – can help marketers protect the reputation of their brands and maintain trust with consumers.
Where brands have built trusting relationships with their customers based on authenticity, there can be high risk in using AI visuals, or in not being transparent about that use. Consider how your audience would react if they found out you were using AI to communicate with them, and ask yourself to what extent you are comfortable being transparent about it. If you are too concerned about your customers’ possible reaction, it is worth questioning whether AI is the right tool for that project.
Marketers should also use pre-shot or commissioned content where they must disclose the ages or identifying information of the models in the images; for example, alcohol brands have to confirm that all models used in a marketing campaign are over the age of 25, and you can’t confirm this for an AI-generated visual. Conversely, we can provide detailed model releases for all pre-shot creative content, including age confirmation of the people in those images.
Prompting an AI generator takes time and creative thinking. Users and brands must ensure that their AI creations remain entirely their own and will not be recycled into the training dataset for the tool they use or offered as creative assets for sale to other users. This keeps the integrity of their brand intact, as well as the integrity of their hard work.
Marketing teams should also consider transparency when it comes to results. According to Getty Images’ VisualGPS data, 87% of consumers believe brands should disclose whether an image was created by AI. However, there are currently no laws in place requiring this, leaving it up to brands and marketers to use their best judgment and practice responsible disclosure until regulations can match the pace of AI tool development.”
Overall, Farhall’s general advice is to be “extremely discerning about the AI tools you’re using and work with AI vendors who are completely transparent about their data and training processes, usage rights and legal indemnification. Brands must be able to create in ways that save them time, money and risk, and there must not be a trade-off between creativity and protection.”