By Brian Leahy, Senior Visual Designer
As I write this, at the start of the year 2025, it feels fair to presume that most people in the marketing and tech industries have encountered AI-generated visual content, and probably even experimented with a few visual AI tools themselves.
In the spirit of keeping up with advances in my field (graphic design), I have tried a range of these myself, including DALL·E, Adobe Firefly, and the other AI-powered tools Adobe is building into its software, like Photoshop’s ‘Generative Fill’ and Illustrator’s ‘Generate Vectors’ (still in beta at the time of this writing).
I’d say the current visual AI tools fall into one of two camps:
Those trained on material sourced with the creators’ consent and/or compensation, such as Adobe’s tools (presuming their official pronouncement about their AI training is truthful; I have no evidence to the contrary, but then again, it’s increasingly common these days for companies to say one thing publicly while doing the opposite behind closed doors).
Those trained on material without the creators’ consent and/or compensation (DALL·E being a prominent example, despite OpenAI’s reassurances that they are developing ways for creators to ‘opt out’. And don’t get me started on the ethics of placing that onus on the creator of the very work one intends to profit from).
Whether you believe that this new wave of AI visual tech represents the largest, government-permitted theft of intellectual property in history, or that it complies with existing laws and will benefit humanity more than it harms, or you fall somewhere in between those two camps, you likely already have a position on this topic. This article is not intended to alter it.
Rather, this article presumes that these tools will only continue to get better. If so, at some point in the near future, AI-generated visual content will 1) represent a significant portion of the visual content we consume, and 2) become increasingly difficult to distinguish from non-AI-generated content.
That may seem obvious, but it bears repeating, because those two conditions together present a very real danger of misinformation.
Some quick examples that might fall under this broad umbrella term ‘misinformation’:
Incorrect representation of what occurred at an event, i.e. rewriting the truth
Using false media to sway public opinion on a subject, or gain political power
Profiting off of someone else’s work
We can all agree those things sound…not good. But what can be done about it?
A Promising Solution
If you happened to read that Adobe statement about their AI training that I linked earlier, you might have noticed that they mention they are a founding member of the Content Authenticity Initiative, or CAI.
(Curiously, on the CAI’s own website, they describe Adobe as the founder, instead of ‘a founding member’. Perhaps this incongruity is unintentional, but I note it here because I think we should be treating all companies’ statements on the subject of AI with healthy skepticism and cross-checking. Perhaps Adobe’s choice of words is merely meant to foster trust in the CAI as an already-established entity? I could see that as being the case, as I don’t suspect nefarious goals behind Adobe’s moves to spearhead this cause – it makes sense for their product, brand identity and reputation as an industry leader.)
Aside from having an acronym that is comedically easy to mistype as ‘CIA’, the CAI describes itself as follows:
“The Content Authenticity Initiative (CAI) is a cross-industry community of major media and technology companies, civil society, and many others. It was founded by Adobe in 2019, and Adobe continues to lead it. The CAI develops open-source tools for verifiably recording the provenance of any digital media, including content made with generative AI. The community’s mission is to support broad adoption, making content authenticity and transparency scalable and accessible.”
Their vision sounds like a promising solution to the dangers of misinformation in modern visual media. By embedding content credentials in our digital media, we can document when, how, and by whom content was created, and record any subsequent edits and revisions. The CAI aptly likens this information to a nutrition label on food packaging, in the sense that it allows us to evaluate what we are consuming and then make our own decisions about how we use it.
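To make that nutrition-label analogy concrete, here’s a minimal sketch (in Python) of the kind of information such a label might carry. To be clear, the field names here are my own invention for illustration; the actual C2PA specification defines its own manifest format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EditEntry:
    """One revision in the content's history."""
    editor: str          # who made the edit
    tool: str            # software used for the edit
    timestamp: datetime  # when the edit was recorded
    description: str     # what was changed

@dataclass
class ContentCredential:
    """A hypothetical 'nutrition label' for a piece of digital media."""
    creator: str              # who made the original
    created_at: datetime      # when it was made
    creation_tool: str        # how it was made (camera, app, AI model)
    generative_ai_used: bool  # was any content AI-generated?
    edits: list[EditEntry] = field(default_factory=list)

# Example: a photo taken with a camera, later retouched by someone else.
credential = ContentCredential(
    creator="Jane Photographer",
    created_at=datetime(2024, 6, 1, 14, 30, tzinfo=timezone.utc),
    creation_tool="Digital camera",
    generative_ai_used=False,
)
credential.edits.append(EditEntry(
    editor="Sam Designer",
    tool="Photo editing software",
    timestamp=datetime(2024, 6, 3, 9, 0, tzinfo=timezone.utc),
    description="Cropped and color-corrected",
))
```

Even a simple structure like this answers the ‘who’, ‘when’, and ‘how’ at a glance.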
In order for this vision to be successful, however, there are some crucial elements that need to be solved for:
These content credentials need to be easy for the general public to access and understand.
When revisions occur, it needs to be easy to update the content credentials.
Such updates also need to be enforced in some way: either people must be incentivized to make them, and to make them truthfully, or, if new credentials are recorded automatically, that automated process must be accurate.
These credentials need to be perceived as trustworthy by the public, in order for the public to gain value from them.
The credentials need security measures to ensure they cannot be entered or altered falsely, or with malicious intent (see the sketch after this list).
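On the last two points, trust and security, the standard approach is cryptographic signing: the issuer signs the credential, and any later tampering breaks the signature. Here’s a deliberately simplified sketch of that idea using the third-party Python ‘cryptography’ package. This is my own illustration of the general technique, not the actual C2PA signing scheme:

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The credential issuer signs the serialized credential with a private key.
private_key = Ed25519PrivateKey.generate()
credential = {"creator": "Jane Photographer", "tool": "Digital camera"}
payload = json.dumps(credential, sort_keys=True).encode()
signature = private_key.sign(payload)

# Anyone holding the matching public key can check the credential's integrity.
public_key = private_key.public_key()
public_key.verify(signature, payload)  # passes silently: credential is intact

# Change even one field, and verification fails.
tampered = dict(credential, creator="Someone Else")
try:
    public_key.verify(signature, json.dumps(tampered, sort_keys=True).encode())
except InvalidSignature:
    print("Tampering detected: credential was altered after signing")
```

In a real system, the public key would itself need to be distributed through some trust infrastructure, which is exactly the kind of detail the standards work described below has to pin down.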
It’s a substantial list to address, but the CAI’s proposed strategy shows potential. Their stated focus is to explore and develop the open-source tools that people use to create and maintain their content credentials, while simultaneously helping to promote the community and movement necessary to support widespread adoption and use of these credentials.
That still leaves nitty-gritty details to be sorted out, such as what exactly content credentials should entail. The CAI has wisely sought the collaboration of many big players in the digital world and is working in tandem with the Coalition for Content Provenance and Authenticity (C2PA).
The C2PA describes itself as follows:
“A formal coalition dedicated to addressing the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content. It’s a mutually governed standards development organization (SDO) under the structure of the Linux Foundation’s Joint Development Foundation, formed through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic.”
Sounds pretty similar to the CAI, right? Fortunately, the CAI’s FAQ page explains the difference. My understanding is that the C2PA is focused on the establishment, definition, and dissemination of content credential standards, whereas the CAI’s purpose is to build and maintain the tools that people will use to implement the credentials.
Encouragingly, the C2PA also states that they “unify the efforts of the Adobe-led CAI, which focuses on systems to provide context and history for digital media, and Project Origin, a Microsoft and BBC-led initiative that tackles disinformation in the digital news ecosystem.”
I can see the value in having a coalition to coordinate all these groups and directions, and I like that the coalition widens the scope of voices and companies being included in this historically important movement. According to the CAI’s site, collaborators so far include the Associated Press, BBC, Microsoft, The New York Times Co., Reuters, Leica, Nikon, Canon, Pixelstream, Truepic, and Qualcomm. Not a bad start!
Considerations
I mentioned earlier that content credentials will provide the ‘who’, ‘when’, and ‘how’ of a piece of content. But what details do we need to know about a file, exactly, in order to convey those things?
As you might imagine, the vast variety of content out there means that different media will probably require different credentials. Envision what happens when you click ‘Get info’ or ‘Details’ on a file. The window that pops up probably lists things like ‘File type’, ‘Date created’, or ‘Date modified’, and you can typically customize which details are displayed and which are hidden. Now, consider how that concept might play out in practice with content credentials:
Thought experiment #1: If your file is, say, an Adobe Illustrator document, you’re probably not wondering how it was created (unless you open it up and find elements that clearly came from outside the program, like a photo, of course). But if your file is a PDF, it’s immediately less clear how the content inside it was made.
Thought experiment #2: Let’s compare an AI-generated ‘photograph’ of a political figure that has not been edited since its creation with a documentary-style video that was built from authentic footage but quietly edited later by someone other than the original creator, with no mention of the edit in the video’s credits.
For the AI-generated photo, it’d be great to know what generator was used to create it, as well as who created it, so I can determine possible motivations behind it.
The documentary video scenario is a bit more complex. I certainly want to know that the video was edited at a later date – but in this era of file sharing and streaming, a file attribute like ‘Date modified’ cannot always be treated as reliable. (If you’ve ever copied a file from one location to another, you know how quickly well-meaning attributes like that can become useless.) So how do content credential tools establish a trait like this? Perhaps we can compare the software used on the ‘Last edited’ date with the program used to create the original video. Maybe we also display the original creator’s name next to the editor’s name; if the two differ, I could research the second person and take a guess as to whether their edits might be malicious or in line with the creator’s vision.
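One way the tooling could make ‘edited later, by someone else’ reliable is to record the edit history inside the credential itself, with each entry chained to the previous one by a hash, so the history travels with the file no matter how many times it’s copied or re-uploaded. A rough sketch of that idea in Python (my own simplification, not the actual C2PA mechanism):

```python
import hashlib
import json

def add_edit(history: list[dict], editor: str, tool: str, note: str) -> None:
    """Append an edit record chained to the previous record by hash."""
    prev_hash = history[-1]["hash"] if history else "genesis"
    entry = {"editor": editor, "tool": tool, "note": note, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    history.append(entry)

history: list[dict] = []
add_edit(history, "Jane Documentarian", "Video editor A", "Original cut")
add_edit(history, "Unknown Editor", "Video editor B", "Re-edited a segment")

# A viewer can now see that the creator and the later editor differ,
# no matter what the filesystem's unreliable 'Date modified' field says.
print(history[0]["editor"], "->", history[-1]["editor"])
```

Because each entry’s hash depends on the one before it, quietly deleting or reordering parts of the history becomes detectable.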
Thought experiment #3: You’re handed a digital photo – a nature scene with a blue sky – and tasked with turning it into a poster, using the photo as the background. You determine that the composition would work much better if the blue sky continued further up the image than it does in the original photo. The photographer has no issue with you using Photoshop’s ‘Generative Fill’ to extend the sky, but you’d both like to ensure that future viewers of the file can tell this edit was made. If the content credentials simply noted ‘Edit made with Photoshop – AI use – Generative Fill’, that would be better than nothing, but it still leaves future viewers wondering which specific aspects of the photo were edited. Perhaps there’s a text field in the credentials where creators can describe complex processes like this in more detail? How do you ensure they include the important details? Perhaps a ‘fill-in-the-blanks’ style form, like the one sketched below.
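Here’s what one entry from such a form might look like, sketched as a simple Python dictionary. Every field name here is invented for illustration, not taken from any published spec:

```python
# A hypothetical 'fill-in-the-blanks' record for a single AI-assisted edit.
generative_fill_record = {
    "tool": "Photoshop",
    "feature": "Generative Fill",
    "ai_generated": True,
    "region": {                        # which pixels were affected
        "x": 0, "y": 0,
        "width": 3000, "height": 800,  # i.e. the extended sky at the top
    },
    "editor": "Poster Designer",
    "approved_by": "Original Photographer",
    "description": "Extended the blue sky upward to suit a poster layout",
}
```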
As you can see, there are a lot of variables to account for here. Fortunately, this seems like something that can be tested and refined over time.
You’ve Been Invited To Attend
Let us classify the credentials themselves as ‘solved for’, then, in the sense that: 1) most would agree they are a good idea, 2) the current entities involved seem to have a reasonable plan, and 3) the tools they’re developing are open-source and shared in good faith. What, then – if anything – is preventing companies and individuals from adopting them worldwide?
As I see it, the answer is so simple it’s almost a paradox.
The only way for content credentials to work is for people to use them.
This dilemma is a lot like the modern social media landscape. The big social media platforms often make undesirable changes to their apps, as well as questionable decisions related to privacy, marketing practices, data collecting, and the suppression or distortion of specific topics and accounts. Alternative social media options debut and flop annually, and yet, millions upon millions of people continue to use the big platforms. Why? Because that’s where everyone else is. What’s the point of switching to a ‘better’ social media platform if no one you know is on there?
In the same sense, content credentials will only work if we commit to them. Once we all agree to implement and uphold the concept, we can work out the fine details – how exactly they’re implemented, who is governing them, how to ensure they’re trustworthy, etc. – from there.
With that in mind, I invite you to keep the conversation about content credentials going with those around you. Even if you’re not feeling quite ready to join the CAI yet, consider how you might incorporate the concept into your own workflows, and what benefits you might gain. While much remains to be determined about the future of content credentials, the one thing that’s certain is that we should not hesitate to make them a priority.
About the author
Brian Leahy is Senior Visual Designer at Clyde Golden, a lifecycle marketing agency.