Push for Digital Standard to Counter Deceptive AI Tools

As the battle against fake content intensifies, a coalition is championing a new digital standard to bring clarity to online authenticity, addressing concerns as AI-generated content blurs reality.

Amid rising concern over the spread of misleading information through AI-driven technologies, a group of companies is rallying to promote content transparency online.

The ascent of generative artificial intelligence has intensified worries about the public's ability to distinguish between factual and fictional content. This issue has gained prominence with the approaching 2024 presidential race, as fears emerge regarding the potential dissemination of deceptive political content.

Generative AI refers to artificial intelligence tools capable of producing diverse forms of content, including text, images, audio, and video, in response to a simple prompt.

The advent of new tools capable of producing hyper-realistic content has paved the way for candidates and their supporters to propagate partisan messages. Instances such as images featuring fabricated depictions of President Joe Biden in a Republican Party advertisement and AI-generated voice impersonations of former President Donald Trump by a political group supporting Florida Gov. Ron DeSantis' White House aspirations underscore the concerns.

To address these challenges, a consortium of companies, operating under the banner of the Content Authenticity Initiative, is working to establish a digital standard aimed at restoring user trust in the content they encounter online.

Truepic's Mounir Ibrahim highlighted the significance of transparency and authenticity in combating the proliferation of misinformation. Truepic's camera technology appends verified content provenance details such as dates, times, and locations to content captured using their tool. This technology currently finds applications ranging from NGOs documenting war crimes to commercial partners, like insurance companies, verifying the authenticity of images depicting damage. However, Ibrahim envisions a potential application for 2024 candidates seeking to validate the credibility of their posted content.

Mounir Ibrahim noted, "Think about the way in which we make our decisions on who we vote for, what we believe: So much of it is coming from what we see or hear online."

Dana Rao, Adobe's chief trust officer and general counsel, underscored the urgency of this initiative, particularly in the context of governments' communication with citizens. He emphasized the surge in governmental online engagement through social media platforms and other digital audio and video content, making it critical to ensure authenticity and transparency.

The Content Authenticity Initiative's envisioned digital standard would facilitate the presentation of "content credentials," encapsulating the complete history of a content piece, including its capture and any alterations. The ultimate goal is to have these credentials accompany content across various online platforms, whether websites or social media outlets.
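The mechanics of such a credential can be sketched in miniature. The snippet below is an illustrative simplification, not the initiative's actual format: it binds a manifest (capture metadata plus edit history) to the exact content bytes via a hash, then signs the whole manifest. Real standards in this space use public-key certificates rather than the shared-secret HMAC used here for brevity, and all names (`make_credential`, `SIGNING_KEY`, the field layout) are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; a real provenance
# standard would use certificate-based public-key signatures.
SIGNING_KEY = b"demo-key-not-for-production"

def make_credential(content: bytes, capture_info: dict, edits: list) -> dict:
    """Build a minimal, illustrative 'content credential': a hash that
    ties the credential to the exact bytes, the capture metadata, the
    ordered edit history, and a signature over all of it."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "capture": capture_info,   # e.g. date, time, location, device
        "edit_history": edits,     # each alteration, in order
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(content: bytes, manifest: dict) -> bool:
    """Check that the content bytes match the credential and that the
    manifest itself has not been altered since it was signed."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        signature, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    )
    hash_ok = claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    return sig_ok and hash_ok

photo = b"...raw image bytes..."
cred = make_credential(
    photo,
    {"date": "2023-08-01", "device": "verified-camera"},
    [{"tool": "crop", "when": "2023-08-02"}],
)
print(verify_credential(photo, cred))         # True: content intact
print(verify_credential(photo + b"x", cred))  # False: content was altered
```

Because the signature covers the edit history as well, quietly rewriting the manifest (say, backdating the capture) also fails verification, which is the transparency property the initiative is after.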

Rao emphasized, "The key part of what we're offering is this is a solution to let you prove it's true. And that means the people who are using content credentials, they're trying to tell you what happened. They want to be transparent."

The idea is that users should have access to this information so they can make informed decisions about the credibility of the content they encounter.

Both Rao and Ibrahim acknowledged that while malicious actors might sidestep these standards, the hope is that creators will widely adopt them, allowing authenticated content to stand apart.

Adobe has engaged in constructive dialogues with social media platforms, although none have formally joined the Content Authenticity Initiative or agreed to enable users to display the new content credentials on their platforms.

Meta (owner of Facebook and Instagram), TikTok, and X (formerly Twitter) have not yet responded to requests for comment.

Hany Farid, a computer science professor at the University of California, Berkeley, specializing in digital forensics, emphasized that content credentials are a freely accessible open-source technology that companies can readily implement. He noted the potential erosion of information ecosystems due to the rise of generative AI, posing a substantial threat to information integrity.

Farid expressed concerns about the cascading effects of the manipulation of digital content, particularly in electoral contexts, potentially endangering democratic processes. However, he expressed optimism that ongoing discussions, not just with technology companies but also with lawmakers, will usher in comprehensive industry changes.

Farid stated, "I think our regulators are asking a lot of good questions, and they're having hearings, and we're having conversations and we're doing briefings and I think that's good. I think we have to now act on all of this."