AI-generated images depicting the Israel-Hamas conflict have emerged on Adobe's stock image platform. Sold alongside authentic photographs, these AI creations are sometimes indistinguishable from the real thing, and some have already been used by news publishers without clear labeling, creating a risk of misinformation.
Adobe began accepting AI-generated images on its stock platform last year, allowing contributors to earn a share of revenue from their sales. A search for "Israel-Palestine conflict" on Adobe Stock now returns nearly 1,000 results.
While Adobe requires generative AI content to be labeled, the labeling may not be prominent enough: the "Generated with AI" tag appears only when users click through for more information, so buyers can download and reuse AI-generated images without ever noticing the disclosure.
Reports indicate that news outlets have already published some of these AI-generated images without acknowledging their AI origin. Readers may mistake such images for genuine photographs, adding to the misinformation surrounding the conflict.
In response, Adobe emphasized its requirement that all generative AI content be properly labeled, saying transparency is crucial for customers to distinguish between conventionally sourced and AI-generated images.
The incident adds to growing concerns about AI-generated content's role in online disinformation. The use of realistic AI-generated images, especially in sensitive contexts like conflicts, raises ethical questions and emphasizes the need for clear identification.
The situation echoes earlier incidents, such as Amnesty International's use of AI-generated images to depict protests in Colombia. As AI-created content grows more sophisticated, addressing its potential misuse becomes increasingly urgent.
In the era of online disinformation, the intersection of AI-generated content and real-world events demands heightened awareness and measures to preserve the integrity of information.