Misinformation, brought to you by the tech industry.
Software giant Adobe has been caught selling AI-generated images of the war between Israel and Hamas, as first spotted by Australian media outlet Crikey, a shocking and morally reprehensible example of a company profiting from the spread of misinformation online.
A quick search for the “Israel-Palestine conflict” on the company’s Adobe Stock website (a service that offers subscribers a library of stock photographs and, now, AI-generated images) yields photorealistic, full-resolution images of explosions that resemble the real carnage that has recently been unfolding in Gaza.
Another image shows “a mother and her son in a city destroyed by the conflict between Palestine and Israel,” a devastating scene generated entirely by AI. In fact, it’s one of a series of 33 images with a similar composition.
Yet another shows “destroyed and burned buildings in the city of Israel.”
All of these images appear to have been submitted by Adobe Stock users rather than generated by Adobe itself.
However, despite being technically labeled “AI-generated,” a requirement for all user-submitted works of this kind, some of these images are already circulating elsewhere on the web, as Crikey found, where they can easily fool unsuspecting members of the public.
A reverse image search on Google confirms this: a photorealistic AI image of a massive explosion has already been used by several smaller publications.
After all, without painstakingly examining these images for telltale signs that they were AI-generated, such as misaligned windows or inconsistent lighting and shadows, they can easily pass for genuine photographs.
AI image generators like OpenAI’s DALL-E, Stable Diffusion, and Midjourney have made great technological strides over the past 12 months. Gone are the days of obvious visual glitches and grotesque, mangled anatomy.
As a result, AI-generated images enjoy massive visibility online. Earlier this year, Futurism discovered that the first Google image result for famed realist artist Edward Hopper’s name was an AI fake.
Instead of cautiously venturing into the world of generative AI, Adobe has embraced it with enthusiasm.
It made a splash last month when it brought its generative AI model, called Firefly, out of beta, making it a fully and seamlessly available feature of its widely used Photoshop software. The company has even added a new annual bonus program for Adobe Stock contributors, actively incentivizing them to allow their work to be used to train the company’s AI models.
But that kind of fervor doesn’t benefit everyone. With the way Adobe chooses to market AI-generated images, the company is also actively undermining the work of photojournalists. In many ways, it’s yet another example of AI technologies threatening to drastically diminish the livelihoods of the very people who took the original photographs on which these image-generation algorithms were trained in the first place.
It’s a disturbing and ethically dubious situation, given the grave danger war photographers face in documenting the harsh realities of human conflict.
Worse yet, while images like these can easily spread misinformation, they also actively undermine our trust in the news we read every day.
“Once the line between reality and falsehood is eroded, everything will become false,” Wael Abd-Almageed, a professor at the University of Southern California’s School of Engineering, told the Washington Post last year. “We will not be able to believe anything.”
Learn more about AI images: Google’s top result for ‘Johannes Vermeer’ is an AI imitation