What is real nowadays anyway?

June 9, 2023

Is anyone else feeling like it is increasingly difficult to tell the difference between what is real and what isn’t? The sudden, unprecedented acceleration of artificial intelligence ("AI") has taken the world by storm. Developers have spent years studying programming languages, only for AI software to generate plausible, working code in mere seconds. Journalists who have dedicated years to their craft are now reckoning with the prospect of their jobs being fully automated. Image generation is the most accessible it has ever been - you no longer need to be a Photoshop whiz to create an image of Bill Clinton having tea with Pingu. With AI on the rise, new questions arise every time we browse the web: who created this content I’m seeing? Who wrote this article? Is this a real photo? At a time when the authenticity of news is paramount to avoiding the spread of disinformation, the prospect of not knowing whether something is the real deal has quickly become a reality.

For years the internet has been awash with low-quality websites designed to catch your eye, only for you to realise immediately that you’ve been hoodwinked into opening a web page full of adverts. These are likely to link you to ‘must-see content’ viewable only via an impossibly difficult-to-navigate picture gallery, or an article chopped into so many sections that you feel overwhelmed by the sheer number of adverts. As users become savvier at spotting a disingenuous article, content creators look for new ways to inflict their ‘clickbaitery’ on people.

Now the latest round of attention-zapping skulduggery is hitting our screens thanks to a rise in cheap AI-generated content. In April 2023, NewsGuard identified 49 new websites spanning seven languages that appear to be entirely or mostly generated by artificial intelligence language models. These websites look like typical news sites, but are designed to trick people into thinking they are reading information written by a human. These ‘news’ sites often obscure their ownership details while producing a high volume of content on topics including politics, health, entertainment, finance, and technology. How can users know whom to trust amid this unsettling rise in non-human authorship?

If you have experimented with AI text generation, then you’ll know just how easy it can be. In a few minutes you have what is, for all intents and purposes, an accurate passage of text. But dive deeper into the sources or citations and you realise that these models are only as good as the information they were trained on. The specific output you receive may rest on something the AI has misunderstood, misrepresented, or misquoted.

As well as AI-generated text, photography is also being disrupted by AI. At the 2023 Sony World Photography Awards, the artist Boris Eldagsen entered an AI-generated image and won first prize in the creative category, despite not using a camera to create the image. In May 2023, an image of an explosion outside the Pentagon went viral, only for the US Department of Defense to confirm that it was fake.

Eldagsen’s winning image

Alongside the recent developments in using AI for misinformation, however, comes the flip side of the coin: using AI to help combat fake news. For years fact checkers have been using AI tools to help corroborate claims in the news, and these tools are only getting more sophisticated. Earlier this year, fact checkers in Nigeria used a suite of AI tools from the UK-based organisation Full Fact to help identify and combat misinformation in the lead-up to the presidential election. And after an AI-generated image of the Pope in a puffer jacket went viral, Google announced tools to help better identify AI-generated imagery in its reverse image search.

As we run headlong into this new era of technology, AI will clearly be used to both help and hinder the spread of misinformation. The important takeaway is that, as consumers, we must evolve our behaviour and get to grips with the fact that what we see, read or hear may not be based on evidence, or, even worse, may be created for the sole purpose of deception.

Furthermore is an evidence-led product & service design studio based in London, founded in 2013. We believe that the research and strategy behind great design is just as important as the design itself. By gathering data and insights to inform our decisions, we help organisations create services and products based on evidence, not opinion. A hypothesis-driven design framework helps us achieve this. Please get in touch if you would like to know more.
