
How Are Deepfakes Altering Our Memories?

AI-generated fake videos and images are becoming very common. Should we be worried?

We are living in the internet age. Every day we encounter hundreds of images and videos on social media platforms like YouTube, Instagram, and Facebook. Experts warn that not everything you see on the internet is trustworthy.

Some mind-blowing photos and videos have recently wreaked havoc on the internet. Have you seen the video of Jon Snow’s moving apology for the poor ending to Game of Thrones? Or the photo of Vladimir Putin kneeling to kiss Xi Jinping’s hand, or the pope wearing a puffy jacket? According to ExpressVPN, none of these events ever happened: they are all deepfakes, digitally produced images and videos so natural-looking that it is nearly impossible to tell them apart from the real thing.

Deepfakes are becoming more difficult to detect, which means they can be used to manipulate and alter people’s memories, feeding the phenomenon known as the Mandela Effect. These AI-generated images and videos blur the line between fact and fiction. Deepfakes are becoming very common, and we should be worried about them.

Let’s dive into the details of how this AI technology affects our memories and how deepfakes relate to the Mandela Effect.

What Is The Mandela Effect?

The Mandela Effect refers to a situation in which many people believe an event occurred when it actually did not. The term was coined in 2009 by Fiona Broome after she realised that she, along with a number of others, believed Nelson Mandela had died in prison in the 1980s despite ample proof that he hadn’t. In reality, he was released in 1990, became president of South Africa in 1994, and passed away in 2013.

Broome created a website to document the phenomenon, shocked that such a large number of people could remember the same event in such detail when it never happened.

Since then, the Mandela Effect has been used to describe a wide range of false memories of facts or events, ranging from the storylines of films or television series to historical events.

Some prime examples of the Mandela Effect:

Pikachu

Many people remember Pikachu, the Pokémon character, as having a black-tipped tail. In reality, its tail has always been plain yellow.

Location of New Zealand

Where is New Zealand relative to Australia? Look at a map and you will see it sits southeast of Australia. Yet many people remember it being northeast instead.

What Exactly Causes the Mandela Effect?

Although its exact cause is unknown, numerous theories have been proposed to explain the Mandela Effect. One holds that it is the product of false memories, where people mistakenly recall events or facts because of incomplete knowledge, poor interpretation, or the influence of suggestion.

A more radical idea holds that it is the result of a glitch in the matrix or a parallel universe. This notion, loosely borrowed from quantum physics, suggests that rather than a single timeline of events, alternate realities or universes may exist and occasionally blend with our own.

The Mandela Effect may also be brought on by how our brains are wired. Confabulation is a process in which the brain fills in gaps in our memories to make more sense of them. This isn’t lying, but rather remembering details that never happened.

The Mandela Effect is not a recent phenomenon, but its reach has grown in the digital age. Information is easier to access than ever; unfortunately, the same channels that spread it also spread misconceptions and falsehoods.

Now the big questions are: Can we trust the information we see online? Is it fine to use AI to manipulate images and videos? Is it fine to engage in deepfaking? Let’s discuss this.

The Danger of Deepfake-Derived Memories

As noted above, deepfakes are digitally produced images and videos so realistic that it is nearly impossible to tell them apart from the real thing. The underlying technology is deep learning, a branch of machine learning that involves training artificial neural networks on large amounts of data.
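To make the idea concrete, here is a minimal sketch of the classic face-swap architecture, assuming PyTorch is available. Real deepfake pipelines use much larger convolutional networks, but the core trick is the same: two autoencoders share one encoder, each decoder learns to reconstruct one person’s face, and swapping decoders at inference time maps one person’s expression onto the other’s face. All names and sizes here are illustrative, not taken from any particular tool.

```python
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder          # shared across identities
        self.decoder = nn.Sequential(   # identity-specific
            nn.Linear(128, 64 * 64 * 3),
            nn.Sigmoid(),               # pixel values in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: flattened 64x64 RGB face crops, shape (batch, 12288)
        return self.decoder(self.encoder(x))

# One encoder learns a pose/expression representation common to both faces.
shared_encoder = nn.Sequential(nn.Linear(64 * 64 * 3, 128), nn.ReLU())
model_a = FaceAutoencoder(shared_encoder)  # trained on person A's faces
model_b = FaceAutoencoder(shared_encoder)  # trained on person B's faces

# After training, routing A's face through B's decoder produces the swap:
# B's face wearing A's pose and expression.
face_a = torch.rand(1, 64 * 64 * 3)        # stand-in for a real face crop
swapped = model_b.decoder(model_a.encoder(face_a))
print(swapped.shape)                        # torch.Size([1, 12288])
```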

It is difficult to make a good deepfake on a typical computer. Most are created on high-end desktops with powerful graphics cards, or with access to cloud computing power.

The AI firm Deeptrace found fewer than 15,000 deepfake videos online in September 2019, a figure that had doubled in just the preceding nine months. According to the World Economic Forum, that number is now in the millions, and the number of expertly made deepfakes is growing at a rate of 900% every year, i.e., a tenfold increase annually.

One of the biggest concerns with deepfakes is their use for malicious purposes such as creating fake or hateful news, spreading propaganda, or impersonating someone for monetary gain.

The technology is also being used to create deepfake pornography, which has raised serious concerns about the exploitation of individuals. As Danielle Citron, a professor of law at Boston University, has noted, deepfake technology is being used as a weapon against women.

Deepfakes are dangerous. They have the power to make people believe they have seen something that never actually happened. This is definitely not an ideal scenario in the world we live in.

Now, let’s look at examples of how deepfakes contribute to the Mandela Effect and the dangers they bring:

1) Creating fake news stories

2) Convincing political opinion

3) Driving fake propaganda and campaigns

4) Modifying historical footage and spreading it over the internet

5) Manipulating social media content

6) Creating false proofs (confessions or statements)

7) Creating fake scientific evidence

We live in a world where our perceptions are shaped by the content we see on the internet. We believe what we see. The biggest impact of deepfakes may be the creation of a zero-trust society, in which people can no longer distinguish truth from deception.

So it becomes imperative for us to stay vigilant about the information we consume online. It is vital to detect and prevent the spread of deepfake videos and images.

Here are a few proven ways to spot a deepfake and protect yourself from the spread of doctored content and false memories:

1) Look for unnatural eye movements

2) Notice mismatches in color and lighting

3) Compare and contrast audio quality

4) Watch for strange body shapes or movements

5) Watch for unnatural facial movements

6) Check for abnormal positioning of facial features

7) Look out for awkward posture or physique

8) Check the metadata (see the sketch after this list)

9) Be skeptical of unnatural or uncommon situations

10) Stay updated and beware
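Most of these checks are visual, but the metadata check can be partly automated. Below is a minimal sketch, assuming the Pillow library is installed; the file name `suspect_photo.jpg` is a hypothetical example. Genuine camera photos usually carry EXIF tags such as camera make, model, and capture time, while AI-generated or heavily edited images often carry little or no EXIF data, or tags naming an editing tool. Missing metadata is not proof of a fake, only a reason for extra caution.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path: str) -> None:
    # Image.Exif behaves like a dict mapping numeric tag IDs to values.
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found - treat provenance with suspicion.")
        return
    for tag_id, value in exif.items():
        # Translate numeric EXIF tag IDs to readable names where possible.
        name = TAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")

print_exif("suspect_photo.jpg")  # hypothetical file
```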

Understanding what deepfakes are is one thing; identifying them is another. As technology progresses, deepfakes will evolve too, so it is important to stay aware of best security practices.

Heana Sharma

Heana Sharma: A rising talent, Heana boasts 2 years of versatile content writing experience across multiple niches. Her adaptable skills result in engaging and informative content that resonates with a wide spectrum of readers.
