Deepfakes are harder to spot now

What are deepfakes?

The term is generally traced to a Reddit user who posted face-swapped videos under the name "deepfakes" in late 2017. Deepfakes are digital fakes that have the same characteristics as high-quality recordings.

The difference is that the person shown never gave permission for their likeness to be used in the fake recording.

A video of former Defense Secretary James Mattis, for example, showed iced tea pouring from his mouth as he talked about how war doesn't solve anything and isn't our way of life.

How are deepfakes created?

Deepfakes are typically created with deep-learning models that learn to map one person's face or voice onto another recording, often pairing the manipulated video with a computer-generated voice. CNN's Don Lemon was the latest target of a deepfake, with his face blurred and a computer-generated voice reading "I have heard your concerns and I am listening."

The quality of a deepfake depends heavily on the source material used to create it. To demonstrate what they can look like, an early example was created using Nicki Minaj's "Anaconda" video. It's very easy to spot.
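A common face-swapping approach trains one shared encoder with a separate decoder per person, so a face encoded from person A can be decoded as person B. The sketch below illustrates only that architecture with toy data and single linear layers; real tools use deep convolutional networks, and all the names and sizes here are illustrative assumptions, not any particular app's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale images for two people, A and B.
faces_a = rng.random((20, 64))
faces_b = rng.random((20, 64))

# One shared encoder and two person-specific decoders (linear layers
# only - a conceptual sketch, not a working face swapper).
W_enc = rng.standard_normal((64, 16)) * 0.1    # shared encoder
W_dec_a = rng.standard_normal((16, 64)) * 0.1  # decoder for person A
W_dec_b = rng.standard_normal((16, 64)) * 0.1  # decoder for person B

def train_step(x, W_dec, lr=0.01):
    """One gradient step minimising mean-squared reconstruction error."""
    global W_enc
    z = x @ W_enc        # shared encoding
    x_hat = z @ W_dec    # person-specific reconstruction
    err = x_hat - x
    W_enc -= lr * (x.T @ (err @ W_dec.T)) / len(x)
    W_dec -= lr * (z.T @ err) / len(x)
    return float(np.mean(err ** 2))

# Alternate training on both identities so the encoder stays shared.
losses_a, losses_b = [], []
for _ in range(300):
    losses_a.append(train_step(faces_a, W_dec_a))
    losses_b.append(train_step(faces_b, W_dec_b))

# The "swap": encode one of A's faces, decode it with B's decoder.
swapped = (faces_a[:1] @ W_enc) @ W_dec_b
print(swapped.shape)  # (1, 64)
```

The key design point is the shared encoder: because both decoders read the same latent representation, whatever the encoder captures about pose and expression from person A gets rendered in person B's appearance.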

Why is it called a deepfake?

The name combines "deep learning" and "fake." An early desktop tool called FakeApp popularized the technique by making it easy to manipulate faces in photographs and videos.

What do they use it for?

Well, the original intention behind FakeApp was to enable video artists to create video mashups and new music videos. Something like this would allow people to insert their faces into a movie scene or a music video.

How to identify a deepfake?

Look for inconsistencies in the eyes, the mouth, and the tone of voice, then assess whether the image as a whole looks real.

Despite their increased sophistication, deepfakes can still be detected, according to Norton.

One of the most straightforward methods is to right-click the image and select “search the web for an image.”

A search like this may turn up other pages that use the same image but link it to a different name or profile.
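Reverse image search engines rely on image fingerprints that survive recompression and resizing. As a hedged illustration of the idea (not how any particular search engine actually works), here is a minimal average-hash fingerprint in NumPy: a reused copy of an image stays close to the original in Hamming distance, while an unrelated image does not.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Block-average the image down to hash_size x hash_size, then
    threshold each cell against the mean -> a 64-bit fingerprint."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = (img[:bh * hash_size, :bw * hash_size]
             .reshape(hash_size, bh, hash_size, bw)
             .mean(axis=(1, 3)))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of fingerprint bits that differ."""
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(1)
original = rng.random((64, 64))
# A re-uploaded copy with mild noise, standing in for recompression.
copy = np.clip(original + rng.normal(0, 0.02, original.shape), 0, 1)
unrelated = rng.random((64, 64))

d_copy = hamming(average_hash(original), average_hash(copy))
d_unrel = hamming(average_hash(original), average_hash(unrelated))
print(d_copy, d_unrel)  # small distance vs. roughly half the 64 bits
```

Because the hash depends only on coarse brightness structure, small pixel-level changes flip few bits, which is exactly the property a "find this image elsewhere" search needs.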

It’s also a good idea to double-check the name’s credentials, education, and professional experience.

Other approaches to detecting deepfakes require a close examination of the image, looking for any elements that appear out of place.

Examples include unnatural colouration, a lack of emotion, or emotion that looks forced. Hair is one of the most difficult things for a computer to reproduce, so it's a good area to concentrate on.

What can be done to stop them?

It's tricky, but audio analysis could help.

When someone records their voice on a smartphone camera, the recording can be analyzed and used as data for building a fake voice; the same kinds of acoustic features can, in principle, also be used to spot one.
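Real voice-analysis systems combine many acoustic features; as a toy illustration of just one, the sketch below computes the spectral centroid (the frequency "center of mass") of a signal with NumPy. The feature choice and the synthetic test tones are assumptions for demonstration, not a description of any deployed detector.

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency of a signal - one simple
    feature a voice-analysis pipeline might track among many."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float((freqs * spectrum).sum() / spectrum.sum())

sr = 16_000
t = np.arange(sr) / sr                    # one second of samples
tone_low = np.sin(2 * np.pi * 220 * t)    # 220 Hz test tone
tone_high = np.sin(2 * np.pi * 880 * t)   # 880 Hz test tone

c_low = spectral_centroid(tone_low, sr)
c_high = spectral_centroid(tone_high, sr)
print(c_low, c_high)  # close to 220 and 880 respectively
```

A cloned voice that fails to match the speaker's usual distribution of features like this one is a candidate for flagging, though robust detection needs far richer models.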

In a BuzzFeed article we saw former US President Barack Obama talking, but it was a deepfake. If we hadn't mentioned it, you wouldn't really know the difference. We're in an era in which our enemies can make anyone say anything at any point in time. Check out the video below.