Microsoft introduces new tool that can recognize and catch a Deepfake


Microsoft has developed new software that can catch deepfake images. A deepfake is a computer-generated image or video that, at least to the human eye, looks completely real. With recent advancements in machine-learning algorithms and AI-powered GANs, deepfakes are getting harder and harder to catch, which poses a real threat to cybersecurity.

In today’s age, identity theft is not limited to physical interactions. Earlier, hackers tried to steal information such as bank account numbers, credit card PINs, and email passwords. They used techniques like phishing, making copies of important web pages such as bank portals and email providers, to scam people. As technology advances, hackers are getting smarter too.

While that threat still stands, we now face newer problems. Today most of our phones, laptops, offices, and even homes rely on biometrics: the human voice, fingerprints, and the face. These traits unlock our phones, authorize transactions, and authenticate many of our daily tasks. We use biometrics because they are unique to each of us and, we believed, cannot be cloned. Deepfakes challenge that belief.

Image: MIT Technology Review

The new software from Microsoft analyzes video frames and pictures and grades them on a scale. The resulting score indicates how likely the image or video in question is to be authentic. Microsoft hopes this new tech will help “combat disinformation.”
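Microsoft has not published its model, so the following is only a minimal sketch of the scoring interface described above: assume some classifier returns a per-frame authenticity score in [0, 1], and the clip’s overall grade aggregates the per-frame scores. The `score_video` helper and the threshold value are illustrative assumptions, not Microsoft’s actual algorithm.

```python
from statistics import mean

def score_video(frames, score_frame, threshold=0.5):
    """Grade a clip from per-frame scores; returns (grade, flagged).

    score_frame: a stand-in for the real classifier (unpublished);
    threshold: an assumed cutoff below which the clip is flagged.
    """
    grade = mean(score_frame(f) for f in frames)
    return grade, grade < threshold

# Toy stand-in: pretend each frame carries a precomputed classifier score.
fake_clip = [{"score": 0.2}, {"score": 0.3}, {"score": 0.1}]
real_clip = [{"score": 0.9}, {"score": 0.8}]

grade, flagged = score_video(fake_clip, lambda f: f["score"])
print(round(grade, 2), flagged)  # 0.2 True
```

Averaging is only one possible aggregation; a real system might instead flag a clip if any single frame scores poorly.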

However, even with the new detection algorithm, experts are concerned that it will become outdated quickly, because deepfake technology is now advancing faster than ever. Microsoft has another plan in place to counter this problem: the company is working on a process by which creators can attach a secret code to their footage, so that if the content is tampered with, it will be easily flagged.
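One common way to implement the “secret code” idea is a keyed hash: the producer computes a digest over the raw footage bytes and publishes it alongside the video, and any later edit changes the digest, so tampering is flagged on verification. Microsoft has not disclosed its scheme in this detail, so the key name and helper functions below are purely illustrative.

```python
import hashlib
import hmac

SECRET_KEY = b"producer-signing-key"  # assumed key held by the producer

def sign_footage(footage: bytes) -> str:
    """Return a hex digest certifying the footage bytes (hypothetical helper)."""
    return hmac.new(SECRET_KEY, footage, hashlib.sha256).hexdigest()

def verify_footage(footage: bytes, signature: str) -> bool:
    """True if the footage still matches the published signature."""
    expected = sign_footage(footage)
    return hmac.compare_digest(expected, signature)

original = b"\x00\x01\x02 raw video frames ..."
tag = sign_footage(original)

assert verify_footage(original, tag)             # untouched footage passes
assert not verify_footage(original + b"x", tag)  # any edit is flagged
```

A production scheme would likely use public-key signatures instead of a shared secret, so anyone can verify footage without being able to forge the tag.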

Developments in Deepfake Detection:

Deepfakes first appeared a few years ago, when developers used advanced AI techniques to create face-swapping software. These programs take two inputs: a source video, and pictures of the person whose face you want in the generated video. The program uses the still images to replace the face of the person in the source video, and can even recreate facial expressions for lip-syncing and other subtle human motions.
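The face-swap loop described above can be sketched conceptually: for each frame of the source video, locate the face region and replace it with the target person’s face, then reassemble the frames. The stub detector and fixed-size patch below are toy assumptions; real tools use landmark detection and a generative model to match pose, lighting, and expression.

```python
def detect_face(frame):
    """Stub detector: returns (row, col, height, width) of the face box.

    Assumed fixed for this toy example; real software locates the face
    per frame with landmark detection.
    """
    return (1, 1, 2, 2)

def swap_faces(source_frames, target_face):
    """Replace the detected face region in every frame with target_face."""
    swapped = []
    for frame in source_frames:
        r, c, h, w = detect_face(frame)
        new_frame = [row[:] for row in frame]  # copy so the source is untouched
        for i in range(h):
            for j in range(w):
                new_frame[r + i][c + j] = target_face[i][j]
        swapped.append(new_frame)
    return swapped

# Three 4x4 "frames" of brightness values; a 2x2 target face patch.
video = [[[0] * 4 for _ in range(4)] for _ in range(3)]
face = [[9, 9], [9, 9]]
result = swap_faces(video, face)
print(result[0][1][1], result[0][2][2])  # 9 9
```

The point of the sketch is the structure of the pipeline (detect, replace, reassemble), not the pixel copy itself, which in real systems is a learned blend rather than a hard paste.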

Nowadays face swapping is even more optimized and no longer needs multiple photo samples or immense computing power. Numerous applications perform this procedure even on smartphones. Proper regulation and detection are therefore more important than ever for public safety.

Sophisticated deepfake technology poses numerous threats. While it may have some entertainment value, it can be seriously misused. People can make fake videos that look authentic in order to threaten those in power. Imagine political rivals releasing fake videos of a person saying the exact opposite of what they stand for.

Moreover, most modern smartphones and other devices rely on facial recognition, and deepfakes open a new risk in that department too. People can now also fake voices using AI algorithms. Imagine the effect on a singer if someone released a fake song in their name.

Image Credit: popularmechanics.com

To counter these problems, tech giants like Apple and Microsoft have been at work for a long time. Deepfakes could undermine many of the security systems deployed in important places: people could use fake identities at airports, banks, and elsewhere.

The problem, however, is that no matter how good detection gets, the race will continue: for every fake that is caught, more sophisticated and improved deepfakes will follow.

