Deepfakes and fake news: The new cyber threats you can’t see coming


An IT manager in Gurugram received a call on an ordinary Tuesday. The caller ID read “Head of Operations”, and the familiar voice on the other end urgently asked for a large sum of money to be sent to an unknown vendor.

He was used to such requests, but this time something felt off: the caller’s tone shifted slightly, and his suspicions were raised at once. He immediately messaged the real Head of Operations to verify. The answer was chilling: “What transfer? I haven’t said yes to any”. This technique is just one way that criminals use deepfake voice fraud to get what they want.

This terrifying incident is not unique. Different parts of India endure these kinds of threats every single day. Just last year, an elderly woman in Delhi fell victim to voice-cloning criminals and lost Rs 50,000.

More recently, an Assamese influencer became a victim of one of the most unsettling deepfake cybercrimes, triggering nationwide concerns about the weaponisation of artificial intelligence, digital identity theft, and the dissolving boundary between reality and fabrication.

What we are witnessing is not simply news. These are shattered lives—collateral damage of artfully crafted deceit.

Gone are the days of clumsy headlines paired with grainy images; in today’s world, misinformation is masked by sophisticated AI technology, especially with Generative Adversarial Networks (GANs).

The emergence of AI has made it easier to generate fake sounds, images, and videos that are remarkably difficult to differentiate from authentic content, even for the most keen observers. The blurred lines between reality and fiction leave us susceptible to hidden dangers.

What if the next call isn’t just about money, but about your reputation, your job, or even your family’s safety?

The digital battleground is shifting, and the enemy wears a mask of authenticity. Because in the age of deepfakes, what you see might very well be the beginning of your undoing.

Understanding deepfake technology

The fundamental concept of deepfakes is based on a GAN, in which one neural network (the generator) produces synthetic content while another (the discriminator) tries to tell it apart from real data. The two are trained against each other until the fakes become convincing enough to fool the discriminator.

To create these forgeries, all that is needed is a large dataset of the person to be imitated—images, videos, or audio samples. The more data available, the more precise the replication of facial expressions, voice, and gestures becomes. The growth of easy-to-use apps has also made basic deepfakes simple to create, which is concerning in India, a country with over 690 million smartphone users.
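The generator–discriminator dynamic can be illustrated with a deliberately simplified sketch. Real GANs are deep neural networks trained by gradient descent on images or audio; the toy loop below (pure Python, with illustrative names like `train_toy_gan` that are not from any real library) only mimics the adversarial structure: the generator adjusts itself whenever the discriminator rejects its output.

```python
import random

# "Real" data: samples from a distribution the generator tries to mimic.
REAL_MEAN, REAL_STDEV = 5.0, 1.0

def real_sample():
    return random.gauss(REAL_MEAN, REAL_STDEV)

def discriminator(x, estimated_mean):
    """Says "looks real" (True) if x lies close to the discriminator's
    current estimate of where real data is centred."""
    return abs(x - estimated_mean) < 2 * REAL_STDEV

def train_toy_gan(rounds=2000, lr=0.05, seed=42):
    random.seed(seed)
    gen_mean = 0.0       # generator starts far from the real distribution
    disc_estimate = 0.0  # discriminator's running estimate of real data
    for _ in range(rounds):
        # Discriminator improves by observing a real sample (running average).
        disc_estimate += 0.05 * (real_sample() - disc_estimate)
        # Generator produces a fake; if it is caught, it nudges its output
        # toward whatever the discriminator currently accepts as real.
        fake = random.gauss(gen_mean, REAL_STDEV)
        if not discriminator(fake, disc_estimate):
            gen_mean += lr * (disc_estimate - gen_mean)
    return gen_mean

# After training, the generator's output centre has drifted toward the
# real distribution's mean of 5.0.
print(train_toy_gan())
```

The same arms-race logic, scaled up to millions of parameters and trained on hours of footage, is what lets production GANs clone a face or a voice.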

The extensive influence of deepfakes in India

The threat posed by deepfakes goes well beyond simple digital deception. In India, they put at risk the nation’s financial integrity, the social trust it rests on, and the country’s overall stability.

Traditionally, audio and video were considered irrefutable proof. However, when a fake voice can trick people into transferring money or a video of a celebrity is digitally altered to send a message that stirs up anger, trust in the media starts to decline.

This “epistemic threat”—where the validity of information is questioned—heightens confusion and increases social division.

Real-world consequences and examples

The results are certainly concerning. In India, criminals have turned to shocking tactics like “digital arrests” and KYC fraud. AI-generated content is spreading like wildfire across the internet, often with little scrutiny. A study released on April 25, 2024, reveals that a remarkable 75% of Indians have encountered deepfake content.

The darker side of the technology can also fuel harassment and defamation. The iconic cricketer Sachin Tendulkar fell victim to one such scheme when his likeness was used in a fraudulent advertisement endorsing an online game.

A shocking number of women in India are enduring emotional distress caused by the widespread circulation of non-consensual fake pornography. The consequences can be severe, ranging from financial devastation to social scorn, and may even incite violence.

The need for vigilance and detection

As deepfakes evolve, detection must keep pace. AI-based detectors are being developed, but human vigilance remains critical.

In India, individuals need to be on high alert for telltale discrepancies, such as:

- unnatural eye blinks or expressions
- strange hand movements or lighting distortions
- audio–video synchronisation problems
- mechanical or flat voice tones

Additionally, verify whether the contents align with the individual’s behaviour or tone. Are the sources trustworthy? Is it credible news from mainstream media or just a dubious WhatsApp message?
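These cues can be turned into a simple triage habit. The sketch below is purely illustrative—the flag names, the `triage` function, and the escalation threshold are assumptions, not a real detection API. It does not analyse media itself; it only tallies warning signs a human reviewer has already noted and suggests a next step.

```python
# Hypothetical triage checklist based on the warning signs discussed above.
# Each flag is something a human reviewer observed in a suspicious clip.
WARNING_SIGNS = (
    "unnatural_blinking",
    "odd_hand_movements",
    "lighting_distortions",
    "audio_video_out_of_sync",
    "mechanical_voice_tone",
    "out_of_character_message",
    "unverified_source",
)

def triage(observed_flags, escalate_at=2):
    """Count recognised warning signs and recommend an action."""
    hits = [f for f in observed_flags if f in WARNING_SIGNS]
    if len(hits) >= escalate_at:
        return "report to platform / fact-checker", hits
    return "verify with the person or a trusted outlet", hits

action, hits = triage({"mechanical_voice_tone", "unverified_source"})
print(action)  # → report to platform / fact-checker
```

The threshold of two flags is arbitrary; the point is that a single oddity warrants verification, while several together warrant a report.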

Fact-checking is absolutely essential. A multitude of tools have emerged to unveil misleading information; nonetheless, the sheer magnitude of fake news presents a formidable challenge.

Users should report dubious content to platforms and fact-checkers. Verifying information is crucial, particularly during significant events such as elections or major festivals.

Government initiatives and future directions

The Indian government has recognised the threat. The Ministry of Electronics and Information Technology (MeitY) has promptly called on social media companies to develop robust detection systems and eradicate harmful practices.

Conversations are intensifying about the need to bolster IT regulations, with proposals put forward to enact laws that would make the creation of harmful deepfakes a criminal offence.

Organisations like CERT-In have raised alarms, and the surge in research on AI-based detection is becoming a hot topic among Indian tech companies and academic circles.

The “zero-trust mindset”

Nevertheless, the most powerful defence is the “zero-trust mindset.” As India’s digital footprint expands, scepticism must become the norm. The significance of digital literacy and public awareness initiatives—reaching from vibrant city hubs to the most remote areas—is immense. Inspiring people to think critically, verify information before sharing it, and reassess what they come across can contribute to a much safer online environment.

Deepfakes pose a considerable and growing danger. It is crucial for us to collaborate, engaging media outlets, technology firms, and governmental bodies to confront this challenge directly. In the current digital environment, the ability to differentiate between truth and deception is not merely a talent but a duty that we all bear.

Neehar Pathare is the MD, CEO, and CIO of 63SATS Cybertech


Edited by Suman Singh

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)


