In India, there has recently been growing public discourse around deepfakes, particularly after viral deepfake videos of Prime Minister Narendra Modi, cricket icon Sachin Tendulkar, and Bollywood actress Rashmika Mandanna. India’s Union Minister for Electronics & Information Technology (MeitY), Rajeev Chandrasekhar, has already characterized deepfakes as a “more dangerous and damaging form of misinformation”.
The ongoing 2024 Indian general election has been deeply affected by the widespread use of deepfake technology, as political parties and their affiliated organizations deploy AI-generated videos to sway voter perception and spread misinformation. On the one hand, circulating fabricated videos helps political parties popularize narratives in their favour. On the other, the trend has sparked concerns about the integrity of the electoral process, as fabricated endorsements and deceptive content proliferate on social media platforms, undermining trust in a free and fair election.
There are many instances of deepfake technology influencing the 2024 general election. In one, an Instagram page uploaded deepfakes of world leaders, including former U.S. President Donald Trump and North Korean leader Kim Jong-un, saying “Jai Shree Ram” after the Ram Mandir inauguration in January 2024. Another deepfake altered Prime Minister Narendra Modi’s speech to praise businessman Gautam Adani and was shared by the Indian National Congress’ Uttar Pradesh Instagram page. These alarming examples highlight the pervasive influence of deepfake technology in manipulating public discourse and political narratives, and they underscore the urgent need for robust regulatory measures to curb the spread of false information and safeguard the integrity of India’s democratic processes. But first, we need to understand what a deepfake is.
What is a Deepfake?
A deepfake is media produced with Artificial Intelligence (A.I.) that manipulates videos, images, or audio to make them look real when they are not. The technique was first used for harmless fun but soon raised severe concerns about spreading misinformation and manipulating people. The term emerged in 2017, when a user on the social media platform Reddit named “deepfakes” posted explicit videos of celebrities. Deepfakes typically rely on Generative Adversarial Networks (GANs), a type of machine learning in which two models are trained against each other: a generator that creates or alters content, and a discriminator that tries to tell real content from fake. Trained on existing data such as videos or images, a GAN can copy movements, facial expressions, and other details closely enough to make fake media look convincing, blurring the line between what is real and what is not. Deepfakes usually require large amounts of data, often scraped from the internet or social media without the subjects’ permission.
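The adversarial training idea behind GANs can be illustrated with a toy sketch. This is an assumption-laden illustration, not deepfake software: instead of faces, the “real” data are just numbers drawn from a normal distribution, the generator is a tiny linear model, and the discriminator is a logistic classifier. The same push-and-pull dynamic, with the generator learning to fool the discriminator, is what full-scale deepfake systems exploit on images and video.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# "Real" data the generator must learn to imitate: samples from N(4, 1).
# In a video deepfake, this would be a distribution of real face images.
def real_samples(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = a*z + b, mapping random noise z to "fake" samples.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), the probability that x is real.
w, c = 0.0, 0.0

lr = 0.05
for step in range(2000):
    # --- Discriminator update: push D(real) up and D(fake) down ---
    x_real = real_samples(64)
    z = rng.normal(0.0, 1.0, 64)
    x_fake = a * z + b
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and c
    gw = np.mean(-(1 - p_real) * x_real + p_fake * x_fake)
    gc = np.mean(-(1 - p_real) + p_fake)
    w -= lr * gw
    c -= lr * gc

    # --- Generator update: push D(fake) up (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, 64)
    x_fake = a * z + b
    p_fake = sigmoid(w * x_fake + c)
    du = -(1 - p_fake)       # gradient of -log D(fake) w.r.t. w*x + c
    ga = np.mean(du * w * z)
    gb = np.mean(du * w)
    a -= lr * ga
    b -= lr * gb

# After training, the generator's output mean (= b) drifts toward the
# real mean of 4.0, even though it never sees the real data directly.
print(f"generated mean is about {b:.2f} (real mean is 4.0)")
```

The key point is that the generator never touches the real data; it improves only by reading the discriminator’s feedback, which is why GAN outputs can become hard for both machines and people to distinguish from the real thing.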
The Risks and Harmful Impact of Deepfakes
The World Economic Forum’s (WEF) Global Risks Report 2024 identifies false information as the most significant immediate risk: misinformation and disinformation in electoral processes could seriously destabilize the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism, and a longer-term erosion of democratic processes. The WEF also ranks fake news as the number one threat to India. The harmful impact of deepfakes is not limited to the political sphere; it extends to personal life and society at large, with women often the primary targets of nonconsensual videos that inflict severe psychological harm and intimidation. Deepfakes also raise significant national security concerns, as hostile nation-states leverage them to threaten public safety and sow chaos and uncertainty. Their presence further contributes to the phenomenon known as the “Liar’s Dividend,” in which genuine information is discredited as fake news, undermining trust in reliable sources of information.
What Efforts Have Been Made to Address Deepfakes in India and at the International Level
Currently, India has no specific laws that directly address deepfake technology; authorities instead rely on existing legal frameworks to tackle its misuse. Indian authorities combat deepfakes using provisions of the Information Technology Act, 2000 and the Indian Penal Code, while the I.T. Rules, 2023 require intermediary platforms to swiftly remove reported deepfake content. These measures are supplemented by government advisories urging social media intermediaries to exercise due diligence in detecting and curbing the dissemination of deepfakes.
On the international front, the world’s first A.I. Safety Summit, held in 2023 at Bletchley Park, England, brought together 28 major countries, including the U.S., China, and India, and emphasized the need for global action on the potential risks of A.I. The resulting Bletchley Declaration acknowledged the risks of intentional misuse and of loss of control over A.I. technologies, and it called for international cooperation through a collaborative approach involving various stakeholders, including companies, civil society, and academia, to address AI-related risks.
A Way Forward
To effectively address the threat of deepfake technology, a holistic approach is necessary: social media platforms should implement watermarking to aid detection, alongside public awareness campaigns about the dangers of fake videos and responsible sharing. Government and social media intermediaries should educate users on content policies, discourage inappropriate uploads, and advance deepfake-detection technologies within comprehensive legal frameworks that balance freedom of speech with stakeholders’ interests across sectors.
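To make the watermarking recommendation concrete, here is a minimal sketch of the simplest form of the idea: hiding a mark in the least-significant bit (LSB) of an image’s pixels, using a toy array in place of a real image. This is an illustrative assumption, not how platforms actually watermark AI content; production systems use far more robust schemes (and provenance standards such as C2PA content credentials), since LSB marks do not survive compression or re-encoding.

```python
import numpy as np

def embed_watermark(pixels, bits):
    """Hide watermark bits in the least-significant bit of each pixel.

    Each embedded bit changes a pixel value by at most 1, so the mark
    is invisible to the eye but recoverable by anyone who knows to look.
    """
    flat = pixels.flatten().copy()
    assert len(bits) <= flat.size, "watermark longer than image"
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear LSB, then set it to the bit
    return flat.reshape(pixels.shape)

def extract_watermark(pixels, n):
    """Read back the first n watermark bits from the pixel LSBs."""
    return [int(p) & 1 for p in pixels.flatten()[:n]]

# Toy 4x4 grayscale "image" and an 8-bit mark (hypothetical values).
img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 10
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(img, mark)

print(extract_watermark(stamped, 8))  # recovers [1, 0, 1, 1, 0, 0, 1, 0]
```

The fragility of this scheme is exactly why the regulatory debate centres on mandatory, tamper-resistant watermarking at the point of generation rather than on ad hoc after-the-fact marking.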
Dr. Nikhil Kumar Singhmar
Dr. Nikhil Kumar Singhmar is an author and a social media and political consultant in India. He holds a PhD in social media politics from Jawaharlal Nehru University, India. His areas of interest include discourse and narrative analysis of social media, election strategies, and data analysis. He is also the founder of DigiPolitics.