Series Analysis - India: Case of Influence Operations and Disinformation

By Nikhita Nainwal and Prakriti Singh

ABSTRACT

The year 2024 constitutes the biggest election year in history: 83 elections are taking place across 78 countries, with more than 4 billion people, nearly half of the world's population, expected to go to the polls.

In this new series, London Politica's Intelligence Support Group and the Emergent Technologies department collaborate to provide in-depth case studies based on open-source intelligence (OSINT) investigation.

The Indian election of 2024 is quite possibly the largest exercise in democracy the world will witness, with 969 million eligible voters. Campaigning to, and conducting elections for, such a vast electorate is a herculean task, but one the Election Commission of India (ECI) has carried out numerous times before.

What complicates this election in particular, however, is the sophistication of the technology now available to candidates and the public. Adobe's “Future of Trust Study for India” highlights the sheer scale of the resulting misinformation. The study interviewed 2,000 Indian residents with the aim of understanding the impact of generative AI and misinformation in the context of the ongoing Indian elections. The results are striking for an election year: 81% of those interviewed believed that the content they see on a daily basis has been altered in some way.

EXECUTIVE SUMMARY

  • During the elections, the Chief Election Commissioner also took a proactive approach, launching an awareness campaign under the banner “Verify Before You Amplify”.

  • On multiple occasions, false information about election schedules, monetary penalties for not voting, and missing Electronic Voting Machines (EVMs) was spread on social media and later debunked. Several doctored tweets and videos also circulated.

  • Fact-checkers have highlighted that in most cases the pictures and videos were faked through editing or mislabelling, techniques colloquially known as ‘cheapfakes’, to make the content misleading. AI-generated disinformation, by contrast, was comparatively rare.

  • Moreover, fact-checkers have also expressed dissatisfaction with the new policies adopted by Meta and X to tackle mis- and disinformation.
