Road to elections paved with AI minefield


By Arooj Anmol

In a year when more than 64 countries, along with the European Union, are holding elections that engage nearly half of the world's population, the stakes are high.

With more than 4 billion people participating in national elections, outcomes carry profound implications. The Economist Intelligence Unit's Democracy Index finds that of the 43 countries expected to conduct free and fair elections, 28 fall short of essential democratic conditions. Noteworthy nations such as the US, the UK, Russia, India, and South Africa are in the throes of critical contests with far-reaching consequences.

As the world moves toward election season, the spectre of misinformation and disinformation looms large. A World Economic Forum (WEF) survey identifies misinformation as the second most severe global risk over the next two years.

When combined with Artificial Intelligence (AI), this threat becomes a potent force capable of causing global disruption and catastrophe. AI blurs the lines between fact and fiction, fostering hate, polarisation, and criminal activities.

Regulating the use of AI tools in elections means grappling with the challenge of safeguarding voter engagement, free expression, and electoral autonomy amid a rising tide of authoritarianism.

How AI can influence elections

AI tools, adept at generating content such as videos and images without the need for costly consultants, present a double-edged sword. The unsettling reality is that deepfakes and synthetic media can significantly influence voter attitudes and behaviour, ultimately swaying election results and eroding trust in the bedrock of democracies – reliable information.

OpenAI, the maker of AI image generators such as DALL-E, offers generative AI tools that make it easy to customise audio and video content. Unfortunately, these tools, capable of producing diverse online content in response to user prompts, have the potential to exacerbate social media disinformation in unprecedented ways.

Political parties are increasingly leveraging tools like ChatGPT for campaigns, speeches, and slogans, accelerating the creation and dissemination of disinformation at a lower cost. Historically, online disinformation has influenced voting behaviour, party support, and even legislation.

The advent of synthetic media is poised to amplify this influence. Generative AI in the hands of adversaries poses threats at various stages of the electoral process, from voter registration to casting votes and reporting results.


Across the globe, elections face challenges such as constraints on opposition parties, weary electorates, and manipulation, turning the fate of democracy itself into a central campaign issue.

Foreign Information Manipulation and Interference (FIMI) emerges as a particularly harmful form of information disorder, with AI enhancing its efficacy in influencing elections.

In the current geopolitical climate, the risks associated with election misinformation have never been more pronounced, leading to an erosion of trust in election integrity and a decline in confidence in democracy – a global challenge transcending borders.

Pakistan’s case

In Pakistan, where internet penetration reached 49.8 percent by November 2023, the increased use of social media coupled with low media literacy poses significant risks.

In the realm of journalism, influencer-generated content, primarily political commentary, blurs the line between opinion and factual reporting, making it harder for consumers to distinguish misinformation and disinformation from trusted sources.

Uses of AI during elections

Instances of AI being used during elections for political advantage abound. In the US, Donald Trump shared a manipulated video in May that used AI voice-cloning of CNN host Anderson Cooper.

Slovakia experienced pro-Kremlin social media accounts disseminating AI-generated deepfake audio recordings during its September elections.

Bangladesh witnessed deepfake videos surfacing after Prime Minister Sheikh Hasina secured her fourth consecutive term in January.

In Mexico, President Andrés Manuel López Obrador spread false information against his opponent ahead of the June elections.

Pakistan's former prime minister used generative AI during a virtual rally, addressing supporters through an AI-generated audio message. He also claimed authorship of a write-up in The Economist that was alleged to have been written by AI.

Regulations

To address the impact of AI on content creation, companies such as Meta, TikTok, Microsoft, and YouTube are actively implementing safeguards, including requirements for creators and political advertisers to disclose their use of AI.

Governments and international initiatives are working on regulatory frameworks, including the Biden administration's executive order on AI, the UK's AI Safety Summit, and the European Union's AI Act set to be enforced by 2025.

Amidst this tumult, a positive development has emerged in the fight against misinformation and disinformation: Pakistan has introduced iVerify, a globally recognised UNDP fact-checking and source verification tool that helps assess the accuracy of information, content, and news.

Each story undergoes scrutiny by two fact-checkers to mitigate individual bias and ensure a thorough examination. With a specific focus on election-related claims, iVerify contributes to the integrity of the democratic process, aiming to facilitate transparent and fair elections by providing real-time factual information.

In a notable move, OpenAI has prohibited the use of its technology, including ChatGPT and DALL-E 3, for political campaigning. The company also plans to adopt the Coalition for Content Provenance and Authenticity's digital credentials, which use cryptography to encode information about a piece of content's origin.

This coalition, known as C2PA, seeks to enhance techniques for identifying and tracking digital content and includes members such as Microsoft, Sony, Adobe, Nikon, and Canon.

Way forward

The current challenge lies in striking a balance between counter-disinformation laws and fundamental human rights, particularly freedom of expression. The absence of a robust legal framework governing the use of AI in political campaigns in Pakistan exacerbates the problem.

Urgent action is required, with a united effort from authorities, tech companies, and civil society to combat the growing threat of AI-driven disinformation.

Widespread educational campaigns to enhance digital literacy and empower citizens to discern truth from falsehood are crucial. Social media platforms must also implement effective content moderation policies against misinformation.

Building an informed electorate with critical thinking skills is vital for a resilient democracy facing manipulation by AI. The choices made today will determine whether AI is utilised for informed engagement and citizen empowerment or becomes a tool to sow discord and manipulate public opinion.

A collective commitment is necessary to safeguard the electoral process and uphold the essence of democracy in the face of this unprecedented technological challenge. 



