
Election 2024: Cybersecurity showdown in the age of AI

Why it matters:

The upcoming presidential election isn’t just a contest of candidates, but a cybersecurity showdown in which AI takes center stage. Assistant Professor of Practice Javad Abed explores the risks of system vulnerabilities.

The 2024 election is approaching, and the challenges have never been more significant. This isn’t just a contest of candidates, but a cybersecurity showdown in which artificial intelligence takes center stage.

With AI being used to create deepfakes, spread misinformation, and facilitate sophisticated cyberattacks, the integrity of our democratic process is at risk. A joint report from the Attorney General and the Secretary of Homeland Security emphasizes the persistent threat posed by foreign governments trying to undermine the security of U.S. elections. Securing an election against digital threats has never been more urgent.

AI: The double-edged sword

When it comes to election security, artificial intelligence can be a powerful defender, detecting threats and identifying fake news. But when used maliciously, it is also a formidable disruptor: spreading disinformation, creating fake social media accounts, and automating attacks on electoral infrastructure. Its ability to generate realistic content such as deepfakes is particularly dangerous, capable of deceiving voters and damaging candidates' reputations.

What’s on the line?

The risks of AI-driven interference stretch beyond a single election. The consequences could be severe if digital manipulation undermines the voting process or distorts public perception, leading to a loss of trust in the electoral system and democratic institutions. Foreign adversaries could exploit system vulnerabilities to sow chaos, foment division, and destabilize the nation, leaving lasting scars on the fabric of democracy.

In a 2023 report about the efforts of foreign governments to influence or interfere with the 2022 elections, Director of National Intelligence Avril Haines explains that “foreign actors” have been found to “induce friction and undermine confidence in the electoral process that underpins our democracy.”

Haines says, “As global barriers to entry lower and accessibility rises, such influence efforts remain a continuing challenge for our country, and an informed understanding of the problem can serve as one defense.”

The high cost of complacency

If these AI-driven threats are not addressed, democracy itself could be undermined. AI technology is developing quickly, and traditional security measures struggle to keep pace. The cost of inaction is high: trust in elections may erode, voter turnout may decline, and the democratic system itself may weaken.

Paving the way for secure elections

It will take a multi-faceted approach to protect the 2024 election and future democratic processes. Election officials should use multifactor authentication, and governments and technology companies should work together to fortify electoral infrastructure and leverage AI to detect anomalies in real time. Voting systems must be audited and stress-tested so that vulnerabilities are found and fixed before they can be exploited.
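For readers curious what detecting anomalies in real time can look like at its simplest, the sketch below flags an unusual spike in failed login attempts against an election-office system using a basic statistical threshold. It is a minimal illustration of the idea, not the method described in the article, and every name, value, and threshold in it is hypothetical.

```python
# Illustrative sketch only: flag an abnormal spike in activity (here, failed
# logins per minute) with a simple z-score check against recent history.
from statistics import mean, stdev

def is_anomalous(history, latest, z_threshold=3.0):
    """Return True if `latest` deviates from `history` by more than
    `z_threshold` standard deviations."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Hypothetical counts of failed logins per minute on an election-office portal.
failed_logins_per_minute = [3, 4, 2, 5, 3, 4, 3]
current_minute_count = 42

if is_anomalous(failed_logins_per_minute, current_minute_count):
    print("Alert: unusual login activity -- investigate before it escalates.")
```

Real monitoring systems are far more sophisticated, but the principle is the same: establish a baseline of normal behavior and raise an alert the moment activity departs from it.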

It’s not just about technical defenses; digital resilience needs to become part of the culture. Citizens need to be able to recognize manipulated content and navigate the digital landscape critically. Building a well-informed public that can make decisions based on fact rather than fiction is a long-term defense against AI-generated manipulation.

A call for action

The 2024 election is a defining moment for democracy in the AI era. Regardless of which candidate wins, the future of fair elections and the resilience of democratic institutions will be determined by our ability to adapt and respond to these new digital threats. Defending democracy will take more than reacting to attacks; it requires proactive measures, strong collaboration, and a commitment to protecting the electoral process for generations to come.

Protecting our electoral process from evolving digital threats means more than deploying technological defenses; it will require effort from across society. Lawmakers need to craft regulations that address the potential misuse of AI and set clear legal consequences for those who manipulate or interfere with elections. Cybersecurity experts and tech companies must lead the way in developing and sharing best practices for securing electoral systems and combating misinformation. Educational institutions and media organizations also have a duty to help citizens distinguish fact from fiction in an age of AI-generated content.


Javad Abed holds a PhD in Information Systems and is a certified professional in cybersecurity and cloud computing. He is dedicated to teaching courses that explore both the technical and managerial facets of Information Systems. His research interests span behavioral and design science issues as well as cybersecurity, focusing on practical impacts in various settings.
