AI’s ability to interfere in elections
The rise of AI has ushered in a new era of sophisticated disinformation tactics, particularly targeting elections. These AI-powered campaigns exploit modern information consumption habits to spread false narratives with unprecedented speed and scale.
“AI is being used to spread disinformation ahead of elections because it is a very effective and easy tool to undermine trust in politicians and political parties,” Lewis explains. “Modern information consumption through social media often provides information in a concise and focused format, perfect for spreading ‘fake news.’”
The rapid spread of unverified information poses a major challenge to those tasked with maintaining the integrity of the democratic process. The speed with which these campaigns can reach millions of people leaves fact-checkers, and the cybersecurity experts they report to, racing against time.
AI is a growing threat that must now be addressed: “AI’s ability to process and analyze massive data sets allows it to greatly amplify the disinformation that has been circulating until now,” Lewis explains.
“This capability allows AI to generate content that aligns with existing disinformation narratives, thereby making the disinformation more credible.”
At the same time, AI tools are now more accessible than ever, lowering the barrier to entry for anyone wanting to launch a disinformation campaign, meaning the volume of disinformation could reach unprecedented levels.
This opens up a vector for more sophisticated social manipulation.
“AI is being used to create deepfake audio clips that can be used in mobile messaging, as well as deepfake videos that make politicians appear to make inflammatory or false statements,” Lewis said.
AI-powered deepfakes utilize advanced machine learning (ML) algorithms to create highly realistic, yet completely fabricated audio and video content, posing a major threat to the dissemination of accurate information.
By seamlessly altering images, videos and audio, deepfakes can impersonate public figures and spread false information and deceptive narratives.
But it’s not just the public who should be warned: Organizations can be used as vehicles to spread disinformation. For example, if an attacker breaks into a company’s systems and gains control of some of its users’ computers, they can push out false information that appears genuine because it comes from a trusted source.
“Organizations must be aware of the heightened threat of multi-directional attacks, particularly in the context of phishing attacks,” Lewis explained. “When these attacks are launched simultaneously across multiple platforms, they can add credibility to disinformation campaigns.”
Fighting the tide of misinformation
As with other familiar social engineering tactics in cyberspace, the problem ultimately comes down to people.
“The pervasiveness of AI means cyber leaders need to place greater emphasis on user awareness as a primary defense mechanism,” Lewis explains.
Just as companies run anti-phishing campaigns and urge employees not to click on links from unknown senders, organizations should treat this kind of outreach as a way to address issues before they become a problem.
Though the challenges are significant, Duke offers hope that educated users and good AI systems can spot the signs.
“AI-generated content can be highly persuasive, but it often contains subtle errors in language, context and visual details that can expose its inaccuracies. These inconsistencies, while sometimes minor, can be identified by trained analysts or automated detection tools.”