Misinformation is expected to be among the biggest cyber risks in elections in 2024.
Andrew Brooks | Image Source | Getty Images
Britain is expected to face a barrage of state-backed cyberattacks and disinformation campaigns as it heads to the polls in 2024 — and artificial intelligence poses a major risk, according to cyber experts who spoke to CNBC.
Britons will vote in local elections on May 2, and a general election is expected in the second half of the year, although British Prime Minister Rishi Sunak has yet to commit to a specific date.
The election comes as the country faces a host of problems including a cost-of-living crisis and stark divisions over immigration and asylum.
“With most UK citizens voting at the polls on Election Day, I expect the majority of cybersecurity risks to emerge in the months leading up to the day itself,” Todd McKinnon, CEO of identity security firm Okta, told CNBC via email.
It wouldn't be the first time.
In 2016, the US presidential election and Brexit vote were found to have been disrupted by disinformation shared on social media platforms, allegedly by Russian state groups, although Moscow denies the allegations.
Since then, government actors have launched routine attacks in various countries aimed at manipulating election results, according to cyber experts.
Meanwhile, the UK claimed last week that Chinese state hacking group APT 31 attempted to access UK lawmakers' email accounts, but said such attempts were unsuccessful. London imposed sanctions on Chinese individuals and a technology company in Wuhan believed to be a front for APT 31.
The United States, Australia and New Zealand followed suit by imposing sanctions. China has denied the allegations of state-sponsored hacking, calling them “baseless.”
Cybercriminals use artificial intelligence
Cybersecurity experts expect malicious actors to interfere in the upcoming election in a number of ways – not least through disinformation, which is expected to be worse this year due to the widespread use of artificial intelligence.
Experts say synthetic images, videos and audio created using computer graphics, simulation methods and artificial intelligence – commonly referred to as “deepfakes” – will become commonplace as they get easier for people to create.
“Nation-state actors and cybercriminals will likely use AI-powered identity-based attacks such as phishing, social engineering, ransomware, and supply chain compromises to target politicians, campaign staff, and election-related organizations,” Okta's McKinnon added.
“We are also certain to see an influx of AI and bot-based content generated by threat actors to spread misinformation on a greater scale than we have seen in previous election cycles.”
The cybersecurity community has called for increased awareness of this type of artificial intelligence-generated misinformation, as well as international cooperation to mitigate the risks of such malicious activities.
The biggest election risks
AI-powered disinformation poses a significant risk to elections in 2024, said Adam Meyers, head of counter adversary operations at cybersecurity firm CrowdStrike.
“Right now, generative AI can be used for harm or for good, and so we see both applications being increasingly adopted every day,” Meyers told CNBC.
China, Russia and Iran are highly likely to conduct misinformation and disinformation operations against various global elections with the help of tools such as generative AI, according to CrowdStrike's latest annual threat report.
“This democratic process is very fragile,” Meyers told CNBC. “When you start looking at how hostile nation-states like Russia or China or Iran can leverage generative AI and some of the newer technologies to craft messages and use deepfakes to create a compelling story or narrative for people to buy into, especially when people already have that kind of confirmation bias, it's very dangerous.”
The main problem is that AI is lowering the barriers to entry for criminals looking to exploit people online. This has already happened in the form of phishing emails created using easily accessible AI tools like ChatGPT.
Hackers are also developing more advanced – and personalized – attacks by training AI models on our private data available on social media, according to Dan Holmes, a fraud prevention specialist at regulatory technology firm Feedzai.
“You can train these voice AI models very easily … [through] exposure to social media,” Holmes told CNBC in an interview. “It's [about] getting that emotional level of engagement and coming up with something really creative.”
In the context of the election, a fake AI-generated audio clip of opposition Labour Party leader Keir Starmer abusing party staffers was posted on the social media platform X, according to fact-checking charity Full Fact.
It's just one example of many deepfakes that have cybersecurity experts worried about what's to come as the UK approaches elections later this year.
Elections are a test for tech giants
Deepfake technology, however, has become much more advanced. For many technology companies, the race to combat it is now about fighting fire with fire.
“Deepfake technology has gone from being a theoretical thing to being live in production today,” Onfido CEO Mike Tuchen told CNBC in an interview last year.
“There's now a game of cat and mouse where it's 'AI versus AI' – using AI to detect deepfakes and mitigate the impact on our customers is the big battle now.”
Internet experts say it's becoming harder to know what's real, but there may be some signs that content is being digitally manipulated.
AI generates text, images and videos from prompts, but it doesn't always get the details right. So, for example, if you're watching an AI-generated video of a dinner and a spoon suddenly disappears, that's an example of an AI glitch.
“We will certainly see more deepfakes throughout the election process, but an easy step we can all take is to verify the authenticity of something before sharing it,” Okta's McKinnon added.