National Cyber Warfare Foundation (NCWF)

Adversaries’ generative AI use isn’t fooling the masses


2024-09-23 23:28:03
milo
Blue Team (CND)


Intelligence officials said Monday that generative AI has thus far been a “malign influence accelerant” and not a “revolutionary” tool.





U.S. intelligence officials report that although Russia, China and Iran have ramped up AI-generated content meant to influence the 2024 election cycle, those countries still lag in producing convincing material that can fool existing detection tools.





In their fourth election-related briefing this year, officials from the Office of the Director of National Intelligence and the FBI told reporters that they continue to observe Russian and Iranian actors using generative AI in attempts to trick U.S. voters and sow discord. 





While the advent of generative AI has yielded some improvements to these operations, such as translating content into other languages, intelligence officials described generative AI as a “malign influence accelerant” but not yet a “revolutionary” tool.





While those countries have created plenty of AI-generated propaganda, they have yet to overcome a number of obstacles that prevent them from leveraging the full power of the nascent technology to deceive voters.





“The risk to U.S. elections from foreign AI-generated content depends on the ability of foreign actors to overcome restrictions built into many AI tools and remain undetected, develop their own sophisticated models or strategically target and disseminate such content,” said a senior ODNI official Monday. “Foreign actors are behind in each of these three areas.”





Officials provided few further details on why these nations have seemingly struggled in those areas, but noted that tools built to detect and identify synthetically manipulated media have been effective at catching that content thus far this year.





“What we can say is part of the reason that informs our judgment about AI being an accelerant is that the quality is not as believable as you might expect,” the ODNI official said. “You’re often able to identify it with various tools, so I’ll leave it there.”





Russia remains the most active adversary using AI, pumping out the most content and doing so across text, audio, imagery and video. Federal agencies and commercial threat intelligence companies have identified numerous high-volume campaigns run by different groups linked to Moscow’s election influence operations, including Doppelganger and RT, a Russian state media organization.





Iranian actors have used the tools to generate social media posts and mimic news organizations. The work has targeted English-speaking and Spanish-speaking voters, attempting to polarize opinions on the presidential candidates and issues like the Israel/Gaza conflict.





China conducted an extensive AI influence operation during Taiwan’s elections earlier this year and is now using AI to shape global views of China and amplify divisive U.S. political issues. However, intelligence officials said they have not observed China-linked actors using AI to target or influence U.S. elections.





With the rise of generative AI tools over the past two years, experts have rushed to develop their own software that can accurately detect and flag fake or manipulated media.
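
For a sense of what such detection software can look like under the hood, here is a minimal, hypothetical sketch of a real-vs-fake image classifier built on a pretrained vision model. The article does not describe any specific tool’s internals, so the model choice, labels and preprocessing below are illustrative assumptions only.

import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

def build_detector() -> nn.Module:
    # Reuse ImageNet features from a small pretrained CNN and swap in a
    # two-class head: 0 = authentic media, 1 = AI-generated/manipulated.
    # (Hypothetical design; real detectors vary widely.)
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model

PREPROCESS = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def synthetic_probability(model: nn.Module, image_path: str) -> float:
    # Returns the classifier's probability that the image is synthetic.
    model.eval()
    x = PREPROCESS(Image.open(image_path).convert("RGB")).unsqueeze(0)
    return torch.softmax(model(x), dim=1)[0, 1].item()

In practice such a classifier would first be fine-tuned on labeled examples of authentic and generated media; out of the box the head above is untrained and its scores are meaningless.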





Because many of these tools were also built using AI, experts have warned that validating authentic media would turn into a perpetual cat-and-mouse game, with bad actors constantly adapting their systems to evade the latest detection techniques.





Thus far, that hasn’t happened. In Taiwan, India, the United States and elsewhere, foreign attempts to mislead voters with deepfake media have often been quickly identified as digital forgeries, casting significant doubt on their authenticity.





Intelligence officials declined to provide details on the efforts’ scope or impact, saying such analysis would in part require them to monitor social media activity that falls under First Amendment free speech protections.





But U.S. officials said they are closely monitoring for signs that bad actors have improved their efforts, whether by creating their own powerful models or by finding ways to more effectively amplify content.





Another senior ODNI official confirmed that part of those efforts does include conversations with AI companies that have tools “that could be used across the lifecycle of a foreign influence campaign.”





“Our exchanges with technology companies focus on evolving foreign adversary tools, tactics and procedures. We also talk about authentication and attribution approaches, which is a helpful place for us to compare notes,” the second ODNI official said.
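
The officials do not spell out what those “authentication and attribution approaches” entail, but one common, simple building block is perceptual hashing, which lets defenders recognize a known forgery when it resurfaces after recompression or resizing. The sketch below is a hypothetical illustration using the open-source imagehash library; the stored hash and the distance threshold are made-up values.

from PIL import Image
import imagehash

# Perceptual hashes of previously identified forgeries (hypothetical values).
KNOWN_FAKE_HASHES = {
    imagehash.hex_to_hash("f0e4d2c3b1a59687"),
}

def matches_known_fake(image_path: str, max_distance: int = 8) -> bool:
    # Hamming distance between perceptual hashes tolerates minor
    # re-encoding, so a resized or recompressed copy of the same
    # forgery still matches.
    h = imagehash.phash(Image.open(image_path))
    return any(h - known <= max_distance for known in KNOWN_FAKE_HASHES)

Matching recirculated media this way supports attribution (tying a new post back to a known campaign) rather than detection of brand-new fakes.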





Even as generative AI has made it easier to create convincing fake media, countries like Russia still rely on less technically advanced manipulation methods that don’t require sophisticated algorithms.

For example, ODNI officials said one of the most high-profile examples of Russia using synthetically manipulated media to target Vice President Kamala Harris – a video purporting to show her involved in a hit-and-run car accident in 2011 – was a staged event and did not use AI. That bolsters reporting from Microsoft, which said earlier this month that the video was crafted using paid actors and spread through a fake news outlet created by Russian influence actors.





Source: CyberScoop
Source Link: https://cyberscoop.com/foreign-ai-use-isnt-fooling-masses/





Copyright 2012 through 2024 - National Cyber Warfare Foundation - All rights reserved worldwide.