In its latest quarterly report on adversarial threats, Meta said on Thursday that China is an increasing source of covert influence and disinformation campaigns, which could get supercharged by advances in generative artificial intelligence.
Only Russia and Iran rank above China when it comes to coordinated inauthentic behavior (CIB) campaigns, typically involving the use of fake user accounts and other methods intended to “manipulate public debate for a strategic goal,” Meta said in the report.
Meta said it disrupted three CIB networks in the third quarter, two stemming from China and one from Russia. One of the Chinese CIB networks was a large operation that required Meta to remove 4,780 Facebook accounts.
“The individuals behind this activity used basic fake accounts with profile pictures and names copied from elsewhere on the internet to post and befriend people from around the world,” Meta said regarding China’s network. “Only a small portion of such friends were based in the United States. They posed as Americans to post the same content across different platforms.”
Disinformation on Facebook emerged as a major problem ahead of the 2016 U.S. elections, when foreign actors, most notably from Russia, were able to inflame sentiments on the site, primarily with the intention of boosting the candidacy of Donald Trump. Since then, the company has faced greater scrutiny to monitor disinformation threats and campaigns and to provide greater transparency to the public.
In August, Meta detailed its removal of a prior China-related disinformation campaign. The company said it took down over 7,700 Facebook accounts tied to that Chinese CIB network, which it described at the time as the “largest known cross-platform covert influence operation in the world.”
If China becomes a political talking point as part of the upcoming election cycles around the world, Meta said “it is likely that we’ll see China-based influence operations pivot to attempt to influence those debates.”
“In addition, the more domestic debates in Europe and North America focus on support for Ukraine, the more likely that we should expect to see Russian attempts to interfere in those debates,” the company added.
One trend Meta has noticed in CIB campaigns is the growing use of a variety of online platforms such as Medium, Reddit and Quora, as opposed to the bad actors “centralizing their activity and coordination in one place.”
Meta said that development appears to be related to “larger platforms keeping up the pressure on threat actors,” resulting in troublemakers swiftly utilizing smaller sites “in the hope of facing less scrutiny.”
The company said the rise of generative AI creates additional challenges when it comes to the spread of disinformation, but Meta said it hasn’t “seen evidence of this technology being used by known covert influence operations to make hack-and-leak claims.”
Meta has been investing heavily in AI, and one of its uses is helping to identify content, including computer-generated media, that could violate company policies. Meta said nearly 100 independent fact-checking partners will help review questionable AI-generated content.
“While the use of AI by known threat actors we’ve seen so far has been limited and not very effective, we want to remain vigilant and prepare to respond as their tactics evolve,” the report said.
Still, Meta warned that the upcoming elections will likely mean that “the defender community across our society needs to prepare for a larger volume of synthetic content.”
“This means that just as potentially violating content may scale, defenses must scale as well, in addition to continuing to enforce against adversarial behaviors that may or may not involve posting AI-generated content,” the company said.