Technology

Social media platforms on the defensive as Russia-based disinformation about Ukraine spreads

Kremlin-backed falsehoods are spreading across the world’s largest tech platforms and putting the companies’ content policies to the test.


The world’s biggest social media companies are scrambling to combat a global barrage of Kremlin-backed falsehoods and digital tricks around the invasion of Ukraine — putting the tech giants back in the political crosshairs over the spread of online disinformation.

Russia-backed media reports falsely claiming that the Ukrainian government is committing genocide against civilians ran unchecked and unchallenged on Twitter and Facebook. Videos from the Russian government on YouTube — including speeches from Vladimir Putin — earned money from Western advertisers. Unverified TikTok videos purporting to show real-time battles were in fact historical footage, some with doctored conflict-zone images and sounds.

These debunked posts have been racking up millions of likes, comments and shares on Facebook and Twitter, according to CrowdTangle, a social media analytics tool owned by Meta, and POLITICO’s separate review of TikTok and Google’s YouTube.

Social media companies are already under pressure from politicians in both the U.S. and Europe who argue that falsehoods about everything from Covid treatments to voting fraud, and misinformation more generally, justify curtailing the industry’s liability protections, breaking the largest tech companies up or otherwise reining them in by demanding more transparency about their operations.

Now, the conflict in Ukraine is fast becoming a proving ground for the pledges these firms have made to clamp down on disinformation, especially given their insistence that they now have a playbook that works.

But plenty of disinformation is still getting through.

Russia’s top five international state-backed media outlets have used Facebook and Twitter to share debunked reports claiming that the Ukrainian military had committed unprovoked attacks on Russian-allied forces. They also suggested NATO countries would carry out so-called false flag chemical weapons attacks in Ukraine’s breakaway republics to tarnish Russia’s reputation.

Over the last seven days, these outlets garnered 4 million engagements — likes, shares and comments — on their Facebook posts. During the same period, Fox News’ main Facebook page received 3.8 million engagements.

“We see that there are many, many attempts to blame Ukraine for killing civilians, saying that the Ukrainian army is trying to attack,” said Liubov Tsybulska, founder of Ukraine’s Center for Strategic Communication, which tracks so-called hybrid threats, including both cyberattacks and disinformation.

“Propaganda activities have intensified largely for the last few weeks,” said Tsybulska, who now advises the Ukrainian strategic communications center.

Lawmakers in the U.S. and Europe are paying attention, with new online content rules almost complete in Europe and American lawmakers increasingly questioning whether social media companies are doing enough to protect people online.

“It is deeply concerning that pro-Russia disinformation is reported to have more than doubled in the region in recent weeks,” said Rep. Adam Schiff (D-Calif.), chair of the House Intelligence Committee, referring to U.K. Foreign Secretary Liz Truss’ statements at the Munich Security Conference. “Social media companies must quickly expand efforts to detect Russian falsehoods and prevent their platforms from being exploited in the conflict.”

Falsehoods are also circulating on smaller platforms like Telegram, the encrypted messaging service, where Russian and Ukrainian channels routinely share debunked claims about the Kyiv regime. But Telegram — which did not respond to a request for comment — does not have the same reach, in terms of overall users, as the mainstream platforms.

Facebook, YouTube, Twitter and TikTok have invested billions of dollars over the past four years to improve algorithms that flag problematic posts and to hire thousands of contractors to scour their networks for harmful content.

They also have taken specific actions against state media from authoritarian regimes, including those from Russia. That involves adding labels to the outlets’ Twitter accounts, YouTube channels and Facebook pages to inform readers about who’s behind such content.

The Eastern European conflict is putting those protocols to the test.

Twitter spokesperson Paolo Ganino said its safety and integrity teams were watching for risks associated with conflicts in the region. TikTok spokesperson Sara Mosavi said the video-sharing platform was removing content that promotes violence or harmful misinformation, though she did not give specifics on the type of material the company had removed.

Françoise Ballet-Blu, a French lawmaker, said via Twitter that one of her Ukraine-focused TikTok videos had been mistakenly removed for breaking the platform’s content rules. TikTok did not respond to a request for comment.

YouTube said its teams were watching for so-called false flag operations, deceptive practices, hacking, phishing and incitement to violence. As of Wednesday, the company said it had not detected a “meaningful increase” of coordinated influence operations tied to Russian-linked misinformation about Ukraine.

Despite these efforts, a team within the EU’s diplomatic service that tracks Russian disinformation said it had seen a major increase in Kremlin-backed online disinformation since late January. That includes false narratives accusing Kyiv of conducting chemical attacks on the breakaway republics and claiming that Moscow had entered those disputed regions in a peacekeeping capacity.

By Wednesday, as signs of an imminent invasion mounted, some of the companies had announced new actions.

YouTube banned a channel owned by Donetsk separatist leader Denis Pushilin for breaking its community standards.

Meta, Facebook’s parent company, said in a statement Thursday that it had formed a new unit — of the kind it sets up during times of conflict — staffed with Ukrainian- and Russian-speaking experts who could quickly respond to content violations across its platforms. The company did not immediately respond when asked about potential further restrictions on Russian state media outlets. Meta had created a similar unit in response to the Afghanistan conflict in August 2021.

The company also now allows Ukrainians to lock their Facebook profiles to combat potential cyberattacks, company spokesperson Toby Partlett said.

Still, those steps are small compared with the waves of Ukraine-related disinformation circulating online.

Unattributed videos and photos spreading on the platforms after Thursday’s attack falsely claimed to show footage of the Russian invasion. One such image, showing fighter planes and viewed 200,000 times on Twitter, was actually from a 2020 airshow. Another — since removed from Twitter, but viewed over 300,000 times — was taken from the video game “War Thunder,” according to analysis from First Draft News, a nonprofit that tracks disinformation.

Russian-linked disinformation posts on Facebook could become particularly problematic for the company, given documents leaked by Facebook whistleblower Frances Haugen last year that suggested that misinformation in Ukraine had not been a priority for the company.

An undated, leaked document titled “Country Prioritization for 2021” ranked countries from Tier 1 to Tier 3 for the type of internal content moderation and monitoring the company offered to protect local users. While Russia was in the highest bracket, Ukraine did not appear in any of the tiers within the document. Meta declined to comment when asked whether Ukraine was now on its internal priority list.

More broadly, the tech companies have not been transparent about the real-time actions they’ve taken to counter Russian state-run disinformation, according to Jesse Lehrich, co-founder of Accountable Tech, a watchdog group, and former spokesperson for Hillary Clinton.

“We don’t really know the extent to which they’ve changed internal policies or treatments, or if they’re making new interventions, and what those interventions are,” he said.

As disinformation continues to spread on the platforms, Meta indicated on Wednesday it’s beginning to comply with Russia’s new “foreign IT company” requirements that it register with Russia’s communications regulator — a move that had been months in the making.

Western governments are taking increasingly harsh measures to block Russian disinformation. The European Union imposed sanctions Wednesday, for instance, on Margarita Simonyan, the head of RT’s English-language division, as well as on Yevgeny Prigozhin and the Internet Research Agency, the St. Petersburg-based “troll factory” that he bankrolls.

A senior EU official, speaking on the condition of anonymity to discuss sensitive internal deliberations, said there was an “astonishing” level of coordination between the Kremlin disinformation networks and Russian state media — including the dissemination of propaganda through social media.

When it comes to the Russian invasion of Ukraine, “information warfare has been front and center in creating the pretext for this invasion and continues to be a major, major piece of the Kremlin operation,” said Accountable Tech’s Lehrich.