Opinion

The plight of a Kansas newspaper shows the new face of censorship via AI

As more tech behemoths look to artificial intelligence to automate tasks, it becomes increasingly apparent that AI is not ready for prime time and that it poses a threat to human autonomy.

No one can forget the ridiculous fiasco in which Google's prized AI bot, Gemini, kept spitting out images of Black Vikings and female popes, and changed the ethnicity of every Founding Father to a person of color.

Now, the Kansas Reflector, a non-profit news operation, is fighting a losing war with Facebook’s AI bot.

Facebook flagged an article about climate change as a security threat, and then blocked the domains of any publication that tried to share or repost the article.

Users who attempted to post the article received auto-generated messages stating that the content posed a security risk, with no further explanation.

Meta, the parent company of Facebook, Instagram, and Threads, is unrepentant. It says it has no idea why the Kansas Reflector's innocuous post was blacklisted by its AI bot, but attributed the problem to a likely "machine learning error."

The Kansas Reflector is in a feud with Facebook’s AI bot. Facebook/Kansas Reflector

Of course, there is zero accountability for these errors, which inflict immeasurable damage on brands and their reputations.

The promise of AI, we hear over and over again, is that it’s a tool to help humans do better, automating tasks to free up worker time for other things. But instead, AI looks far more like HAL 9000 in “2001: A Space Odyssey,” a computer that overtakes its human masters’ ability to control it and turns against humanity.

In Texas, the scoring of the writing portion of the state’s standardized STAAR test has been outsourced to an AI bot, ruffling the feathers of everyone in the Lone Star State.

“The automation is only as good as what is programmed,” said one school district superintendent. A student can easily be downgraded for writing an essay that doesn’t toe the ideological line coded into the system.

It’s clear that ideological line bends markedly in one direction, to the left. We know that not just from the Google Gemini debacle. Regarding the most popular AI chatbot, research from international scholars “indicates a strong and systematic political bias of ChatGPT, which is clearly inclined to the left side of the political spectrum.”

In another bad omen, Clint Watts, the former FBI special agent and MSNBC contributor who founded the Hamilton 68 dashboard, has been hired by Microsoft to run its disinformation unit during the upcoming election.

If Hamilton 68 isn’t ringing any bells, it was the left-wing think tank notorious for accusing legitimate right-leaning accounts of being Russian bots. It pushed the false narrative of massive Russian influence that was heavily relied upon by many mainstream media outlets and possibly even swayed the 2020 election.

And now this same guy is in charge of monitoring disinformation for Microsoft during the next election. What could possibly go wrong? Everything.

At the Equal Protection Project (Equalprotect.org), we’ve been screaming about the dangers of AI for over a year, and how bias in the name of anti-bias is being programmed into systems. Behind the scenes and out of sight, AI and social media algorithms can be used to determine what you are allowed to post, what you will be able to read, and ultimately what you will think.

Despite the promises of simplifying workflows and managing tasks, there’s far too much evidence of AI destruction to be ignored.

When it comes to AI, be afraid, be very afraid.

William A. Jacobson is a clinical professor of law at Cornell and founder of the Equal Protection Project, where Kemberlee Kaye is operations and editorial director.