Will AI Make It Easier To Limit Press Freedom?

March 13, 2024
Artificial intelligence is already helping to undermine press freedom, argues WatchDog Opinion, in large part by supercharging existing forces that are harmful to journalism. Photo illustration: Freepik.


By Joseph A. Davis

Journalists have been trying to cope with changing technology since the invention of the internet … of television … of the printing press? But now the rising ballyhoo about artificial intelligence presents a new set of challenges. And there’s a big one: Will AI bring supercensorship?

Just for the record, the WatchDog views much of what’s written about AI as hype. While news media owners fear that AI will steal their copyrighted content, for many journalists the big fear is that AI will learn how to write their stories and put them out of a job. But we observe that many, many journalists are managing to lose jobs without any help from AI.

 

What actually constitutes AI

is a huge question. It is not

one thing but a collection

of many technologies.

 

What actually constitutes AI is a huge question. It is not one thing but a collection (and orchestration) of many technologies. It is not some magic thing recently invented by OpenAI but rather a decadeslong evolution of these technologies.

We don’t want to get hung up on defining it. What we want is to avoid overgeneralizations based on fuzzy understanding. Sometimes people say AI when they mean computers or networks or technology — and those terms can be fuzzy too.

The bottom line is that AI is already helping to undermine press freedom. But it may not be the main villain. It is, in any case, helping the many existing forces hostile or harmful to press freedom to do their work.

 

Controlling the language

“Generative AI poses the biggest threat to press freedom in decades, and journalists should act quickly to organize themselves and radically reshape its power to produce news,” write researchers Mike Ananny and Jake Karr in NiemanLab.

“A truly free press controls its language from start to finish,” they add. “It knows where its words come from, how to choose and defend them, and the power that comes from using them on the public’s behalf.”

 

When AI bots get it wrong,

their Silicon Valley keepers

call it a “hallucination” and

watch their stock prices rise.

 

When journalists get the facts wrong, those mistakes are called errors (which they’re expected to correct), and their editors will scold and maybe fire them. But when AI bots get it wrong, their Silicon Valley keepers call it a “hallucination” and watch their stock prices rise.

Another deep look at AI’s dangers comes from Freedom House, a nonpartisan nonprofit founded in 1941 to fight fascism. Its major 2023 report, entitled “The Repressive Power of Artificial Intelligence,” found that global internet freedom was steadily declining.

Among the findings: “Generative artificial intelligence (AI) threatens to supercharge online disinformation campaigns.” And worse: “AI has allowed governments to enhance and refine their online censorship.”

 

Making lies seem like truth

The threat of AI to press freedom is easily ignored among all the other impacts it brings to journalism and publishing. Some are positive. Used right, AI could be an aid to fact-checking, targeted advertising or audience development.

But one ominous factoid from Freedom House: Dozens of countries are using AI to distort information online for political ends. It’s no hallucination.

This happens on the environmental beat too. Only a day after the August 2023 wildfire incinerated Lahaina in Hawaii, Chinese operatives mounted an online influence campaign to convince the world that the fires had been caused by a “weather weapon” from U.S. intelligence agencies.

The AI used by the Chinese hackers made the claim appear more legitimate. The claim isn’t true, as the WatchDog has already noted, but it is alarming.

 

Journalists have coped with

lies for decades. What’s new

and dangerous is the ability of

AI to make lies seem like truth.

 

And it’s not just the lies. Journalists have coped with lies for decades. What’s new and dangerous is the ability of AI to make lies seem like truth. AI can (if unchallenged) practically erase any distinction between falsehood and truth, making it very hard for journalists to do their job of truth-telling.

It’s also essential to understand that AI’s threat to journalism is amplified and effectuated by other current media trends. For example: concentrated media ownership. When a big company that owns a large fleet of media outlets takes up AI-generated falsehoods, the harm is multiplied.

Another force multiplier is the rapidly evolving landscape of social media platforms; when a few of these become dominant, their ability to magnify disinformation increases. When they allow anonymous posting and “moderation” by computers, it gets worse. Social media allows disinformation to propagate exponentially, or virally. (Looking at you, Elon.)

 

Transparency is the missing ingredient

So it’s not just AI, but computers — or not just computers, but what large networks of computers have become. Anonymity is easier on the internet, and authenticity is elusive on many fronts.

From a journalist’s perspective, the missing magic ingredient is transparency. An algorithm is just an algorithm, and it may not take an MIT degree to engineer one. What’s almost always missing is the ability of users to see what the algorithm is. Those banks of servers that constitute the “cloud” are guarded, mysterious and inscrutable. That’s part of what gives AI its mystique.

A million years ago (that’s 10 or 20 in computer years), software was invented that would listen to you dictate and spit out text in response. It cost money but was indispensable to some journalists, whose tendons were sore from banging on keyboards. And it didn’t always work perfectly.

That software is gone in its original form, bought up in the end by Microsoft. But today that knowledge has grown and evolved, and we are calling it a “large language model.” The WatchDog still wishes that transcribing interviews were faster, easier and more accurate.

 

There is hope — or might be,

if journalists stand up

and stick together.

 

There is hope — or might be, if journalists stand up and stick together.

The worldwide alliance Reporters Without Borders (aka RSF) worked with 16 partner journalism groups to come up with a “Paris Charter on AI and Journalism.”

It’s worth paying attention to the top three of its 10 principles:

  • Ethics must govern technological choices within the media;
  • Human agency must remain central in editorial decisions; and
  • The media must help society to distinguish between authentic and synthetic content with confidence.

Joseph A. Davis is a freelance writer/editor in Washington, D.C. who has been writing about the environment since 1976. He writes SEJournal Online's TipSheet, Reporter's Toolbox and Issue Backgrounder, and curates SEJ's weekday news headlines service EJToday and @EJTodayNews. Davis also directs SEJ's Freedom of Information Project and writes the WatchDog opinion column.


* From the weekly news magazine SEJournal Online, Vol. 9, No. 11.
