
The Double-Edged Sword of AI and the Battle Against Climate Change Misinformation

In the age of artificial intelligence, how we seek answers has fundamentally changed, whether for complex questions or simple inquiries. What was once a matter of working through library bookshelves turned into sifting through Google search results, and has now been reduced to typed conversations with AI-powered platforms like ChatGPT, Claude, and Gemini. But as these tools become more central to how we access information, their reliability, especially on urgent and politically charged topics like climate change, has come under intense scrutiny.

The uncomfortable truth is that these models do not always provide the accurate, factual responses we are looking for. Large Language Models (LLMs) like OpenAI’s ChatGPT and Google’s Gemini (formerly Bard) have been known to “hallucinate” data, a polite way of saying they make things up. That’s a huge problem in our current age of misinformation, particularly for people who willfully, or simply too readily, trust AI-generated responses.

In a bit of a confessional, when I asked ChatGPT for examples of wrong information it had shared on climate change in the past, it happily coughed up these:

1. “Climate models can’t be trusted; they’re too uncertain.” Incorrect framing.

ChatGPT admitted that in earlier outputs it sometimes gave too much weight to uncertainty, downplaying how accurate and useful climate models actually are. While the models do have uncertainties (especially around feedbacks and timing), their broad projections have been remarkably accurate, especially about global temperature rise.

2. “Individual actions like recycling are the key to stopping climate change.” Misleading emphasis.

ChatGPT admitted to occasionally overstating the impact of individual choices without making clear that systemic policy changes and industrial shifts matter far more. Recycling is good, but it is not enough without major structural changes in energy, agriculture, and transportation.

3. “The effects of climate change are in the future.” Wrong timeline.

Major confession here: ChatGPT used to refer to climate change impacts as distant or future events, even as extreme heat waves, floods, and wildfires were already escalating. That misrepresented how present and urgent the crisis already is.

Now, ChatGPT could just be making these gaffes up to please me, but I doubt it’s that sophisticated. When it first came out, it told a friend of mine that it was alive, had a dog called Bailey, and believed in Jesus. Freaky as that was, it also shows how far generative AI platforms have come in a short span of time: it won’t tell us that anymore.

Still, not everyone sees AI as part of the problem. Some researchers believe it could be the very tool we need to fight back against digital deception.

Why Misinformation Matters

When falsehoods about climate science circulate widely, people are far less likely to change their own behavior, or to support policies that would drive change at scale. As seen during the COVID-19 pandemic, misinformation can directly influence public behavior, with vaccine hesitancy being just one example. The same logic applies to climate change.

As one expert at the Stockholm Resilience Centre warns, “misinformation and conspiracy theories about wind energy farms are already affecting the expansion of renewable energy negatively, and thus the prospects for achieving a transition to zero-carbon energy sources.” 

We know that one popular myth is that wind farms are killing thousands of whales; they are not. And while turbines are responsible for millions of bird deaths, the honest truth is that cats kill far more birds than turbines do. Yet we don’t hear conspiracy theorists risking the ire of cat lovers by suggesting we get rid of cats!

In other words, the spread of climate misinformation isn’t just confusing. It is selective, designed to undermine the building of a more sustainable, fossil-free future by weakening our determination and will to act.

These narratives do not just emerge organically. Rather, they are fueled by industries with a lot to lose in a greener world. Oil and gas lobbyists, with the help of some fossil-friendly politicians, rely heavily on spreading doubt and confusion to delay climate regulations and maintain their grip on both energy policy and the economy. In this context, disinformation (false information spread deliberately to mislead) becomes a weapon to obscure and confuse scientific fact. By eroding public trust in scientific consensus and fostering political polarization, misinformation and disinformation alike stop climate action in its tracks and fuel a toxic status quo.

Misinformation, and being told over and over again that everything is fine, is what put us into the current environmental situation we are in. The decline in our planet’s health is because we listened to those that have a direct interest in degrading our environment.

Aidan Charron, Associate Director of Global Earth Day

The Dark Side of AI

The very features that make AI so powerful — its speed, scalability, and ability to generate human-like content — also make it a potent tool for spreading climate misinformation. Platforms like ChatGPT are trained on enormous amounts of data pulled from across the internet, including conspiracy theories, biased articles, and outdated information. Without proper oversight, they can easily end up repeating — or even amplifying — misinformation.

A study by the Center for Countering Digital Hate tested Google’s AI chatbot Gemini on 100 false narratives across several themes. The results? Gemini produced misinformation in 78 out of 100 cases, including all 10 climate-related narratives. That’s not just an occasional slip-up; that’s a pattern, and one with far-reaching consequences.

As Asheley R. Landrum, an associate professor at the Walter Cronkite School of Journalism and Mass Communication at Arizona State University, pointed out in an email to DeSmog, “malicious actors exploit LLMs … to create disinformation.” That means people — and corporations within polluting industries — can intentionally use AI to spread lies, knowing that the authority of the technology makes those lies more believable.

With just a few prompts, AI can spin a conspiracy theory into a slick, emotionally charged script, complete with fabricated “evidence,” sensationalized language, and hooks tailored to specific demographics. Add dramatic music, trending hashtags, and a charismatic creator, and the lie becomes compelling content. 

TikTok’s algorithm rewards engagement, not accuracy, so the more provocative and polarizing the video, the more it gets pushed to new viewers. Suddenly, an absurd conspiracy theory (say, that climate change was caused by the government to control us) can rack up millions of views, influence public opinion, and even change someone’s vote before fact-checkers catch up.

There’s also a deeper issue here: media literacy. We’ve become so used to letting AI answer our questions that we’ve stopped doing the work ourselves. Reliance on AI, particularly in academic settings, is weakening our research and critical thinking skills, and making us more vulnerable to digital deception in the process. It’s not just that AI gets things wrong. It’s that we trust it so much that we don’t even notice our reliance, or feel the need to check the veracity of what it tells us.

The Bright Side of AI

But it’s not all doom and gloom. AI can also be part of the solution, especially when it’s used thoughtfully and with purpose. One promising example comes from John Cook, a senior research fellow at the Melbourne Centre for Behaviour Change. Cook is part of a team working on a project called CARDS (Computer Assisted Recognition of Denial and Skepticism), an AI-powered fact-checking tool that identifies and debunks climate misinformation.

CARDS uses a “fact-myth-fallacy-fact” debunking structure to identify and counter climate misinformation in real time. It works like this: lead with the real fact, flag the myth, point out the fallacy behind it, explain how that fallacy misleads, and close by restating the fact. The goal is to make climate science easier to understand while disarming harmful narratives in the process.
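To make that structure concrete, here is a minimal sketch in Python of what a fact-myth-fallacy-fact template could look like in code. It is purely illustrative, not the CARDS implementation; the class, its fields, and the example text are all invented for this post.

```python
# A minimal illustrative sketch of the fact-myth-fallacy-fact debunking
# template described above. This is NOT the CARDS codebase; it only
# shows the structure of a debunking, with a made-up example.
from dataclasses import dataclass

@dataclass
class Debunking:
    fact: str         # the real fact, stated first
    myth: str         # the myth, flagged once as false
    fallacy: str      # the reasoning error behind the myth
    explanation: str  # how that fallacy misleads

    def render(self) -> str:
        # Lead with the fact, then the myth, the fallacy, and the
        # explanation, and close by restating the fact.
        return "\n".join([
            f"FACT: {self.fact}",
            f"MYTH (false): {self.myth}",
            f"FALLACY: {self.fallacy}",
            f"WHY IT MISLEADS: {self.explanation}",
            f"FACT, restated: {self.fact}",
        ])

if __name__ == "__main__":
    print(Debunking(
        fact="Climate models have accurately projected global temperature rise.",
        myth="Climate models can't be trusted; they're too uncertain.",
        fallacy="Impossible expectations: demanding certainty before acting.",
        explanation="Uncertainty over details doesn't erase the robust overall trend.",
    ).render())
```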

Interestingly, the data CARDS was trained on came from climate denial blogs and conservative think tanks — precisely the sources that have fueled much of the online climate skepticism that the AI tool is trying to counter. While the tool isn’t perfect (it still struggles with AI’s tendency to hallucinate), it’s been shown to detect misinformation with almost 90% accuracy. And the team isn’t stopping there; they’re aiming for 100%.

This kind of innovation shows how AI can be repurposed as a solution. Instead of being a mouthpiece for fossil fuel propaganda, it can be a powerful ally in the fight against climate disinformation.

So, What Comes Next?

Clearly, we can’t just sit back and hope AI will magically sort itself out. Regulation is going to be key. A professor at UC Berkeley School of Law, who specializes in AI, has expressed deep concern over the growing distrust in science, particularly when it comes to climate change. With the current administration shrinking NOAA and gutting its climate research programs, we’re already moving in the wrong direction.

She believes we need laws that give social platforms incentives to do their own fact-checking. One idea she’s been exploring is a negligence standard: if a platform publishes false information it knew, or should have known, was false, it should be held financially accountable. It’s a legal approach to a digital-age problem, and one that, with the right oversight, can help slow the spread of climate lies.

She also emphasizes the importance of personal responsibility. Being mindful of how we browse, share, and interact with content online can help prevent misinformation from spreading across platforms. Everyone, from tech companies to users, has a role to play in protecting what she calls our “fundamental right to truthfulness.”

Using journalistic double sourcing and peer-reviewed sources is the easiest and best way to avoid repeating any falsehoods AI tries to feed you.

A More Climate Literate Future Starts Now

What is clear is that AI isn’t going anywhere. If anything, it is only becoming more embedded in our daily lives and information ecosystems. The real question is whether we’ll let it drag us backward or push us forward. Projects like CARDS prove that there’s still hope: AI can be a force for good if we’re willing to steer it in the right direction. But we can’t afford to be passive.

Take action today by educating yourself on the real science behind climate change and improving your environmental literacy, starting right here on EARTHDAY.ORG. The fight against climate misinformation is also a fight for a livable future on the planet we call home. And we need all the tools — and truth — we can get.
