TL;DR - Key takeaway: don’t wait for perfection. In AI-driven crises, early clarity beats perfect messaging.
Artificial intelligence is now part of the media ecosystem whether we like it or not.
Search engines summarise stories before readers click through. Chatbots answer questions about your company using whatever information they can find. And generative tools produce financial briefings, draft social posts, and pull “key facts” from your newsroom in seconds.
And that’s great - convenience is VERY convenient. But it also creates a new layer of risk.
If AI pulls outdated data, blends your narrative with a competitor’s, fabricates a quote, or amplifies a deepfake, the correction cycle is different from traditional media errors. There is no single journalist to call. No clear publication to request a correction from. You are dealing with distributed outputs across platforms that continuously learn from whatever is already online.
A 2024 survey on generative AI in PR found that roughly 75 percent of professionals using AI worry that newer practitioners may rely too heavily on automated tools and lose core communications judgment. The same research showed only about one in five agencies consistently disclose their use of AI to clients, which points to a growing transparency gap.
In other words, the industry is still working out how to use these tools responsibly. Meanwhile, they are already shaping public perception.
So the question is not whether AI will affect your crisis response. It will. The question is whether you have a playbook for when it goes wrong.
AI in public relations is here. It’s not going anywhere.
Teams are already using AI across their day-to-day work. But the bigger (and more turbulent) shift is external.
AI systems now summarise your news, answer questions about your brand, and assemble “key facts” on their own, drawing on whatever information they can find.
That last point is important.
If AI systems draw on fake, unreliable or outdated information, they can produce outputs that sound confident but are incorrect. That risk has been repeatedly flagged in industry discussions about generative AI and communications.
This means your owned media is now your most important source material, not just a distribution channel.
And during a crisis, timing matters. If AI generates summaries before your official statement goes live, it can lock in a narrative before you have framed it yourself.
Which brings us to the scenarios you need to plan for.
This is one of the most common issues.
A chatbot states that your company was involved in a controversy that never happened. A search AI pulls a statistic from a ten-year-old blog post. An executive profile contains outdated role information.
It may seem minor. It’s not.
AI outputs often get screenshotted and shared internally or externally. They influence journalists, investors, candidates, and partners.
If the wrong information is online, AI will likely reuse it. And this can create a snowball effect. One model cites it, another picks it up, and suddenly, outdated facts start showing up everywhere.
This is more subtle, and more difficult to deal with.
An AI summary blends your company with a competitor’s positioning. It assigns their sustainability claims to you. It mixes up product lines. It merges reputational histories.
This usually happens when companies operate in similar categories and share keywords. Once a blended summary exists, other systems repeat it.
Repetition hardens perception.
Deepfakes are no longer theoretical.
In 2024, several companies reported incidents where fraudsters used AI-generated voice or video to impersonate executives in financial scams. Political deepfakes also surged globally, especially around elections. And this isn’t fringe anymore: according to Jumio’s Global Identity Survey 2024, 60% of consumers encountered a deepfake video in the past year alone.
We see this play out in the recent Netflix film G20, which turns deepfakes into a full-blown geopolitical action spectacle. It’s wildly over the top (and not exactly ‘good’), but uncomfortably close to the impersonation scams comms teams are already dealing with in real life.
If a manipulated video of your CEO making inflammatory remarks appears online, you do not have the luxury of a slow response.
Clarity stabilises the situation.
This is the scenario most teams underestimate.
A crisis breaks. Social chatter accelerates. Search AI produces a summary based on early reporting or speculation. Your official statement is still in draft.
The narrative gap is filled without you.
Speed now influences machine interpretation.
Fabricated quotes spread quickly, especially if they confirm existing biases.
AI tools can generate plausible executive statements that look real. Screenshots circulate. Context disappears.
Silence allows fiction to become fact.
Frameworks only help if they translate into action.
Presspage’s Crisis Lifecycle Framework breaks response into six stages:
Sense → Frame → Manage → Act → Recover → Improve
The framework is simple on purpose. It gives teams a shared language when things move fast.
Here’s how that actually applies to the scenarios you’ve just read.
This is your early-warning system. In an AI context, “sense” means monitoring more than just news coverage.
You should be actively checking what chatbots say about your company, how search AI summarises your coverage, and which sources those systems cite.
Practical move:
Add AI platforms to your regular crisis monitoring routine. If you're only keeping tabs on traditional media, you’re already late.
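To make that concrete, here’s a minimal sketch of what automated checking could look like: a script that periodically asks a chat model what it “knows” about your company and flags answers containing watchlist terms for human review. It assumes the openai Python package and an API key; the company name, model, prompts, and watchlist are all placeholders to adapt, not a vetted toolchain.

```python
# Minimal sketch: add a public chat model to your crisis monitoring routine.
# Assumes the `openai` package and an OPENAI_API_KEY in your environment.
from openai import OpenAI

COMPANY = "Acme Corp"  # placeholder
PROMPTS = [
    f"What do you know about {COMPANY}?",
    f"Has {COMPANY} been involved in any controversies?",
    f"Who leads {COMPANY} and what are its main products?",
]
# Terms that should trigger human review if they appear in an answer.
WATCHLIST = ["lawsuit", "scandal", "breach", "recall", "fraud"]

client = OpenAI()

def check_ai_narrative() -> None:
    for prompt in PROMPTS:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # swap in whichever model you monitor
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        hits = [term for term in WATCHLIST if term in answer.lower()]
        if hits:
            # Route to a human: log it, alert the team, open a ticket.
            print(f"REVIEW NEEDED ({', '.join(hits)}): {prompt}\n{answer}\n")

if __name__ == "__main__":
    check_ai_narrative()
```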
This is where most teams fall behind. Framing means publishing your version of events before speculation solidifies.
We’re not talking about a perfect statement, but a factual anchor.
That can be a short holding statement, a confirmation of what you have verified so far, or an initial acknowledgement published in your newsroom.
Practical move:
Create pre-approved crisis templates now. The middle of a live incident is not the moment to debate commas with legal.
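As a loose illustration, a pre-approved holding statement can live as a fill-in-the-blanks template that legal signs off once. Everything below (wording, names, URLs) is a placeholder, not recommended copy:

```python
# Illustrative pre-approved holding statement; every value in braces is a
# placeholder filled in during the incident. Not legal-reviewed copy.
HOLDING_STATEMENT = """\
{company} is aware of {incident_summary} reported on {date}.
We are verifying the facts and will publish updates at {newsroom_url}.
Statements not published on our official channels should not be treated as authentic.
Media enquiries: {press_contact}.
"""

print(HOLDING_STATEMENT.format(
    company="Acme Corp",
    incident_summary="a manipulated video circulating on social media",
    date="12 May 2025",
    newsroom_url="https://newsroom.example.com",
    press_contact="press@example.com",
))
```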
This is simple internal coordination. You’re getting comms, legal, security, leadership, HR, and customer-facing teams on the same page.
That includes agreeing on the verified facts, the approved language, and who communicates what, where, and when.
Practical move:
Treat your newsroom as the single source of truth. Everything else should point back to it.
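One hypothetical way to make that literal is a single approved-facts registry that statements, bios, and FAQs all render from, so a correction made once propagates everywhere. A sketch, with placeholder values:

```python
# Sketch: one approved-facts registry as the canonical source for every
# owned channel. All values are placeholders.
APPROVED_FACTS = {
    "company": "Acme Corp",
    "ceo": "Jane Doe",
    "newsroom_url": "https://newsroom.example.com",
    "press_contact": "press@example.com",
}

def render(template: str) -> str:
    """Fill any owned-media template from the single source of truth."""
    return template.format(**APPROVED_FACTS)

# Bios, FAQs, and statements all pull from the same registry.
print(render("For verified updates from {company}, visit {newsroom_url}."))
```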
Now we take things externally. This includes publishing your official statement, issuing corrections where AI outputs are wrong, and briefing journalists and key stakeholders directly.
Practical move:
When correcting AI errors, quote the false claim directly and replace it with verified facts. Ambiguous denials don’t travel as far as specific corrections.
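If your corrections live on your own pages, you can also make them machine-readable. The sketch below uses schema.org’s ClaimReview vocabulary to pair the quoted false claim with a verdict; note that fact-check rich results are typically reserved for independent fact-checkers, so treat this as illustrative structure rather than guaranteed search behaviour. The claim, names, and URLs are placeholders.

```python
# Sketch: a machine-readable correction using schema.org ClaimReview markup.
# The false claim is quoted verbatim and paired with the verified rating.
import json

correction = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://newsroom.example.com/corrections/2025-05-12",
    "claimReviewed": "Acme Corp recalled its entire 2024 product line.",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "alternateName": "False",
    },
    "author": {"@type": "Organization", "name": "Acme Corp"},
    "datePublished": "2025-05-12",
}

# Embed in the correction page's <head> as JSON-LD.
print(f'<script type="application/ld+json">{json.dumps(correction, indent=2)}</script>')
```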
Once the immediate noise settles, it’s time to assess damage.
Look at how AI systems now describe the incident, whether your corrections have propagated, and where coverage and sentiment have settled.
Practical move:
Run a post-crisis audit on your owned content. What gaps allowed misinformation to spread?
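Part of that audit can be automated. The sketch below assumes your newsroom exposes a standard sitemap.xml with lastmod dates, and flags anything untouched for a year, since stale pages are exactly what AI systems resurface; the URL and threshold are placeholders.

```python
# Sketch: flag stale pages in your newsroom from its sitemap, assuming it
# publishes <lastmod> dates. URL and staleness threshold are placeholders.
from datetime import datetime, timedelta, timezone
import xml.etree.ElementTree as ET

import requests

SITEMAP_URL = "https://newsroom.example.com/sitemap.xml"  # placeholder
STALE_AFTER = timedelta(days=365)
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def audit_stale_pages() -> None:
    root = ET.fromstring(requests.get(SITEMAP_URL, timeout=30).content)
    cutoff = datetime.now(timezone.utc) - STALE_AFTER
    for url_el in root.findall("sm:url", NS):
        loc = url_el.findtext("sm:loc", default="", namespaces=NS)
        lastmod = url_el.findtext("sm:lastmod", default="", namespaces=NS)
        if not lastmod:
            continue
        modified = datetime.fromisoformat(lastmod.replace("Z", "+00:00"))
        if modified.tzinfo is None:  # date-only lastmod values are naive
            modified = modified.replace(tzinfo=timezone.utc)
        if modified < cutoff:
            print(f"STALE ({lastmod}): {loc}")  # candidate for update or removal

if __name__ == "__main__":
    audit_stale_pages()
```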
This is where resilience is built.
Update your monitoring routine, your pre-approved templates, and your escalation paths.
Practical move:
Turn every AI incident into a checklist update. Today’s edge case becomes tomorrow’s standard scenario.
AI crisis management is the practice of handling misinformation, impersonation, fabricated content, and automated summaries generated by AI systems during a reputational incident.
It extends traditional crisis comms by accounting for machine-generated narratives.
There’s often no single publisher to contact, misinformation spreads faster, and AI systems may continue repeating errors even after corrections are issued.
Speed and owned media become much more important.
You start by correcting the original source, then publishing clear, structured content on your own channels. AI tools learn from what’s already online.
You can’t control every output, but you can influence what machines learn from.
Verify internally, publish a factual statement quickly, and centralise updates in your brand newsroom.
Avoid emotional responses. Stick to evidence.
Yes. Newsrooms, executive bios, FAQs, and official statements are primary source material for many AI systems.
If your owned content is outdated or fragmented, AI fills in the gaps.
AI hasn’t replaced the role PR plays (and won’t), but it has raised the bar.
Your newsroom now feeds machines as much as journalists, your response speed shapes automated summaries, and the way you structure your content shapes how machines interpret your narrative.
The fundamentals still apply, though: clarity, consistency, credibility…on steroids!
We’re hosting a live Crisis Simulator webinar where you’ll walk through a real-time AI-driven crisis scenario and make decisions as events unfold. It’s practical, slightly uncomfortable (in a good way), and designed for comms teams dealing with exactly the situations described in this article.
👉 Join the Crisis Simulator: How to manage a deepfake scandal in real time