AI in communication: use or abuse?
From drafting press releases to analyzing social media mentions or producing reports, artificial intelligence has transformed the work of communications professionals. Its speed, processing power, and ease of use allow almost anyone to write a statement, answer interview questions, or prepare a report in seconds. However—setting aside the crucial ethical debate for a moment—are we putting creativity, message control, and professionalism at risk by relying on it? Moderating, or at least rethinking, the use of AI in PR does not mean slowing down innovation, but rather finding the right balance between automation and professional judgment.
Tasks such as media monitoring have been supported by intelligent bots for some time now—not only to identify coverage, but also to provide fast and accurate sentiment analysis in near real time. This enables teams to assess campaign effectiveness and even anticipate reputational crises before they escalate. Yet it is in content generation that AI usage is growing at an unprecedented pace.
Content is the crux of the matter
ChatGPT has marked a turning point for the industry. Newsrooms—whether in media outlets or press offices—are increasingly shaped by a direct, somewhat repetitive style, built on thousands of similar stories already written and on a not-so-human “imagination.” AI undoubtedly brings efficiency, but campaigns that truly connect emotionally with audiences still require intuition, sensitivity, and human insight—qualities AI has yet to fully deliver.
Moreover, blindly trusting AI to write or make decisions can result in messages that lack cultural, political, or social sensitivity—particularly risky in multicultural environments or during crisis situations. Because, just like humans, AI carries biases. These biases do not stem from a brand’s legitimate positioning on certain issues, but rather from the perspectives and limitations of those who train and program the systems, sometimes overlooking critical emotional or symbolic nuances. And in communication, those nuances are everything.
Then there is the risk of dependency. The fact that AI is always available makes it tempting to use it even when it is not truly needed—at the cost of losing writing skills, critical thinking, and professional rigor along the way.
Not rejection, but reflection
The real challenge—and we assume many will agree—is not whether to use AI, but how to integrate it strategically, coherently, and ethically. Agencies, for example, can start by clearly defining which tasks AI should support and which it should not (yes to reports, no to speeches). Above all, there must always be human oversight to ensure consistency with brand tone, contextual relevance, and accuracy: essentially, the traditional practice of fact-checking, applied to a new tool.
Artificial intelligence is not a threat to communications professionals, as long as it is used as an ally rather than a replacement. Moderating its use is not a step backward, but a step forward guided by common sense. Because the best ideas—at least for now—do not come from an algorithm.
NOTE: Yes, ChatGPT collaborated in the writing of this article.
