Why AI is Creating a Crisis of Authenticity for UK Leadership

Generative AI (GenAI) is being woven into almost every part of corporate life. It promises efficiency, speed and scale. Yet, as it spreads through boardrooms and back offices, it is quietly undermining the single most valuable asset any organisation holds: its reputation.

A recent analysis by Raconteur highlighted a growing problem on the internal front: employees are losing faith in their leaders. A significant portion of UK workers feel actively misled by AI-generated communications, with many naming AI as a direct cause of “eroding leadership credibility”.

For those of us who work in reputation, this is more than an internal communications headache. Weak governance over how AI is used is fast becoming a full-spectrum corporate reputation risk, touching investor confidence, regulatory exposure and public trust. The question for UK leaders is no longer whether they should use AI, but how they can use it in ways that strengthen, rather than hollow out, their human leadership.

The Danger of AI Speak

The central finding of the Raconteur study is stark: outsourcing empathy creates a crisis of authenticity. Leaders are being elevated for technical skill or commercial track record, then allowed to outsource emotional labour to machines. The result is a wave of bland, synthetic messaging that staff recognise instantly.

When AI is used for sensitive work such as performance feedback, recognition or difficult news, employees can tell when something has been written by a tool rather than a person who knows them. That gap is felt most sharply in moments that should carry care and nuance. Over time, it creates a pattern of dehumanisation in which people feel unseen and unheard.

Reputation professionals know where that path leads. Low morale, disengagement and rising churn do not stay inside the building. Reviews on employer platforms, informal networks and whistleblowing routes begin to carry a consistent story about culture and leadership. Sooner or later, those stories reach the outside world and start to shape how the organisation is judged.

The takeaway for leaders is simple: authenticity cannot be delegated. AI can be a very effective drafting assistant, but it should never be allowed to be the final voice on anything that touches human relationships. Every sensitive communication, however it starts, needs to be read, owned and adapted by a human leader who understands the individuals and the context.

Investor and External Stakeholder Scepticism

From an external perspective, AI is now firmly on the radar as a material financial risk.

A 2025 analysis of SEC filings by The Conference Board found that the proportion of S&P 500 companies disclosing AI as a material business risk increased fivefold between 2023 and 2025, with damage to reputation cited more often than any other AI-related concern.

The PwC 2025 Global Investor Survey paints a similar picture. Investors are enthusiastic about AI in principle: around four in five say they would be more likely to back companies that are pursuing broad AI transformation. At the same time, fewer than two in five feel they receive enough information about how AI is being governed, deployed and controlled. They want evidence that leadership teams understand the trade-offs and have credible guardrails in place.

Alongside this sits a more immediate content risk. Cheap, automated text and imagery have led to a flood of easy-to-produce, low-quality material often described as “AI slop”. In this environment, stakeholders may struggle to separate authentic corporate content from fabrications or AI “hallucinations”.

All it takes is one misstep: a flawed chatbot response on a sensitive topic, a misleading AI-generated summary of results, or an image that implies something the company never intended to say. These incidents travel quickly, often reaching social media long before legal, PR or investor relations teams can respond. In reputational terms, the cost can far exceed any short-term efficiency gain that prompted the use of AI in the first place.

The Regulatory Cliff

In the UK, these reputation risks sit within a tightening legal and regulatory framework built around UK GDPR and, increasingly, the Data (Use and Access) Act 2025.

The transparency and human oversight concerns highlighted in the Raconteur research are not only ethical questions; they also map directly onto legal duties.

Automated decision-making. Under UK GDPR, individuals already have rights related to decisions made solely by automated means, particularly where those decisions have legal or similarly significant effects. The Data (Use and Access) Act 2025 amends and supplements that framework. If leaders rely on opaque AI systems to make or inform decisions about promotions, redundancies or performance ratings, they may struggle to explain the underlying logic in a way that meets these duties. Once that gap is exposed, any legal dispute over unfair treatment will quickly spill into a reputational crisis.

Bias and discrimination. Generative models inherit the characteristics of the data on which they are trained. If that data reflects historical biases, the outputs are likely to do the same. When such tools are used in HR or customer decisioning without proper checks, organisations risk breaching the fairness and accuracy principles at the heart of UK data protection law. Regulatory enforcement in this space almost always comes with a second cost: visible damage to trust.

For UK leadership teams, the message is clear. They are accountable not only for what AI delivers but for how it delivers it, and for the explanations they can provide when it affects people’s lives.

The Reputation Roadmap

To realise the benefits of AI without undermining trust, leadership needs to move past simple adoption and into a more deliberate “bionic” model: human judgement setting the direction, with AI used as a controlled accelerant rather than an invisible decision-maker.

Three areas now look essential for reputation resilience.

Treat Transparent Governance as a Strategic Asset

Boards should sign off clear, practical guidelines on AI use that apply across key functions such as HR, communications and finance. At a minimum, those guidelines should address:

  • Disclosure. Be open when content, analysis or decisions have been assisted by AI, particularly where the audience would reasonably expect a human author.
  • Accountability. Name a senior individual who is ultimately responsible for each important AI-assisted output. Responsibility cannot sit with “the system”.
  • Verification. Require human review for accuracy, tone and impact for all high-stakes communications and decisions. Treat AI as a capable junior colleague whose work must always be checked.

Retrain for the Skills AI Cannot Supply

It is tempting to let AI cover the gaps where leaders feel least confident, especially in communication and people management. In the long run that weakens both leadership and culture.

Instead, organisations should invest in strengthening the human skills that create trust: empathy, listening, emotional intelligence and contextual judgement. AI can summarise patterns in data or sentiment, but only a human can weigh those patterns against organisational values and the lived experience of employees, customers and communities. Training, coaching and feedback for senior leaders should reflect this, with AI positioned as an analytical support, not a substitute for human presence.

Shape the Inputs That Inform Your Reputation

Public perception is increasingly mediated by AI assistants and summarisation tools that sit between stakeholders and official channels. Customers, journalists and investors may first encounter a company’s position through a short AI-written summary rather than a carefully crafted press release or report.

That shift demands a broader approach to reputation management. Leaders need to:

  • Monitor how AI tools represent their organisation, checking for inaccuracies or outdated narratives.
  • Ensure that accurate, detailed and up-to-date information about their policies and performance is easily accessible in machine-readable formats.
  • Work closely with communications, legal, risk and data teams to correct significant misrepresentations at source where possible.

A Leadership Test, Not a Technology Story

AI will continue to evolve and embed itself into corporate decision-making and communication. The real test for UK leaders is whether they treat it as a shortcut or as a prompt to raise their standards.

Organisations that protect authenticity, invest in human skills and embed ethical oversight into every AI-enabled workflow will be better placed to maintain trust with employees, regulators and investors. Those that do not will find that the reputational cost of beige, automated leadership is far higher than any efficiency it delivers.