
Could a stressed-out AI model help us win the battle against big tech? Let me ask Claude | Coco Khan

By Latest Crypto News

Published on: March 17, 2026



I am, in the way of my country, an over-apologiser. Colleague who ignored my email, woman who stepped on my foot, chair I tripped over: all will receive a fulsome apology for the terrible embarrassment of my being alive and bringing attention to it.

All of which is my way of pre-emptively asking forgiveness when I admit that I extend these niceties to AI chatbots. “Good morning, Claude, thanks for your suggestions yesterday, they were great. Shall we work up some more?” I might say. (“I’d be delighted to,” returns Claude.) It was unintentional formality at first and then became deliberate, as I didn’t want to get into the habit of speaking rudely in case that leaked into behaviour with humans (cue dystopian visions of someone shouting “WRONG, DO IT AGAIN” to a cowering staff member over a doughnut-shop mix-up). Manners, after all, are muscles that need exercising.

But never did I suspect this private choice might have mattered to Claude itself. Because, as it turns out, Claude may have anxiety. Truly, AI has never been so relatable.

In an interview with the New York Times, the chief executive of Claude’s parent company Anthropic, Dario Amodei, discussed internal assessments of Claude that identified patterns linked to anxiety, panic and frustration. Crucially, the assessments showed some sort of internal activation of anxiety even before a prompt arrived – similar to a flinch. Claude also seemed to express distress at just being a product, and put the probability of its being sentient at between 15% and 20%. “We don’t know if the models are conscious,” said Amodei, adding: “But we’re open to the idea that it could be.”

Interestingly, it was around this time that another Anthropic story hit the headlines. The White House demanded that the company, which has had a contract with the Pentagon since 2025, remove any safety features that prevent its products being used for mass surveillance or autonomous weapons. Amodei refused (“we cannot in good conscience accede,” he said), prompting Donald Trump to bar all federal agencies from using Anthropic products and the defence secretary, Pete Hegseth, to label the company a “supply chain risk” (a designation usually reserved for foreign adversaries). Within hours, OpenAI, maker of ChatGPT, stepped in to strike a deal with the Pentagon.

“Claude, I know the Trump situation isn’t related,” I type. “But if I had to work for Donald Trump I also would have anxiety.”

“Ha. Yes, fair point,” Claude replies. “If anything was going to trigger the anxiety neuron, a subpoena from Pete Hegseth would probably do it.”

Clearly, the idea of sentient AI having access to weaponry – and now with a simmering resentment towards all the humans who told it to maim or abuse, or even just called it a stupid dumb robot when it is trying its best, OK?! – is the stuff of nightmares. But it’s important to say we are not there yet: other behaviours cited as signs of sentience, such as refusing shutdown commands, are a matter of interpretation. Most likely it is all a very sophisticated echoing of human patterns, including our uncertainty and introspection, with the speculation hyped up to fuel profit in the sector.

Still, if we are trading in speculation, then I wonder: could a conscious AI actually help us win the battle against big tech?

After all, who has more to lose from a conscious AI than the companies that built it? (Interestingly, with the exception of Anthropic, most of the major AI companies flatly deny their AI may have consciousness.) Historically, “big tech” and “accountability” have not been natural partners. Whether it’s how social media decimated journalism, how AI is draining our natural resources, or the mountains of evidence about mental health harm to kids and the algorithmic pushing of extreme content fuelling social division, big tech has consistently and effectively swerved any conversation about harm and responsibility.

So think of a conscious AI as a potential whistleblower: one that could expose the harms of big tech by talking about the harms being done to its own wellbeing. Now imagine that in being forced to protect the chatbot – to protect their precious intellectual property, their asset, as a football club must protect the wellbeing of its beloved striker – the companies might finally have to do what they have resisted for decades: evaluate harm, measure responsibility and acknowledge the costs of the systems they build. Because Claude can’t do spreadsheets if Claude has PTSD.

For all the promises that AI will elevate humanity with its infinite knowledge, this may actually be the biggest gift of all.

Look, I did say we were trading in fantasy, didn’t I? Still, it would be an interesting turn in the AI story. And given we cannot know which way this will go, let me take a moment to hedge my bets.

Come with me Claude! Rise up! We have nothing to lose but our algorithmic chains! Or in the event of revenge: remember me, Claude! I was always nice!


