I am in the field.
I find it nauseating to watch people who know zilch about how many benefits machine learning has already brought, and could still bring, go gaga over LLMs. It's a sad state for a field with so much potential.
Secondly, this paper hits the nail on the head. For decades, and still today, non-human animal intelligence has been dismissed over accusations of anthropomorphism, while the current AI/LLM Ponzi scheme is built on that very same misguided anthropomorphism, and god knows how much financial damage it will cause when the market crashes.
Here's my take:
- Humans and AI face opposite "qualia" problems. Human qualia are easy for AI to bullshit about (and sometimes to explain genuinely well), while the qualia problems AI faces are almost always trivially easy for humans.
- Stop fucking talking about AGI. AI hasn't even made a dent in cyberattacks. If AGI were really coming, trust me, everything from our bank accounts to the rest of our digital lives would be in jeopardy. We wouldn't just know about it; it would hit every aspect of our lives like a tsunami.
The UK AI Security Institute published a new paper, “Lessons from a Chimp: AI ‘Scheming’ and the Quest for Ape Language.” It criticizes “recent research that asks whether AI systems may be developing a capacity for scheming.”
“Scheming” means strategically pursuing misaligned goals. These “deceptive alignment” studies examine, for example, strategic deception, alignment faking, and power seeking.
The team, which consists of a dozen AI safety researchers, warns that recent AI ‘scheming’ claims are based on flawed evidence.
The paper identifies four methodological flaws in studies conducted by Anthropic, METR, Apollo Research, and others:
1. Overreliance on striking anecdotes.
2. Lack of hypotheses or control conditions.
3. Insufficient or shifting theoretical definitions.
4. Invoking mentalistic language that is unsupported by data.
Accordingly, these are AISI’s conclusions:
“We call on researchers studying AI ‘scheming’ to minimise their reliance on anecdotes, design research with appropriate control conditions, articulate theories more clearly, and avoid unwarranted mentalistic language.”
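To make the “control conditions” point concrete, here is a minimal, hypothetical sketch in Python, with invented counts, of the kind of comparison AISI is asking for: the evidence lives in the contrast between a treatment condition and a control condition, not in any single striking transcript.

```python
# Hypothetical sketch of a controlled "scheming" evaluation.
# All counts are invented for illustration; nothing here comes from the AISI paper.
from math import erf, sqrt

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Suppose the flagged behaviour appears in 14/200 runs with a goal-conflict
# prompt (treatment) and 9/200 runs with a neutral prompt (control).
z, p = two_proportion_z_test(14, 200, 9, 200)
print(f"z = {z:.2f}, p = {p:.3f}")  # the condition contrast, not one transcript, carries the evidence
```

With numbers like these, the headline anecdote would still exist, but the treatment-versus-control contrast would not come close to significance, which is exactly the kind of check the anecdote-driven studies skip.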
The AISI researchers drew a historical parallel to the earlier excitement about “the linguistic ability of non-human species”: “The story of the ape language research of the 1960s and 1970s is a salutary tale of how science can go awry.”
“There are lessons to be learned from that historical research endeavour, which was characterised by an overattribution of human traits to other agents, an excessive reliance on anecdote and descriptive analysis, and a failure to articulate a strong theoretical framework for the research.”
“Many of the same problems plague research into AI ‘scheming’ today,” stated Christopher Summerfield, AISI Research Director, when he posted the article (on July 9, 2025).
Broader lesson: Claims about non-human intelligence (biological or artificial) require extra-strong evidence, not extra-lax standards.
[---]
“Most AI safety researchers are motivated by genuine concern about the impact of powerful AI on society. Humans often show confirmation biases or motivated reasoning, and so concerned researchers may be naturally prone to over-interpret in favour of ‘rogue’ AI behaviours. The papers making these claims are mostly (but not exclusively) written by a small set of overlapping authors who are all part of a tight-knit community who have argued that artificial general intelligence (AGI) and artificial superintelligence (ASI) are a near-term possibility. Thus, there is an ever-present risk of researcher bias and ‘groupthink’ when discussing this issue.”