You can’t trust your AI-enabled test team.
The skilled testing community has long held up Thinking, Fast and Slow as an intellectual foundation of our craft. We’ve talked and written about Kahneman’s classic and how it relates to testing, using both System 1 (fast, intuitive, often wrong thinking) and System 2 (slow, deliberate, analytical reasoning) to build trust in our processes.
We argued that skilled testing is knowledge work that draws on both systems: it requires resisting the easy answer, recognizing when intuition is misleading us, and engaging in deeper exploration. That approach worked in a world where the answers you trusted were the ones you researched and constructed yourself.
But we don’t live in that world anymore.
The paper Thinking Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender introduces System 3: external, algorithmic cognition that produces answers on demand through AI. Not suggestions, but fully formed responses that arrive with a level of sophistication, articulation, and confidence that’s hard to ignore.

Tri-System Theory of Cognition
And with the introduction of that third system, the researchers have also identified a new risk for testers: cognitive surrender.
“Unlike cognitive offloading, which is typically strategic and task-specific (e.g., using GPS to navigate), cognitive surrender entails a deeper transfer of agency. Whereas cognitive offloading is a strategic delegation of deliberation, using a tool to aid one’s own reasoning, cognitive surrender is an uncritical abdication of reasoning itself.”
Cognitive surrender in testing is not the familiar mistake of trusting an automation tool too much. The risk to testers is a behavioral shift, a “transfer of agency” in which not only the thinking but the entire cognitive process is outsourced to an algorithm.
“Whereas automation bias focuses on specific errors of omission or commission in response to automated tools, cognitive surrender describes a broader disposition of epistemic dependence. In cases of cognitive surrender, the user does not just follow System 3: they stop deliberative thinking altogether.”
The paper is pretty blunt about what happens next.
In the study (and frankly, in my experience with a lot of tech professionals), people didn’t just use AI for assistance; they followed it almost every time, even when it was wrong. In their experiments, participants adopted incorrect AI output at very high rates, and their performance dropped below what they would have achieved without the assistance.
And even more dangerous for testing is that alongside that drop in accuracy comes an increase in confidence. People felt more certain about answers they didn’t generate or verify, which means the AI model didn’t just influence the outcome; it shaped perception.
I’ve written before about how Testing Shapes Reality, and the toxic combination of lower accuracy and higher confidence turns AI-enabled testing into a supercharged, high-risk enterprise for your business. But it gets worse: the research also suggests that these performance drops due to cognitive surrender aren’t evenly distributed. People who already trust AI are more likely to adopt its answers, and, in my experience, those same folks don’t naturally engage in deep analysis.
Sound familiar?
Over the last twenty years, long before AI entered the picture, the testing industry has been surrendering to every vendor, agilista, and tool jockey who promised that teaching every tester to code would reduce reliance on human judgment. But all that did was dumb down the profession and fill it with teams of people who mistake tests for testing, now lining up to be willing participants in a complete surrender of our craft to AI.
And this is where the old model breaks down. Kahneman’s framework assumed that when System 1 made a mistake, System 2 could step in, but System 3 (AI) changes that sequence. The answers don’t pass through the crucible of deliberation; they arrive pre-reasoned and, most likely, already accepted.
“System 3 can replace System 1 by offering confident, ready-made answers that preempt the need for intuitive reasoning” or “suppress System 2 by diminishing the motivation or perceived necessity for reflective thought”.
Which is why adding AI to your already underperforming test team is like test automation on steroids. The more they rely on AI, the more productive they appear, but in reality your testers are delivering less value, less understanding, and less risk management.
So the questions remain for the testing industry and the consumers of our services:
Can you trust a process that produces answers faster than it produces understanding?
Can you trust outcomes that come with high confidence but low scrutiny?
Can you trust a team that stops thinking at the exact time thinking matters most?
The future of AI-enabled testing won’t be defined by how effectively we use AI, but by how deliberately we resist relying on it and whether we care enough to ensure that thinking still matters.