
“The masters are liable to get replaced because as soon as any technique becomes at all stereotyped it becomes possible to devise a system of instruction tables which will enable the electronic computer to do it for itself. They may be unwilling to let their jobs be stolen from them in this way. In that case they would surround the whole of their work with mystery and make excuses, couched in well-chosen gibberish, whenever any dangerous suggestions were made.” – Alan Turing
“Free your mind, and the rest will follow” – En Vogue
In my Great Liberation series (Part I, Part II), I argued that the way AI is being rapidly injected into and measured in software testing, a discipline that profoundly shapes modern life, has crossed into borderline recklessness. Banking, healthcare, transport, education, and government infrastructure all run on software we need to be reliable, yet AI adoption is racing ahead at a pace human judgment can’t keep up with.
This matters because AI is being treated as a substitute for critical thinking and responsibility rather than as a tool that demands more of both. Poorly tested systems cause real harm, and when organizations replace human evaluation with automation in the name of speed or cost savings, they increase the risks to their business and to society.
Despite what you may have heard or read, AI in testing is neither neutral nor low risk. The dream being sold, of closed-loop systems validating themselves with minimal external oversight, will make failures more systemic, less visible, and harder to correct. More than ever, we need to push back against the hype cycle that sells AI adoption as inevitable, discourages skepticism, and trades on FOMO.
But unfortunately, the software testing industry has failed to meet the challenge.