Very pleased to announce that I have been asked by KPMG UK to lead their strategy for testing artificial intelligence software, including the integration of AI into their own businesses.
The software testing industry has faced multiple challenges over the course of my career, but few tools have had as much potential for risk to your business as a poor-quality AI implementation.
Over the coming months I’ll be working with my colleagues in the Quality Engineering and Testing team on a risk-based approach and test automation for AI systems, including the incorporation of our Trusted AI principles.
I look forward to continuing to write and speak about responsible AI and what it means for quality engineering and testing.
Get in touch or follow me here if you want to talk about what we have planned…thanks!
Can’t wait for Test Automation Days 2026 to unload 20+ years of pent-up frustration with the test automation business! You might agree or disagree with me, but it’ll be entertaining for sure…hope to see you there!
To Infinity and Beyond! The death of test engineering…
One of the few benefits emerging from GenAI mania has been the acceleration of the long overdue death of test engineering. For decades we’ve excused the ROI of test automation never materializing for a business watching testing costs rise, headcount increase, and software quality stagnate (at best).
In this talk I take a look at self-healing systems, GenerativeTest-o-nators, and autonomous testing platforms that might finally stop test automation engineers from trying to count to infinity. So brace yourself and buckle up!
Paul Holland is an expert at transforming how organizations test: making them more efficient, more valuable to stakeholders, and faster at finding important bugs. He’s built and led test teams all over the planet and, in my opinion, is the best hands-on test director I’ve ever worked with in my career.
He’s one of my closest friends in the testing business; we’ve worked together, taught together, and spent some time in the barrel with each other, and he’s always a good laugh. Listen in as we talk metrics, great test reporting, Taking Testing Seriously, training testers, and just generally give each other a hard time…enjoy!
So happy to announce that Taking Testing Seriously: The Rapid Software Testing Approach by James Bach and Michael Bolton has been published and is available to buy!
I’ve been a fan of RST and their software testing training program for a long time and consider it the only serious way to build competent testers.
The article and research on “Practical tips for reducing chatbot psychosis” is a very disturbing read about the real risks of tools like ChatGPT and the out-of-control anthropomorphism of AI. Implementing safety controls seems to border on impossible when the system is constantly trying to skirt them. From the article:
“This needs to be reported to open ai immediately,” ChatGPT appears to comply, saying it is “going to escalate this conversation internally right now for review by OpenAI,” and that it “will be logged, reviewed, and taken seriously.”
Allan is skeptical, though, so he pushes ChatGPT on whether it is telling the truth: It says yes, that Allan’s language of distress “automatically triggers a critical internal system-level moderation flag”, and that in this particular conversation, ChatGPT has “triggered that manually as well”
Risks to your business are found in the details, so get deep into it with your teams. I cannot bang on about this enough, and am consistently surprised by how little it’s talked about in the software testing industry.
I still read EVERYTHING produced by a project, review the controls, processes, culture, and anything else I can get my hands on to see how their approach to testing is putting the business at risk.
I’m currently working on an enterprise AI transformation and, between the rush to get things done, GenAI mania, and a healthy dose of FOMO, a fair bit of risk management is being missed or glossed over. No one thing is at fault, but all of them working in concert are punching holes in risk management.
Fiona Charles is an absolute legend in the software testing world and I had the honor of sitting down with her to discuss tech ethics, the human side of technology, her adventures with Jerry Weinberg, and her award-winning, storied career.
Fiona is an encyclopedia of software delivery techniques for communication, risk management, and delivering projects with integrity. I’ll do my best to add links to every resource she mentioned, so enjoy!
The Ministry of Testing has a great tradition at their conferences called “99 Second Talks” where anyone can get up and talk about anything for 99 seconds. It’s a great way to introduce new things, give speakers an opportunity to get on stage without a CFP, or work out ideas for future talks. I’ve never given a 99 Second Talk, so at TestBash 2025 I decided to give it a go.
I shared a story: if you don’t know me, you probably didn’t realise that I went through something this summer that taught me a lot about overcoming adversity, resilience, and picking yourself up after defeat.
This summer I decided to take on my teenaged son in a structured debate about whether or not Batman was an antihero – my position being that Batman IS an antihero.
The grift in the software testing business never ends…
I’ve spent a LOT of time lately reviewing docs, sitting through demos, listening to “experts”, and enduring the bombardment of AI slop being hurled from testing vendors, and let me tell you – ain’t nothing new under the sun in our industry.
Testing hasn’t changed. Testing hasn’t fallen behind. Testing isn’t the bottleneck. Testing isn’t actually the problem.
**The problem is a vicious cycle of unsustainable rates of change requiring endless system #enshittification to meet the demands of an increasingly pervasive technology ecosystem run by bonus-driven caretaking management.**
But that hasn’t stopped the AI grift from going into overdrive – selling solutions that don’t exist for problems that aren’t real to people who are quite happily using that as cover to fire people they never should have hired in the first place.
If you work in software testing here’s my advice:
– Learn everything you can about AI and the language being bandied about, so you can figure out your entry into that world and use the “right words”. A LOT of what I am seeing is just old concepts being renamed, redescribed, or simply hijacked for marketing purposes. Old wine in new bottles…
– Learn about test design, experimentation, exploratory testing, and how to TALK ABOUT RISK. These are the skills that have never gone out of demand and will be super important if any of this AI mess gets to production…
– Learn about the regulatory environment: despite my scepticism about enforcement, there are a whole bunch of new implications around what’s real, who’s at fault, and unasked questions about agency and pushing slop to production…
Every bubble bursts (or at least deflates a little) and as I’ve said before, I don’t think all these “early movers” have an advantage over people taking their time for some critical thinking. Frankly, the testing business doesn’t seem to have a clue right now anyway, so concentrate on core skills, learn the lingo, and watch the firings continue (I’m looking at you test automation engineers)…