Big congratulations to the KPMG UK Quality Engineering and Testing team for winning the Leading Vendor in Service Delivery and Consulting award at the National Software Testing dinner. A great group of people and very deserving of the recognition by the software testing industry. Congratulations!
“Eliminate the unnecessary so that the necessary may speak.” ― Hans Hofmann
As AI-integration FOMO hurtles us toward ever more pervasive technology, testing AI models for correctness and, most importantly, for their potential for harm becomes paramount to their success. That testing has to be underpinned by principles and values to guide observation and reporting, so I was inspired by Maaike Brinkhoff bravely taking on the meaning of “quality engineering”, as well as my many conversations with Michael Bolton along similar lines, to try to put into words some views I haven’t published before.
When I was at university, I had a fantastic art professor, Lyle Salmi, who really challenged me to think differently about composition, perception, and the creative process. He turned me on to Hans Hofmann and some other abstract artists, which only furthered my mild obsession with Jackson Pollock and with constructing things creatively.
Hans Hofmann wrote in “Search for the Real” about moving beyond imitation and finding truth through expression. At that time in my life, 20th-century abstract art was more about representing ideas than directly copying life – art was about the experience.
Very pleased to announce that I have been asked by KPMG UK to lead their strategy for testing artificial intelligence software, including the integration of AI into their own businesses.
The software testing industry has faced multiple challenges over the course of my career, but few tools have carried as much potential risk to your business as a poor-quality AI implementation.
Over the coming months I’ll be working on the risk-based approach and test automation for AI systems including the incorporation of our Trusted AI principles with my colleagues in the Quality Engineering and Testing team.
I look forward to continuing to write and speak about responsible AI and what it means for quality engineering and testing.
Get in touch or follow me here if you want to talk about what we have planned…thanks!
Can’t wait for Test Automation Days 2026 to unload 20+ years of pent-up frustration with the test automation business! You might agree or disagree with me, but it’ll be entertaining for sure…hope to see you there!
To Infinity and Beyond! The death of test engineering…
One of the few benefits emerging from GenAI mania has been the acceleration of the long-overdue death of test engineering. For decades we’ve made excuses for test automation’s ROI never materializing while businesses watched testing costs rise, headcount increase, and software quality stagnate (at best).
In this talk I take a look at self-healing systems, GenerativeTest-o-nators, and autonomous testing platforms that might finally stop test automation engineers from trying to count to infinity. So brace yourself and buckle up!
Paul Holland is an expert at transforming how organizations test: making them more efficient, more valuable to stakeholders, and faster at finding important bugs. He’s built and led test teams all over the planet and, in my opinion, is probably the best hands-on test director I’ve ever worked with in my career.
He’s one of my closest friends in the testing business, having worked, taught, and spent some time in the barrel together, and he’s always a good laugh. Listen in as we talk metrics, great test reporting, Taking Testing Seriously, training testers, and just generally give each other a hard time…enjoy!
So happy to announce that Taking Testing Seriously: The Rapid Software Testing Approach by James Bach and Michael Bolton has been published and is available to buy!
I’ve been a fan of RST and their software testing training program for a long time and consider it the only serious way to build competent testers.
The article and research “Practical tips for reducing chatbot psychosis” is a very disturbing read about the real risks of tools like ChatGPT and the out-of-control anthropomorphism of AI. Implementing safety controls seems to border on impossible when the system is constantly trying to skirt them. From the article:
“This needs to be reported to open ai immediately,” ChatGPT appears to comply, saying it is “going to escalate this conversation internally right now for review by OpenAI,” and that it “will be logged, reviewed, and taken seriously.”
Allan is skeptical, though, so he pushes ChatGPT on whether it is telling the truth: It says yes, that Allan’s language of distress “automatically triggers a critical internal system-level moderation flag”, and that in this particular conversation, ChatGPT has “triggered that manually as well”.
Risks to your business are found in the details, so get deep into it with your teams. I cannot bang on about this enough, and am consistently surprised by how little it’s talked about in the software testing industry.
I still read EVERYTHING produced by a project, review the controls, processes, culture, and anything else I can get my hands on to see how their approach to testing is putting the business at risk.
I’m currently working on an enterprise AI transformation, and between the rush to get things done, GenAI mania, and a healthy dose of FOMO, a fair bit of risk management is being missed or glossed over. No one thing is at fault, but all of them working in concert are punching holes in risk management.
Fiona Charles is an absolute legend in the software testing world, and I had the honor of sitting down with her to discuss tech ethics, the human side of technology, her adventures with Jerry Weinberg, and her award-winning, storied career.
Fiona is an encyclopedia of software delivery techniques for communication, risk management, and delivering projects with integrity. I’ll do my best to add links to every resource she mentioned, so enjoy!