
Despite what you may have heard or read, AI in testing is not neutral or low risk. The fever dream being sold, of closed-loop systems validating themselves with minimal external oversight, will make failures more systemic, less visible, and harder to correct.
More than ever, we need to push back against a hype cycle that sells AI adoption as inevitable, discourages skepticism, and trades on FOMO. We’ll explore:
- Metric validity and self-validating systems creating systemic, less visible failures
- How AI-augmented testing can amplify both competitive advantage and regulatory exposure
- The moral hazard of “closed loop” automation without human accountability and the myth of “ethical AI”
- A path forward for the testing business built on integrity, risk management, and principled leadership
Join me in Germany at Agile Testing Days 2026 as I try to separate fact from fiction, snake oil from serious work, and unwind all the hype so we can set a course grounded in reality that puts people back into focus. Hope to see you there!
