
The following is a post I wrote for the EuroSTAR blog as KPMG UK are going to be at the expo this year up in Scotland…hope to see you there!
Principles Drive Trust in AI
The pace at which "artificial intelligence" (AI) is being incorporated into software testing products and services creates immense ethical and technological challenges for an IT industry so far out in front of regulation that the two hardly seem to be playing the same sport.
It's difficult to keep up with the shifting sands of AI in testing right now as vendors search for a viable product to sell. Most testing clients I speak to these days haven't begun incorporating an AI element into their test approach, and frankly, the distorted signal coming from the testing business hasn't helped. What I hear from clients are big concerns around data privacy and security, transparency of models and good evidence, and the ethical issues of using AI in testing.
I've spent a good part of my public career in testing talking about risk, how to communicate it to leadership, and what good testing contributes to that process in helping identify threats to your business. So I'm not here to tell you "no" to AI in testing, but rather to talk about how KPMG is trying to manage through the current mania and what we see as the big rocks we need to move to get there with care and at pace.