After a long career of reviewing the various outputs of #softwaretesting (strategies, plans, test cases, #testautomation, etc.), I’ve never understood why people think conformance to internal/external standards will effect a better outcome. I realise it’s born out of a fundamental misunderstanding of what happens when you test something, paired with strong wishful thinking that software testing is analogous to manufacturing. But IME, content is key for #testing artefacts and frankly, I’ve been around long enough to see how all these standards usually give a false sense of security and, year after year, add to the “#softwarequalitymanagement” certification grift.
So I wasn’t that surprised to witness another leg of the race to the #artificialintelligence bottom in testing, with talk of creating #AI to check your test strategies, etc. for deviations from your internal or external standards. Apparently, all this investment in #ML and #AI is going to be used as some really, really expensive Rube Goldberg machine to #automate the lowest-value work in testing!
Someday we’ll get something useful for testing from artificial intelligence, but today is not that day…
(old man shouts at cloud rant over 😉)