Accelerate SF – Back to the Reporting Future!

I had a great time catching up with old friends and talking about new ways to visualize software quality at Tricentis Accelerate in San Francisco. Aside from hanging out and talking testing with Paul Grizzaffi on his continuing hunt for Sasquatch, and with my partner at Prestige Worldwide, Martin Hynie, I also got a front-row seat for Ash Coleman’s fantastic talk, “When You Say Context, Does That Include Me?”

I think this is a very important talk. It touches on a lot of ideas kicked around in the “Context-Driven Testing” world, ideas that seldom include the most important part of your context: people. Ash explores the idea that a great deal of your test approach is tied to the individual, and that your personal experience is a fantastic source of test ideas. I like this a lot because, as the cost of automation is driven ever lower while we await the takeover of our robot overlords, it places people (or the tester) back at the center of testing. I don’t know what her schedule is for 2019, but if you get the chance to hear this one, make the time.

My talk for the conference was about how bad test reporting has been over the years, and why it’s past time to rethink how we report and communicate software quality and testing progress. Having a look around the exhibition floor, I found that the vendors’ informal surveys on KPIs and metrics only reinforced the lack of imagination when it comes to reporting.

The problem with most test reports is that they actually make it harder to base release decisions on quality or risk. KPIs like “# of tests executed” or “# of manual vs. automated test cases” do nothing but distract from risks: threats to revenue, business value, and “quality” in your context. Worse, they don’t aid exploration, and they fog the lenses through which we view system quality.
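To make that concrete, here is a minimal, purely hypothetical sketch in Python. The test names, outcomes, and the idea of a single 0–1 risk weight per test are all invented for illustration; the point is only that the same run which looks healthy as a “# of tests executed” KPI looks very different once each test carries even a rough weight for the business risk it covers.

```python
# Hypothetical data: (name, executed, passed, risk_weight), where
# risk_weight is an invented 0-1 score for the revenue/business
# threat the test is meant to cover.
tests = [
    ("checkout_applies_tax", True, True, 0.9),
    ("checkout_charges_card", True, False, 1.0),
    ("profile_avatar_upload", True, True, 0.1),
    ("legacy_report_export", False, None, 0.7),
]

# The classic "KPI" view: a raw count with no connection to risk.
executed = sum(1 for _, ran, _, _ in tests if ran)
print(f"Tests executed: {executed}/{len(tests)}")  # "3/4" -- looks healthy

# A risk-oriented view: how much of the known business risk is
# actually covered by passing tests, and where the open threats sit.
total_risk = sum(w for *_, w in tests)
covered = sum(w for _, ran, ok, w in tests if ran and ok)
print(f"Risk covered by passing tests: {covered / total_risk:.0%}")  # "37%"
for name, ran, ok, w in tests:
    if not ran or not ok:
        print(f"  open threat ({w:.1f}): {name}")
```

Nobody’s weights will be this tidy in practice, but even a crude version of the second view points a release decision at the failing payment test and the unexecuted legacy test, which is exactly what a count of executed tests hides.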

The primary tools used to track this stuff (Jira, qTest, ALM, and our old friend Excel) are responsible for the really bad visualizations of quality that I see clients using ALL THE TIME: bad information and bad layout driving bad behavior and bad decisions. Firms I work with have little idea what tests they have or what those tests are doing. They don’t know the risks of not running them, and frankly they are building fragility into their systems by over-testing. I also see a lot of inefficiency driving up test times DESPITE the large amount of automation.

I don’t have all the answers right now, but I will be writing and speaking more about this through the year. Modeling software quality is a difficult problem to manage and can’t be fully solved, but when it comes to reporting on quality and testing, we can do a LOT better. I believe success lives somewhere in the nexus of machine learning, Visual Test Models, Model-Based Testing, and monitoring. For the time being, when it comes to reporting on quality, a good deal of “I don’t know” would go a long way toward solving the problem, but ultimately we need a way to “see” the problem differently, through lenses that more accurately reflect the difficult nature of reporting on quality.
