“What we find is that if you have a goal that is very, very far out, and you approach it in little steps, you start to get there faster. Your mind opens up to the possibilities.” Dr. Mae Jemison
Every business wants to be more efficient, but what does that mean in the context of your quality engineering approach and digital transformation program? The desire for change is greater than ever: over 80% of companies recently polled by KPMG stated they have large transformation programs in place but lack confidence that they have the operating model to support the required changes. And those transformation plans have only accelerated since the pandemic! Inefficiencies within and between business operating models or products point to unrealized opportunities for collaboration, reuse, process improvement, and reduction in redundancy.
But as testers, or as those responsible for managing the approach to testing, what should we be looking for, and where, to increase our value to the business?
Two lenses for viewing your work that are continually on the optimization radar for enterprise IT are speed to market and your product delivery models. Getting products and services to your customers at speed, through integrated product delivery pipelines, increased automation, and greater transparency on risk, can help drive out inefficiencies. So how can testing inform the business about opportunities for optimization and transformation, as well as for capital allocation and investment in technology?
Redundancy or Inefficiency? (Dr. heal thyself…)
The first place testers can look to increase efficiency is overlapping effort on similar tasks that is unnecessary or unproductive. Far too often I have reviewed quality engineering or testing approaches that rely heavily on built-in redundancy, primarily through unprioritized test coverage and large regression test automation suites. Paired with the complement of meetings, reporting, and test management, the delta between what we need and what we are getting is my go-to first "target rich" environment for inefficiency.
Much has been written about systems thinking, or seeing "the forest for the trees," but in my opinion, "How Complex Systems Fail" by Richard Cook should be required reading for anyone responsible for managing testing or delivery in enterprise IT. Cook argues that systemic failure requires multiple smaller failures, and that complex systems contain, by design, many latent failures, known and unknown. Testers should use systems thinking and complexity models to look at risks and opportunities through their knowledge of front-to-back (F2B) flows, customer insights, and risk, and use that view to remove redundancy and waste from their test approach.
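One way to make that pruning concrete is to score each test by impact and likelihood and review the lowest scorers as removal candidates. The sketch below is a minimal, hypothetical model: the test names, fields, weights, and threshold are illustrative assumptions, not a prescribed framework.

```python
# Hypothetical sketch: score regression tests by risk so low-value,
# redundant cases can be identified for pruning or consolidation.
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    covers_flow: str        # front-to-back (F2B) flow the test exercises
    failure_impact: int     # 1 (cosmetic) .. 5 (customer/regulatory)
    change_frequency: int   # 1 (stable code) .. 5 (changes every release)


def risk_score(tc: TestCase) -> int:
    """Simple impact x likelihood score; higher means keep and run early."""
    return tc.failure_impact * tc.change_frequency


suite = [
    TestCase("payments_e2e", "order-to-cash", 5, 4),
    TestCase("legacy_report_footer", "reporting", 1, 1),
    TestCase("login_happy_path", "auth", 4, 2),
]

# Run highest-risk tests first; anything at or below the (arbitrary)
# threshold becomes a review target for removal.
prioritized = sorted(suite, key=risk_score, reverse=True)
prune_candidates = [tc.name for tc in prioritized if risk_score(tc) <= 2]
```

The point is not the arithmetic but the conversation it forces: every test must justify its place in the suite in terms of flow coverage and risk, rather than surviving by inertia.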
Org Dysfunction – Identification and Prevention
One of my favorite books on the sources and effects of org dysfunction is "Measuring and Managing Performance in Organizations" by Robert Austin. Austin examines what happens when targets are attached to measures, and how inappropriately targeted models or processes drive organizational dysfunction. In my experience, most testing measures and metrics drive dysfunctional behaviors in teams, and yet we continue to use them: test case counts, pass/fail ratios, test case efficiency. Along with agile velocity, invalid measures undermine our relationships with customers and the business when we "over promise" or make commitments based on an inaccurate understanding of IT processes. The insights provided through testing should increase clarity about our delivery capability, not muddy the waters or create data-driven distractions.
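A quick illustration of why a raw pass/fail ratio is one of those invalid measures: two runs can have identical ratios and wildly different business risk. The test names, data, and "severity" field below are invented for the example.

```python
# Hypothetical illustration: two runs with the same pass rate,
# only one of which should block a release.


def pass_rate(results):
    """Fraction of tests that passed, the metric under critique."""
    return sum(1 for r in results if r["passed"]) / len(results)


run_a = [  # one trivial failure
    {"test": "ui_tooltip", "passed": False, "severity": "low"},
] + [{"test": f"t{i}", "passed": True, "severity": "low"} for i in range(99)]

run_b = [  # one catastrophic failure, same ratio
    {"test": "payments_settlement", "passed": False, "severity": "critical"},
] + [{"test": f"t{i}", "passed": True, "severity": "low"} for i in range(99)]

# Both runs report 99% passing; the ratio alone cannot distinguish them.
same_ratio = pass_rate(run_a) == pass_rate(run_b) == 0.99

# Severity-aware reporting surfaces what the ratio hides.
critical_failures = [
    r["test"] for r in run_b if not r["passed"] and r["severity"] == "critical"
]
```

Once a team is targeted on the ratio, the rational move is to pad the suite with cheap passing tests, which is exactly the dysfunction Austin describes.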
These are just a couple of examples of how your approach to quality engineering and testing can improve or complicate the optimization of your digital transformation program. Testing reveals a wealth of detailed information about your delivery processes, waste, business operations, and capital allocation, and all you have to do is OBSERVE this information and report on it!