“Performance measurement – the most powerful inhibitor to quality and productivity in the Western world” – Robert D. Austin
I’ve reviewed a lot of test teams in the Enterprise Tech world, and I’m often asked how I go about looking for problems or identifying opportunities to get more value out of a test approach. The following is an attempt to sketch out the model I use to frame those reviews and look for areas for deeper investigation. All models are wrong, but I’ve found this one to be useful, so hopefully you can get some benefit from it too. But first some context…
“No amount of process improvement is going to solve your underlying problem: org dysfunction”
To paraphrase Jerry Weinberg’s The Secrets of Consulting, “there is always a problem and the problem is always people”. People are the biggest contributor to your context, and guess what: they don’t always do what management TELLS them to do, but they frequently do what management SHOWS them. The greatest determining factor in the success of your test approach is how people are comped. People do what they “perceive” they are being compensated for (salary, bonuses, attention, perks, etc.), so my advice, if you want to improve your test approach, is to start with why testers think they’re getting comped and look for misalignment with your strategy.
I always like to start my reviews by trying to understand how the testing team or testers approach their work and how deeply they think about it. One behavior that contributes to a general malaise in a testing team is the org constantly questioning the value of its testing. The symptoms of teams with a shallow view of their own value are common in outsourced/offshore testing teams (or the dreaded Testing Center of Excellence!).
Questions I like to dig into with teams that are perceived as low value by their business:
- Can they answer questions about the business/management?
- Do they only think about what’s directly in front of them – apps/tech?
- Can they model their user base – demonstrate any interactional expertise?
- Do they study the testing industry? Trends, conferences, social media – how are they learning their craft?
These teams will be doing the job, but only just, and typically suffer from all the classics: slow, inefficient, no innovation, lots of buzzword bingo but no real understanding of the business.
In my experience, testing teams that exhibit issues with their process are typically high on fear and low on trust. Unfortunately, I see these symptoms deeply rooted in organizational philosophy, and for an organization they are frequently terminal. Heavy use of FBM (Fear Based Management) techniques like metric scorecards, process flows, and swim lane diagrams is prevalent in these operations. Despite all the advances in our ability to understand and manage knowledge work, every time I think this approach is dead, I find another company doing it! Hallmarks of testing operations with trust issues (other than having really new or really old testers) include:
- Quality police promoting a QA/QC divide
- Test management offices who “manage the managers” or just color in charts/graphs
- Negative reactions to agile transitions for “testing services”
- Prevalence of “wish thinking” – wanting testing to be just about control/confidence
Testers who are obsessed with counting things, or who have an over-reliance on numbers and can’t tell a story through their reporting, typically treat testing as a role and not an activity, which can lead to very low trust and value outside of the team.
The last area I like to dig into to get a feel for the value proposition of a test team is their use of technology as it relates to their test approach. I’ve long held that there is a segment of the professional testing population that doesn’t actually like testing! This one can be hard to fish out, as testers who don’t like testing are often super enthusiastic about what they do. They often call themselves “embedded testers”, “SDETs” or “automated testers”. This problem is very common these days due to the current coding-obsessed iteration of “let’s get rid of all our testers”, but primarily because great testing is REALLY hard while fake TESTING is really easy. Signs you might have a problem with your test team’s approach to technology include:
- Tool fetishes or tool first approach which views testing as a strictly technological problem to solve
- Your testers can’t defend their work or explain the mission of their tests
- Their only approach to deciding what to automate is asking “can” and not “why”
Another good sign your testers might have a very shallow view of quality and testing is a dislike for SEMANTIC arguments, or what Dr. Feynman would call the “pleasure of finding things out”. IMO semantic discussions are NOT about micro word corrections but about coming to an agreed understanding of what we MEAN by the words we use. Hopefully this gives you some behaviors to look for and some questions to ask to try to unlock more value from your test team and approach. Happy hunting!