“The scientist has a lot of experience with ignorance and doubt and uncertainty, and this experience is of very great importance, I think. When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty damn sure of what the result is going to be, he is in some doubt.” – Richard Feynman
I recently had the distinct privilege of watching an expert tester at work. I wouldn’t call this person a “test manager” or “test lead”, even though what they were doing would probably be categorized as an activity associated with both of those roles. No, I would give them the honor of calling them an expert tester - someone using all of their knowledge and skills developed through years of practicing their craft. And they weren’t even testing software; they were testing ideas. Testing assumptions. Testing people. Testing themselves. It was a thing of beauty.
In my almost 20 years of working in this field, I’ve met only a handful of people who could have been dropped into this scenario and come out the other side alive. I didn’t have a scrap of paper to give this person, and I had not had a single discussion with the project team yet. No preparation. No context. Nothing. Just a room full of COOs, quants, and extremely senior business sponsors, all looking for someone who could help them. The elegance and poise this tester demonstrated quickly punctured the tension in the room, and they were free to set about their work – asking excellent questions and not stopping until they were satisfied.
What struck me the most after the meeting was the feeling of excitement and exhilaration from everyone in the room. In foolishly worrying that I had set everyone up for failure, I had forgotten one of the hallmarks of an expert tester: they are comfortable with complexity and ambiguity. They thrive on it. And they don’t need “best practices” or a two-inch-thick manual on how to get the job done. Their weapon of choice is their mind.
And therein lies the difference between the prevalent “best practice” and insurgent “skilled testing” communities. In my experience, the people who espouse “best practices” are the ones who do the least to learn and practice their craft. A skilled tester is comfortable with ambiguity and does not hide behind it or disguise meaning through nonspecific language. People obsessed with “best practices” look to quantification as a way of improving numerical efficiency, and use words like “smug” to describe those who feel “Not everything that you can measure matters, and not everything that matters can be measured.”
“I know what it means to know something”
This approach is similar to what Nassim Taleb describes as the “ludic fallacy” – incorrectly applying statistical models where they don’t belong – and I believe it leads the “best practices” crowd into a state of “survivorship”. In that state, despite a complete lack of evidence and scientific analysis, you want to believe that your strategy and practices were the reason you were successful – simply because you didn’t fail. We are consistently “fooled by randomness” into thinking that we actually know something, and that we can apply standards and practices without context or skill and get a better result. Wishful thinking is a powerful force.
This has been the prevalent approach to software testing for the last decade or more, and it fuels the hordes of consultants and testing vendors foaming at the mouth to commoditize, package up, and volume-discount testing “best practices”. But when you start to peel back the layers and get into the details, ambiguity abounds and the models quickly fall apart. The hard work to actually “know something” hasn’t been done. What everyone who understands anything about good testing will tell you is that it is not the practices themselves that make you successful - it is knowledge of their skillful application.
And that knowledge requires hard work and practice to obtain.
“True ignorance is not the absence of knowledge, but the refusal to acquire it.” - Karl Popper