“The scientist has a lot of experience with ignorance and doubt and uncertainty, and this experience is of very great importance, I think. When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty damn sure of what the result is going to be, he is in some doubt.” – Richard Feynman
I recently had the distinct privilege of watching an expert tester at work. I wouldn’t call this person a “test manager” or “test lead”, even though what they were doing would probably be categorized as an activity associated with both of those roles. No, I would honor them with the title of expert tester – someone using all of the knowledge and skill developed through years of practicing their craft. And they weren’t even testing software; they were testing ideas. Testing assumptions. Testing people. Testing themselves. It was a thing of beauty.
In my almost 20 years of working in this field, I’ve met only a handful of people who could have been dropped into this scenario and come out the other side alive. I didn’t have a scrap of paper to give this person and had not had a single discussion with the project team yet. No preparation. No context. Nothing. Just a room full of COOs, quants, and extremely senior business sponsors all looking for someone who could help them. The elegance and poise they demonstrated quickly punctured the tension in the room, and they were free to set about their work – asking excellent questions and not stopping until they were satisfied.
What struck me the most after the meeting was the feeling of excitement and exhilaration from everyone in the room. In my foolish worry that I had set everyone up for failure, I had forgotten one of the hallmarks of an expert tester: they are comfortable with complexity and ambiguity. They thrive on it. And they don’t need “best practices” or a two-inch-thick manual on how to get the job done. Their weapon of choice is their mind.
And therein lies the difference between the prevalent “best practice” and insurgent “skilled testing” communities. In my experience, the people who espouse “best practices” are the ones who do the least learning and practicing of their craft. A skilled tester is comfortable with ambiguity and does not hide behind it or disguise meaning through nonspecific language. People obsessed with “best practices” look toward quantification as a way of improving numerical efficiency, and use words like “smug” to describe those who feel that “Not everything that you can measure matters, and not everything that matters can be measured.”
“I know what it means to know something”
This approach is similar to what Nassim Taleb describes as the “ludic fallacy” – incorrectly applying statistical models where they don’t belong – and I believe it leads the “best practices” crowd into a state of survivorship bias. In that state, despite a complete lack of evidence and scientific analysis, you want to believe that your strategy and practices were the reason you were successful – simply because you didn’t fail. We are consistently “fooled by randomness” into thinking that we actually know something, and can apply standards and practices without context or skill and get a better result. Wishful thinking is a powerful force.
This has been the prevalent approach to software testing for the last decade or more, and it fuels the hordes of consultants and testing vendors foaming at the mouth to commoditize, package up, and volume-discount testing “best practices”. But when you start to peel back the layers and get into the details, ambiguity abounds and the models quickly fall apart. The hard work to actually “know something” hasn’t been done. What everyone who understands anything about good testing will tell you is that it is not the practices themselves that make you successful – it is knowledge of their skillful application.
And that knowledge requires hard work and practice to obtain.
“True ignorance is not the absence of knowledge, but the refusal to acquire it.” – Karl Popper
Nice and clear post. I particularly like the notion of the prevalent “best practice” and insurgent “skilled testing” communities.
Cannot beat that, ideal for testers.
From the relentless and incessant bashing and mocking of so-called “best practices” on Twitter and in the context-driven blogosphere, I have come to understand that this concept is the enemy. How will I recognize this demon when I see it?
I have actually never seen the concept properly defined. I guess it refers to something like the ISTQB syllabus, Tmap, maybe even XP and Scrum.
Before I started following the context-driven community, I used the term “best practice” more or less synonymously with “heuristic”, “pattern”, or “template for doing stuff” in a broad sense. Something to get you started until you know what you are doing. Are ISTQB and Tmap really claiming to be the only way to do software development? You can argue that these methodologies have problems, have the wrong focus, are outdated, etc. But are there really no contexts where they can provide value?
Thank you for your comment; I think it highlights one of the points I was trying to make: specificity in language matters. “Best practice” is not synonymous with any of the terms you listed, and it seems to me it is often used interchangeably by people who benefit from ambiguity. Testers understand and are comfortable with ambiguity, to the extent that one of our main missions is to provide clarity!
We should be continually seeking clarity through questioning our approach, motives, and mission. I don’t believe that either the Tmap or ISTQB folks claim to be the ONLY way to test software, and I haven’t heard anyone from the CDT community claim they are. I would also challenge you to answer your own question with a healthy dose of critical thinking: would you advise someone to use a process that has problems, the wrong focus, is outdated, etc.?
Great article and an interesting concept. As a tester, this is definitely something to aspire to, but therein lies the problem. How can I get there as a tester? I’m carefully going to skirt around “What’s the best way to get there?” for obvious reasons, so I’ll ask a little differently. What are the springboards for digging deeper into these ideas?
One of the qualities I found interesting was “being comfortable with complexity.” What’s the thought process behind this, and how do we settle our minds into being comfortable with complexity? I’m interested in being able to start thinking along these lines, but I’m really unsure of how to approach this concept. I’d like to see how others approach it, maybe not from a procedural angle but from a philosophical one. Any good resources for this?
Second was the notion of testing ideas. This too is a fascinating concept that I’ve been hearing quite a bit about over the last couple of years, but again, I’m not sure how to go about shifting my practice toward that type of thinking. Similar to my previous question, any good philosophical resources for this?