The Confidence Game – What is the Mission of Testing?

Doubt is not a pleasant condition, but certainty is absurd. – Voltaire

Maybe it’s because I extend my tendency towards skepticism to myself, but I get really uncomfortable telling anyone that something is certain. That is especially true when it comes to software and interpreting the results of testing. There are just too many variables that impact the control and validity of the output, and that’s limited to what we can know – let alone the things we don’t know! The great “unknown unknowns” loom in the shadows, waiting to rear their heads, question our approach, and shake our confidence.

By definition, confidence is the quality or state of being certain. It’s knowing that something can be proved true, and it is a by-product of actions taken in the process of acquiring that proof. Christopher Chabris and Daniel Simons conducted a famous experiment on inattentional blindness. In their book The Invisible Gorilla, they posit that we should be very unsure of what we are certain we know, and that our confidence or intuition can often mislead us. The idea of questioning the origins of our confidence is also echoed in Blink: The Power of Thinking Without Thinking by Malcolm Gladwell, and Thinking, Fast and Slow by Daniel Kahneman.

So what does that have to do with the mission of testing? It is extremely important that testers understand and adhere to their mission, as replacing it (either willfully or unintentionally) would be directly fogging the headlights on your project. So should the mission of testing be to give confidence? I don’t believe it should. I would agree with my friend Michael Bolton that making “confidence” your mission in testing is akin to goal displacement – substituting your objectives with ones that suit your means rather than the end.

I believe the mission of testing is gaining information, but here are some sharper definitions for your reference:

  • Testing is a process of technical investigation, intended to reveal quality-related information about a product (Cem Kaner)
  • Testing is questioning the product in order to evaluate it (James Bach)
  • Testing is gathering information with the intention of informing a decision (Gerald Weinberg)

So what is the problem with making confidence the mission of testing? Shouldn’t we want to have confidence in our products? Isn’t it a good thing to have confidence in our testing? Of course we want confidence in our products and testing, but if you make gaining that confidence your mission, in my opinion, you are intentionally adding confusion to the decision-making process. Aside from aiming at the bull’s-eye on the wrong target, testing for confidence is a slippery slope to ill-informed decisions and misuse of metrics, and a ready candidate for confirmation bias.

Testers should be constantly vigilant against all forms of bias, but especially confirmation bias. Making confidence your mission guarantees you will be seeking information to give your stakeholders certainty – instead of information that should give them pause for thought. Every tester has at times been subject to the “Curse of Cassandra” – giving a valid warning that is not heeded. But nothing will put you permanently in that place quicker than having things go wrong after you’ve not only provided information to stakeholders – but have made a value judgement on their behalf!

Some may view me as overly skeptical. That’s fine. But I would rather err on the side of caution (and humility) when seeking information for my stakeholders. It’s up to them to decide what to do with what I give them – objectively finding it is hard enough without attempting to gain credibility through inappropriate means. So when someone tells me they’ve been asked to give confidence through testing, my simple advice to them would be this: stick to the mission.

7 thoughts on “The Confidence Game – What is the Mission of Testing?”

  1. Interesting post as always. In a previous job we used to have “earned confidence” as part of the reports – as the high-risk tests were run and passed, the confidence score increased. I was a junior consultant at the time and it all seemed sensible; are you saying this is not the case? The main mission was to run the high-risk tests first and find the important defects quickly – the “confidence” numbers were a by-product of this, not the main mission. Are these metrics misleading?

    I also had a customer ask for my opinion on the state of the product – he was getting daily reports full of figures of tests passed, reqt coverage, defects closed, blah de blah, but what he really wanted was a gut check from someone who knew the product inside out (me). Should I be giving him my gut feel or not?

    • My answers are yes and no. Yes, those metrics are potentially misleading and possibly harmful when they are being used as surrogates for more meaningful information. You would not release based entirely on your “earned confidence” number, and running your high-risk tests first does not necessarily mean you will find the important defects quickly. That would be similar to what Nassim Taleb would categorize as a “ludic fallacy”, or “the misuse of games to model real-life situations”.

      And my answer is also no – I don’t think you should be giving him your gut feel unless you outline all the biases, variables, and constraints that are forming your opinion. If the mission of testing is to provide objective information to stakeholders, offering your opinion would technically be out of scope. Now, very often testers are asked their opinion, as they know the system inside and out, are skeptics, and have a pragmatic view of the process that built the product. That’s a very valuable perspective, but I’ve heard it said that NOT giving your opinion is the hardest testing skill to master.

  2. The information you provide me through your testing enables me to make informed decisions. This makes me feel more confident about my decisions. Your testing gives me confidence! No? 🙂

  3. I believe testing missions are a lot more specific in reality (and hopefully they are communicated).
    Some of them could include building confidence, perhaps in implicit missions like “testers should try to destroy the confidence in the product (and if they fail, confidence is built)” or “testers add quality to the information about the product, building confidence in decisions”.
    Or even better, with some details that are useful: “We want to be confident that the server can handle standard load and rush hour load as defined in TrafficInvestigation.pdf”.

    Sometimes “the mission of testing is to provide objective information to stakeholders”, and sometimes the specific mission also includes things like “besides accurate objective observation summaries, we want to hear your subjective opinions on whether the product also seems to meet our vague non-metric quality objectives.”

    Of course we should be aware of confirmation bias, and the risk of saying what they want to hear; but I would still like stakeholders to express what they want (so I can give them what they need).
    And if a word is ambiguous, I hope I have the chance to ask: “what do you mean by confidence?”

    • Rikard (and Saam),
      The information a tester provides should of course be accurate or at least not misleading, but that’s implied no matter what the mission statement is in my mind. Stakeholders should feel confident in the quality of the information, but the testing mission in itself can’t be about building confidence without having confirmation bias (and probably other biases as well) enter into the picture.

      The mission example you gave of stakeholders wanting to feel confident that the system could handle a certain load sounds like a design mission to me. From a testing point of view, the mission would translate to “Gather information that would let our stakeholders determine whether they should feel confident or not.” Subtle difference perhaps, but I think it makes a difference when combating biases.

      PS. I’m certainly not opposed to the idea of providing subjective information. Data needs interpretation to make sense, imo. I just think it’s risky to have a confidence building/destroying mindset when gathering data. On the other hand, I recognize that I probably unconsciously enter into most testing situations wanting to find problems… Then again, this is what makes testing an interesting challenge.

  4. Pingback: The Confidence Thing | Tester Vs Computer

  5. Pingback: Perspectives on Testing » The Seapine View
