Clarity is Courage

When software testing is done right by skilled people, it’s an act of care. The way we talk about testing shapes how organizations perceive risk and affects the decisions they make about quality. So it’s our responsibility to ask the hard questions on behalf of project stakeholders, the business, and society.

In enterprise tech, companies perform a form of confidence theater, using their size, reputation, and market share to create the illusion that their tools are mature, safe, and, increasingly these days, inevitable. Testers have always faced a power imbalance when working on these systems, but today, questioning the narrative can feel like questioning authority itself.

And that’s why clarity matters.

We’re living through another moment in history where the biggest players in tech are selling stories with words like “intelligence”, “reasoning”, and “agency”, qualities that AI systems do not and cannot possess. In my opinion, seeking clarity has always been an ethical imperative in holding technology accountable to the claims made about it. And for as long as I’ve been in this business, hype and enthusiasm have regularly been mistaken for, and rewarded as, insight, so seeking clarity is not always easy.

Clarity takes courage.

Software testing is intellectual, context-driven work, not just rote verification. My friends James Bach and Michael Bolton have always said that our tools should extend the reach of testers and support human judgment, not replace it. So when you ask hard questions or challenge claims made about AI in testing, you’re not being negative; you’re seeking clarity about both the content AND the medium, as both affect how we describe and think about our tools.

I love this article by Nanna Inie and Emily Bender, “We Need to Talk About How We Talk About ‘AI’”, about how the language around AI routinely (and intentionally) frames the discussion as though machines think, decide, and make mistakes like humans. They argue that we need to clean up the language that shapes our thinking about AI, because that language isn’t just wrong, it’s dangerous: it masks the limitations of these systems, invites misplaced trust, and erodes accountability.

“A more deliberate and thoughtful way forward is to talk about “AI” systems in terms of what we use systems to do, often specifying input and/or output. That is, talk about functionalities that serve our purposes, rather than “capabilities” of the system. Rather than saying a model is “good at” something (suggesting the model has skills) we can talk about what it is “good for”. Who is using the model to do something, and what are they using it to do?”

Courage in testing, then, is not about being combative for its own sake. It’s about recognizing and calling out unverifiable claims, highlighting unintended consequences, and putting the lie to the structural “pressures” that large tech narratives would impose on society.

This is why in software testing, clarity is courage.
