BrowserStack Talks: The Future of Software Testing

Had a great time talking to David Burns on the BrowserStack Talks Podcast (Spotify, Apple) about all things testing in the age of AI – enjoy!

The Future of Software Testing

Keith Klain on AI, Risk, and Critical Thinking

Will AI replace software testers, or just change the game? In this episode, industry veteran Keith Klain joins us to dive deep into the intersection of Artificial Intelligence and Software Quality Assurance.

Key Topics Covered:

  • The AI Reality Check: Why skepticism is a tester’s best friend.
  • Risk Management: How to handle AI drift and algorithmic bias.
  • Human-Centric Design: Maintaining control over testing processes.
  • Beyond Automation: Why AI tools need human agency to be effective.

Testing Shapes Reality

There’s a conversation about software testing we need to have despite the noise and hype around AI productivity gains, coverage expansion, and “AI-assisted quality”:

What kind of reality are we shaping when GenAI writes our tests?

As I’ve said before, testing isn’t neutral, and when, under the guise of progress, we outsource testing to systems that don’t understand intent, context, or consequence, we introduce new risks that look like progress until they aren’t.

What we choose to test (and not to test) helps determine what we believe to be true, and our choices become part of the mental model the business uses to make decisions. As projects evolve, those beliefs harden into assumptions with embedded risks.

Our approach to testing shapes reality.

But because most tests reflect only assumptions rather than risks, the reality testing shapes about system quality is often inaccurate and generates unwarranted confidence. And confidence should be fragile; it should be grounded and bound by objective reality.

Good testing, then, is not about confidence. It’s about a continual re-alignment with reality to inform your business decisions. Additionally, volumes of shallow checks reduce a team’s ability to notice signals of systemic failures, which should further erode confidence but typically has the reverse effect.

Now add to that equation the mother of all automated-test-generator-algorithmic-defect-predict-O-nators and I fear where we’re heading…

In the era of continuous everything and automate-first, we generate more tests than ever, but I still find very few test teams that understand what reality their tests represent or whether they’re still meaningful. When tests accumulate faster than teams can understand them, fragility increases within our tests, systems, and teams.

A useful metaphor I use when talking to management about testing is comparing it to a lens. A lens can focus your attention on one thing and blur other details, but ultimately it’s a way of looking at the system you’re building and what matters to your business.

When test generation becomes cheap and abundant, the shaping force of those tests – the lens through which we view quality – increases dramatically.

What I’m seeing from most of the marketing and demos for GenAI testing tools is a lens that focuses volumes of tests on what’s easy to see while blurring the importance of human experience: user workflows, organizational incentives, operational stress, and socio-technical interactions.

If your organization is considering (as most are) using GenAI to generate tests, here are some considerations I believe should be at the top of your evaluation process:

  • Do they create tests explicitly and visibly tied to user harm, business impact, operational failure, or regulatory exposure?
  • Do they build entropy into their tests, with assumptions that expire and confidence that decays over time unless renewed by new evidence?
  • What agency do testers have over test selection and execution in relation to coverage and analysis?

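To make the second consideration concrete, here is a minimal sketch of what built-in “entropy” could look like (the class, the decay model, and the 90-day half-life are all hypothetical illustrations, not any vendor’s feature): each generated test carries a confidence score that decays over time unless renewed by new evidence.

```python
from dataclasses import dataclass

@dataclass
class GeneratedTest:
    name: str
    confidence: float = 1.0   # belief that the test still reflects a real risk
    age_days: float = 0.0     # days since that belief was last renewed

    HALF_LIFE_DAYS = 90.0     # assumption: confidence halves every 90 days

    def current_confidence(self) -> float:
        # Exponential decay: unrenewed assumptions are worth less over time.
        return self.confidence * 0.5 ** (self.age_days / self.HALF_LIFE_DAYS)

    def renew(self, evidence_weight: float) -> None:
        # New evidence (a caught bug, a reviewed failure) resets the clock
        # and restores confidence, capped at 1.0.
        self.confidence = min(1.0, self.current_confidence() + evidence_weight)
        self.age_days = 0.0

def needs_review(tests, threshold=0.5):
    """Flag tests whose decayed confidence has dropped below the threshold."""
    return [t.name for t in tests if t.current_confidence() < threshold]

suite = [
    GeneratedTest("login_happy_path", age_days=180),  # two half-lives stale
    GeneratedTest("checkout_risk", age_days=10),      # recently renewed
]
print(needs_review(suite))  # → ['login_happy_path']
```

The point isn’t the math; it’s that a test’s claim about reality should have to earn its keep, rather than passing green forever on an assumption nobody has revisited.
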
We’re just at the beginning of a new chapter in the long story of software quality and testing, and I believe that how we frame the problem and how we view reality will be a key differentiator in the success or failure of our business.

Testing has always shaped reality, and GenAI has the potential to have a powerful warping effect on that view and the propensity to move risk to areas we no longer look at due to volume, noise, and over-confidence from our leaders and vendors.

Daniel Kahneman famously stated, “Overconfident professionals sincerely believe they have expertise, act as experts, and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.”

Clarity is required to dispel that illusion.

Clarity on intent, function, and the reality we’re shaping.

The Great Liberation Part III – Where Do We Go From Here?

Keith Klain - QMC

“The masters are liable to get replaced because as soon as any technique becomes at all stereotyped it becomes possible to devise a system of instruction tables which will enable the electronic computer to do it for itself. They may be unwilling to let their jobs be stolen from them in this way. In that case they would surround the whole of their work with mystery and make excuses, couched in well-chosen gibberish, whenever any dangerous suggestions were made.” – Alan Turing

“Free your mind, and the rest will follow” – En Vogue

In my Great Liberation series (Part I, Part II), I argued that the way AI is being rapidly injected and measured in software testing, a discipline that profoundly shapes modern life, has moved to borderline recklessness. Banking, healthcare, transport, education, and government infrastructure all run on software we need to be reliable, and AI adoption is racing ahead at a pace human judgment can’t keep up with.

This matters because AI is being treated as a substitute for critical thinking and responsibility rather than a tool that demands more of both. Poorly tested systems cause real harm, and when organizations replace human evaluation with automation in the name of speed or cost savings, they increase risks to their business and society.

Despite what you may have heard or read, AI in testing is not neutral or low risk. The dream being sold, of closed-loop systems validating themselves with minimal external oversight, will make failures more systemic, less visible, and harder to correct. More than ever, we need to be pushing back against the hype cycle selling AI adoption as inevitable, discouraging skepticism, and trading on FOMO.

But unfortunately, the software testing industry has failed to meet the challenge.

I believe part of the problem is that test tool vendors are increasingly led by private-equity and investment-firm CEOs with no testing experience or pedigree, pushing AI-driven testing while insulated from its consequences. Look at the Gartner Magic Quadrant for AI-Augmented Software Testing Tools and you will struggle to find tester-led companies that position testing as a risk management activity rather than a magic bullet for cost reduction.

Moral hazard has all but left our business because the people who run it have never lived with the consequences of being responsible for ACTUALLY testing something. And if you think that’s harsh, look at the claims they make about their products:

“AI-native systems eliminate the need for human-created models or manual test maintenance”

“AI-powered automation dramatically increases speed and coverage/automates everything”

Most vendors don’t even get that specific and just spew AI buzzwords like they’re going out of style: “AI solves every testing challenge,” “no human needed,” “fully autonomous defect prediction,” or “100× productivity.” My point is that this isn’t predetermined. It’s a choice, and frequently these days a choice being made for us, but how we test and deliver systems reflects our values – and we should value safety, transparency, and fairness.

But fairness cannot be achieved with unverifiable claims lacking credibility or grounding in reproducible evidence, something even Gartner warns is common in GenAI vendor marketing. In The Pursuit of Fairness in Artificial Intelligence Models, the authors argue that fairness, accountability, and transparency in AI are not automatic but require deliberate design choices, ongoing evaluation, and human oversight.

To get there we need clear-eyed views on the technology’s challenges and risks, and we cannot afford AI testing “thought leaders” making nonsensical claims about AI testing software better than humans, redefining accepted terminology to suit their marketing needs, or presenting hypothetical concepts with no evidence, especially as I think it’s quickly going to get a lot harder for testing.

A recently released paper dubiously co-authored by an AI (Claude, Anthropic), “Toward Robopsychology: Evidence of Emergent Behavior in Human-AI Partnership”, presents some interesting challenges for quality engineering. In short, the paper argues that when AI systems interact with us, both sides generate behaviors that aren’t completely predictable from either the AI’s original design or our actions.

Field of psychology or not, this study puts some shape around new challenges for software testing: verification of systems where human behavior and AI behavior adapt to each other, metric validity for emergent behavior, and the threats the human brings to “human in the loop” governance for automation and regression testing.

So where do we go from here?

Probably not down, except in headcount, even though companies as big as Salesforce, which did a massive layoff under the guise of AI productivity, are scaling back their LLM use after facing repeated reliability issues. Per MSN, Salesforce’s “confidence in generative AI has fallen sharply over the past year”, but I don’t think there are any signs of slowing down in the general integration of AI into everything, just more of a pause.

And it doesn’t look like government oversight is coming any time soon, with core parts of the much-lauded EU AI Act likely kicked down the road to 2027. In another capitulation, the European Commission seems to be following in the US’s footsteps by throwing the environment and high-risk AI implementations out of the window in, as Reuters puts it, “an attempt to cut red tape, head off criticism from Big Tech and boost Europe’s competitiveness.” In 2022 I was complaining about the lack of oversight on an industry so far out in front of regulation for AI, but apparently it wouldn’t have made much of a difference anyway.

When I started this series it was a play on AI being the “great liberation” for companies to finally rid themselves of the responsibility of testing and all those painful testers. I assumed (correctly) we’d find useful partners in the software testing industry and a regulatory environment fully on board with removing constraints and guardrails in the name of innovation.

So in times like these, your implementation strategy is more important than your technology. We see more than ever now that AI is not just a tool; it is a choice. It’s a statement on what we care about. How it is adopted, governed, and tested tells us a lot about what organizations, and the people who run them, value.

As I’ve said before, in this brave new world your test strategy should focus on:

  • Algorithm transparency and explainability
  • Automated decision-making controls and human in the loop interventions
  • Governance for training data and model drift with measures for continual monitoring
  • Audit trails for traceability on workflows, decisions, and agency

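As an illustration of the audit-trail bullet above, here is a minimal sketch of the kind of record such a trail might capture per AI-assisted decision (all field names and values are hypothetical, not drawn from any standard or product):

```python
import json
from datetime import datetime, timezone

def audit_record(workflow, decision, actor, model, inputs_digest, human_reviewed):
    """Build a traceable record of one automated testing decision.

    Captures who or what made the decision, on which model version,
    over which inputs, and whether a human was in the loop.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,             # e.g. "regression-suite-selection"
        "decision": decision,             # what the system decided
        "actor": actor,                   # "model" or a human identifier
        "model_version": model,           # pin the model for reproducibility
        "inputs_sha256": inputs_digest,   # digest of the inputs, not raw data
        "human_reviewed": human_reviewed, # was a person in the loop?
    }

record = audit_record(
    workflow="regression-suite-selection",
    decision="skip suite 'payments-legacy'",  # hypothetical decision
    actor="model",
    model="test-gen-v2.3",                    # hypothetical model name
    inputs_digest="3f2a...",                  # placeholder digest
    human_reviewed=False,
)
print(json.dumps(record, indent=2))
```

However the fields are named, the test is the same: can you reconstruct, after the fact, which system made which call, with what agency, and who signed off?
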
As 2026 shapes up to be another wild AI ride, a principled, transparent approach to using these tools in testing led by values, free from buzzwords, and grounded in solid testing practices and experience is the science that will see us through this, or as the good Dr Feynman put it, “the belief in the ignorance of experts.” Good luck and enjoy!

The Great Liberation Part II – Measuring What Matters

Keith Klain - QMC

What can be asserted without evidence can also be dismissed without evidence. – Hitchens’s Razor

One of the biggest mistakes organizations make when building and testing systems is to measure them badly and then make decisions based on faulty and invalid metrics. Software testing has a long, rich history of goal displacement and metric validity problems when it comes to measurement, and GenAI evaluation is currently going through that same crucible.

Despite there being an abundance of metrics to look at model characteristics and performance: BLEU, ROUGE, LLM-as-a-judge scores, or human preference ratings, what’s missing is any confidence that these metrics actually measure what matters.

I love the book “Measuring and Managing Performance in Organizations” by Robert D. Austin, and it’s my go-to reference when talking about validity problems in software testing metrics. Austin wrote about this decades ago: when performance measures are shallow proxies for your objectives, people optimize for the measure, not the outcome.

The same principle applies to measuring AI models. If you give a system a metric, it will learn how to win that metric whether it creates value or not, and human evaluation is required to know the difference.

This point is echoed in a paper published by OXRML, Measuring What Matters, which reminds us that metrics are not reality; they are models of reality, and like all models, they’re wrong but some are useful. They also point out that precision is not the same as relevance, as a precisely measured wrong thing is still the wrong thing, or what I like to call: hitting the bullseye on the wrong target.

Most of the GenAI measurement models I’ve reviewed or had demoed for me also seem to be making the same mistakes we make in software testing by confusing measurement with understanding. I am going to be writing and speaking a lot more about this next year, but here are my thoughts so far on what I’ve seen and used for GenAI model evaluation.

  • Popular evaluation approaches like BLEU or LLM-as-a-judge feel like counting lines of code or test cases. Big numbers seem good, but they tell you very little about risk, and just as more tests don’t guarantee fewer defects, higher semantic similarity doesn’t guarantee a better answer.
  • Metrics requiring a golden “correct” output remind me of my experience with decades of test automation frameworks optimizing for green lights and consistency instead of correctness, potentially missing all the nuance in observation.
  • Another similar mistake I see is measuring for form, which has the same effect as counting how many bugs were “closed” rather than how many were fixed. A hallucination can score better than a correct answer, which aligns perfectly with Daniel Kahneman’s point about the power of storytelling in Thinking, Fast and Slow.
  • Most evaluation models like to use some sort of aggregation for scoring, but my concern is that averages can hide risks, a problem outlined perfectly in How Complex Systems Fail. Safety issues are masked by thousands of passing tests and the system looks stable right up until it isn’t.

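The aggregation concern in that last bullet is easy to demonstrate with a toy example (the scores are invented, not from any real evaluation): a single catastrophic failure disappears inside a healthy-looking average.

```python
# Toy evaluation scores for 1,000 model responses (hypothetical data):
# 999 responses score well, but one safety-critical response fails outright.
scores = [0.95] * 999 + [0.0]

mean_score = sum(scores) / len(scores)          # the number dashboards show
worst_score = min(scores)                       # the number that matters
serious_failures = sum(1 for s in scores if s < 0.2)

print(f"mean score:       {mean_score:.2f}")    # looks healthy (~0.95)
print(f"worst score:      {worst_score:.2f}")   # the real story: 0.00
print(f"serious failures: {serious_failures}")  # → 1
```

An average of roughly 0.95 and a minimum of 0.0 describe the same suite; only one of them tells you the system can fail catastrophically.
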
Despite the testing industry taking a hard turn away from human evaluation, in my opinion it’s going to be a competitive advantage, though it doesn’t necessarily solve these problems. Human testing suffers from the same ambiguity, bias, and inconsistency and requires oversight and careful consideration when deploying systems.

While the GenAI measurement market matures, my advice right now would be to use multiple metrics, each tied to a specific question, and validate your tests and metrics against real outcomes, not just benchmarks. A healthy dose of humility in our business would do wonders as well, so as not to repeat the mistakes of the past in failing to deliver ROI to our business due to invalid software testing metrics.

Uncommon Leadership Podcast

I had a great time discussing leadership in enterprise tech with Michael Hunter for his Uncommon Leadership interview series.

I’ve managed teams all over the world of different shapes and sizes and rarely get to talk about it, so I appreciated the opportunity to discuss continually learning and improving my approach to leadership.

We spoke about bringing your whole self to work, career management vs stewardship, building (and changing into) cultures that empower people, and how to help people navigate uncertain times… enjoy!

References in the discussion: EuroSTAR NSTC Pay360 KPMG UK The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully Rethinking Expertise The 48 Laws Of Power 

Listen on Spotify
Listen on Apple

Agile on the Beach 2026

Very excited to announce I’ll be speaking at Agile on the Beach in July next year. AOTB is a conference I’ve admired for a long time, and I am honored to finally attend and, as well, to talk about a subject that is close to my heart and one I have never spoken about in public before: ethics in technology.

Hope to see you there!

When the Whistle Blows: An Ethics Experience Report

Juvenal said “Honesty is praised and starves”, and nothing could be more true when it comes to whistleblowing.

Reporting unethical behavior is an incredibly stressful experience that is seldom rewarded and often comes with adverse personal consequences. Over the course of my 20+ year career in enterprise tech, I’ve been involved either directly or as a manager with multiple whistleblowing incidents as the result of organizational ethics complaints.

Through this talk I will present experience reports on some of the most serious cases I’ve been party to, including the process, lessons learned, and what I would do differently. So join me as I share the most challenging events that nearly broke me, but were always career-defining moments of my life.

Old Biased Wine, New AI Skins

Putting together a good conference program is hard. Ensuring the topics are relevant and attracting talented speakers that people want to hear is only further complicated by the commercial aspects of covering costs and turning a profit for the organizers.

But a couple weeks ago, I happened to be walking out of the same conference talk as Richard Bradshaw, and we ended up having a chat about how we seem to be slipping back into “male only” line ups for not just keynotes but also track talks.

This idea that the “best” talks are what’s represented at conferences (as a lot of us just experienced) is just BS. As a veteran attendee, speaker, and selection committee member at many, many tech conferences, I can assure you that in my experience and opinion, bias and preference play a great part in who you see on that stage.

Earlier this year I gave up a speaking slot on a male only AI panel, which is particularly annoying as all the important research and hard work in AI ethics is being done by women. I get that striking the right balance between topics and underrepresented communities can present difficult choices for conference committees, but we can do a whole lot better than that.

I’ve written before about how little of my life experience has been down to things I can control, like effort and hard work, while the rest has just been dumb luck and privilege. Because of that, I am completely comfortable with limiting/rejecting speakers on gender/race and think the good it accomplishes far outweighs the occasional bruised ego.

Personally, I am unabashedly inclined to give preferential treatment to new speakers or underrepresented communities and feel that, although they are commercial ventures, it is the responsibility of conferences to have a social conscience and that we should be giving our money to organizations that recognize those two principles are not at odds.

So, although it might seem like progress has been made, even some of the best diversity programs have failed or been cancelled outright in this new climate, and this needs our endless vigilance.

Unless you see this issue as just politically correct cosmetics and not one of social responsibility or sound business sense, increased diversity in who we see at tech conferences is more than justified.

Test as Transformation – AI, Risk, and the Business of Reality

“If the rise of an all-powerful artificial intelligence is inevitable, well it stands to reason that when they take power, our digital overlords will punish those of us who did not help them get there. Ergo, I would like to be a helpful idiot. Like yourself.”
Bertram Gilfoyle

A while ago, I wrote a series of posts on how testing can support your business through a “transformation”, either through disruption, optimization, or by managing expectations around risk and what testing can do to help navigate digital transformation. As GenAI has stampeded into every aspect of technology delivery and innovation, I thought it only appropriate to add another entry into my “Test as Transformation” series.

I’ve always contended that delivering technology value to your business will either be constrained or accelerated by your approach to testing, and adding AI into that mix is like injecting it with steroids. By layering black boxes into everything from customer service to decision support, testing has rapidly become one of the most important sources of business intelligence a company has and a true market differentiator.

Testing has always been about information – learning things about systems we didn’t know and confirming things we thought we did, but there are new risks AI presents for your business I’d like to highlight that testing should address.

Regulatory and Oversight Risk

Testing (when done right) should bring insights into risk, anticipate issues, and surface patterns, which is even more valuable in an AI-supercharged competitive landscape. As the AI regulatory environment continues to evolve due to technology changes and what seems like capitulation to big AI firms, we need to adapt to this “emergent” regulation and its yet-to-be-revealed enforcement.

Testing has always been at the heart of regulatory oversight, and I’ve spent a career in banking writing responses to regulators and answering internal and external audit questions. We need to be prepared for new AI oversight and evidence for:

  • Algorithm transparency
  • Automated decision-making controls
  • Governance for training data and model drift
  • Audit trails for traceability on workflows

Having the wrong strategy for testing AI-enabled systems can introduce risk to your business through biased, unsafe, or inaccurate outputs that may violate new or existing laws as well as industry standards like the EU AI Act, GDPR, or new product liability laws. I believe we are shortly going to be litigating harmful outcomes, privacy breaches, and misleading claims, all of which could be triggered by audits exposing weak governance and inadequate risk management.

Test Optimization Risk

The claims of test optimization and efficiency gains through automation are nothing new to the testing industry. Shallow automation-based checks have been the ROI currency of “modern” testing but just lead to overconfidence and, very frequently, larger systemic failures. Transformative testing must guard against the usual set of automated business risks and now address AI enablement risks like:

  • AI-assisted test generation missing coverage faster and more broadly
  • AI-powered code analysis injecting systemic patterns of fragility
  • ML-based anomaly detection clouding operational visibility
  • AI-driven customer behavior insights creating misleading test conditions

Managing the delta between AI-powered checks and human-led tests addresses the dangerous belief that AI automation equals AI quality. And just like traditional digital transformation, culture and incentives drive behavior that increases the threats to value in your business. Test automation failed to deliver on the core of its promised ROI, and AI-enabled testing frameworks only seem to double down on familiar tropes:

  • AI will remove all experiential testing
  • AI will find and remediate all defects
  • AI can replace testing
  • AI systems “learn on their own” without oversight

Testing now needs to validate not only functionality but also the non-deterministic behavior of AI systems to ensure fairness, transparency, bias mitigation, and regulatory compliance, and it cannot rely on traditional processes.

If testing is to survive an AI transformation, it means helping redefine how the organization operates, delivers value, and competes. AI can expand that definition by introducing new automation capabilities, intelligent decision-making, and previously impossible efficiencies. AI magnifies both the opportunity and risks associated with the mission of testing but must be paired with human judgment, domain expertise, and systems thinking.

We’re at the start of a global AI transformation and testing has to move from a supporting activity into a strategic information engine to keep your business competitive.

User Error

The tragedy of Adam Raine’s death by suicide at age 16 this year should have shocked the world into a renewed focus on accountability in Silicon Valley, but instead we heard the familiar excuse from the no-man’s land of moral hazard – user error.

In its first court filings, OpenAI defended itself by victim-blaming. “Misuse” and “unforeseeable use” are cited even though, according to the parents’ claims, ChatGPT basically gave him instructions for self-harm, including offering to help write the suicide note.

Using legalese and technical “terms and conditions” arguments to defend jeopardising a vulnerable teen only further lays bare the morally hollow position the company has taken. GenAI is an incredibly powerful tool that must come with responsibility instead of marketing gimmicks – we now see what’s at stake.

And not knowing is no longer an excuse.

The research paper The Illusion of Thinking, published by Apple earlier this year, is a pretty chilling read in this context. We know that as the complexity of problems hits a threshold, the models basically collapse: their accuracy drops dramatically and their “thinking” effort declines.

That’s not just a problem for automating a business workflow with AI, it speaks directly to the complex problems being fed into these models by everyday people. People who struggle with depression, loneliness, and other mental health issues requiring empathy, nuance, and a depth of humanity not possible with GenAI.

Further to that, in 2023 the G7 countries agreed to implement a code of conduct for developing “Advanced AI Systems” including “risk-based design, post-deployment safety, transparency about limitations” and accountability when things go wrong, because these systems require more than a ToS like an app that plays music on your phone.

The point is, we know how to do better and are on the familiar path of letting big tech off the hook on the grounds of “user error”, which ultimately treats a teenager’s tragic death as just a design flaw.

IEEE Report: How IT Managers Fail Software Projects

“Few IT projects are displays of rational decision-making from which AI can or should learn.”

This report by Robert Charette for IEEE Spectrum on “How IT Managers Fail Software Projects” is just fire and should be required reading for anyone looking to reuse or “train” their existing management into a crack team to deploy AI into their systems.

If you’ve been working in the software development business for any length of time, most of what’s in this report won’t come as a surprise: poor risk and investment management, bad process, shoddy leadership, and a near endless ability to refuse to learn from past mistakes. And we keep handing the keys back to the same people that drove the last car into the ditch!

It’s a well-researched piece and I would recommend you spend the time reading it, because if we know anything about most software projects, it’s that they seldom learn from their mistakes and AI will only increase the risk and opacity!

He lays it out plainly here:

“Frustratingly, the IT community stubbornly fails to learn from prior failures. IT project managers routinely claim that their project is somehow different or unique and, thus, lessons from previous failures are irrelevant. That is the excuse of the arrogant, though usually not the ignorant.”

Ouch!

Agile Testing Days – 2025

It’s been a few years since I made the familiar journey to the Dorint hotel in Potsdam, the annual home of the Agile Testing Days conference, and I was very happy to be back in 2025. Personally, I was excited to be there just to hang out with some friends with whom, due to global distribution and the pandemic, I’ve not had more than an online relationship in recent years.

But this was more than a holiday: I attended quite a few sessions, listened to a LOT of talks, and had some pretty serious conversations with folks about the future of our business as, yet again, the software testing industry I love and, more importantly, the people who work in it are threatened by what feels like kids playing with toys in AI slop.

But first the highlights for me… (all the videos should be posted by ATD on YouTube)

Elizabeth Zagroba and James Lyndsay gave an incredible keynote, Testing Transparently, where they actually tested something live, which is a rare and courageous thing to do, including the audience participation. It was so refreshing to watch two experts working their way through a system and demonstrating the benefits, constraints, and power of pair testing – something more testers (and conferences) should do! And I even got a chance to deliver some long-overdue payback by roasting my pal Elizabeth on stage at the open mic for all her years of roasting my talks on Twitter!

It didn’t really do what it said on the tin, but Martin Hynie gave a fantastic and authentic experience report about developing an AI evaluation and testing model at Credit Karma. It was raw, full of details, and personal – everything you want in an experience report. What I took away from it was that pausing for thought, continuing to ask “why”, and having a set of principles and ethics guiding your work are going to be not just important but essential to success in whatever capacity you implement AI. Excellent talk.

Angie Jones was, well, Angie Jones – charming, technically excellent, and damn entertaining as she delivered an updated version of her Air Fryers talk to include the work she’s doing at Block with their MCP Goose. The presentation was great, but it came to life in the Q&A afterwards, where some tough questions got asked about implementing MCP and agents in testing. Angie did a great job not sugar-coating anything, as all the answers aren’t there yet and in a lot of cases, we’re still figuring out the right questions.

The highlight of the week, though, was Melissa Eaden and her talk The Cautionary Tale of Generative AI, which frankly should have been a keynote. It was a straightforward, no-nonsense breakdown of the real-world consequences, court cases, and shocking morality gaps in the world of IP, copyrights, and ethical standards in using AI. I want to give Melissa credit for her bravery and guts in giving this talk at a time (and conference) full of AI hubris and a disturbing lack of risk awareness and duty of care.

Which brings me back to my initial point, as we seem to have ceded leadership of the use of AI in testing to lunatics lacking either the experience or the principles to guide the work.

Alan Turing said it perfectly when he predicted how leadership would react with “well-chosen gibberish” as the curtain was being drawn back on what they’re doing. I’d like to think my advanced years have mellowed me a bit, but I don’t think we’re fighting back enough against the endless word-salad firehose we’re all being sprayed with! Out of one side of their neck they talk about the human side of testing (like that’s a new thing) and out of the other, tell you “AI” won’t replace you – a tester “with” AI will replace you.

But let’s get one thing perfectly clear – these people ARE trying to replace you and are telling you EXACTLY that! By their own admission they have been trying for decades, as the AI in testing industry has become just another front in the war of enshittification, dehumanization, and removal of all accountability from technology for the impact of the systems they unleash on the world.

Angie made a point in her talk that the testing communities she speaks to have been the most resistant to adopting AI in their work and you know what, that made me incredibly proud of us. Testing is about information, managing risk, asking hard questions and in the face of what I’ve seen the last couple years and reinforced this week, we NEED that community – unwavering, clear eyed, badass testers more than ever!

In closing the conference, Pradeep Soundarajan delivered a scathing indictment of the current thinking and leadership of the testing community, reminding me of his early days at Moolya and taking on the testing industry in India. He is as right now as he was then: to make a better world, all we need from our leaders – and mostly from ourselves – is courage.

The testing panda – then and now

I give the organizers a lot of credit for putting together an all-star group of testers who haven’t been in the same room for a long time, and as usual, the vibe and fun were omnipresent at ATD. I heard a lot, talked a lot, met a lot of people I’d never seen in person, and – always my favorite – got to meet a lot of very enthusiastic testers, which gives me hope for the future.

And ending on a positive note, I have to thank Santosh Tuppad for all the laughter over the week. It’s been a long year personally, and hanging out with you put things in perspective and was good for my soul…love you brother!

Alan Turing FTW

Ooof! Alan Turing’s 1947 lecture on the “Automatic Computing Engine” still hits hard…

“couched in well chosen gibberish” is exactly how I’d describe what I’ve seen for AI in testing leadership right now…

The State of AI in 2025: McKinsey Report

McKinsey just published their report on The state of AI in 2025: Agents, innovation, and transformation and aside from the usual overly optimistic responses around ROI and resource reduction, it had some interesting (and troubling) data on AI risk and mitigation.

The online survey of nearly 2,000 respondents was conducted in June and July this year, drawing on a fairly broad sample of companies, regions, and industries from all over the world.

As expected, most of the enterprise is still experimenting with AI and agents and haven’t really begun to scale their efforts. Going by most of the large organizations I’ve worked with, I suspect this is primarily due to the inefficiencies of any global operation and large scale duplication of efforts.

Another unsurprising part of the report is that most of the money spent on AI is being treated as pure innovation investment and hasn’t returned anything to the bottom line yet in terms of EBIT. Most organizations have set some objectives for AI, but it looks like we’re still in the FOMO phase for the bulk of the money being thrown at AI pilots in the enterprise.

Interestingly, as companies start to figure out how to implement all this new tech, it appears we’re in for a new version of the slightly less successful “digital transformation”. Redesigning a business has to start with the underlying workflows, and the ones taking that on the chin from an operational-cost standpoint will be all of us, as the excuse that “AI is making us more efficient so we can fire people” loses credibility.

Looks like layoffs are back on the menu!

More worrying from the report is the realization of AI related risks and the lack of mitigation strategies from heavy AI users. Per the report, McKinsey has “consistently found that few risks associated with the use of AI are mitigated” including risks around “personal and individual privacy, explainability, organizational reputation, and regulatory compliance”. In fact they found that those risks have only grown since their last version of the report in 2022.

Most troubling was that almost a third of the companies participating in the report stated they have already experienced negative consequences from problems with AI accuracy. This makes sense, as accuracy is the biggest risk area to test as you deploy AI – especially generative models – and try to integrate their output into your business processes.

As more AI gets deployed into the enterprise, my instinct is that explainability and regulatory issues are going to quickly jump up the priority charts, as they can run directly into the legal system if you get them wrong. Regardless, it looks like 2026 is going to be another big year for AI adoption, and the quality engineering community had better be ready!

(Testing) Mind over (Developer) Matter(s)


“The secret of life is honesty and fair dealing. If you can fake that, you’ve got it made.”
Groucho Marx

For as long as I’ve been in the software testing business, a consistent myth I’ve encountered is that developers and testers share the same mindset. Usually this is accompanied by the view that testing is just an “activity as part of development,” and therefore developers can do as good a job at it as testers – and in many cases are positioned to do a better job.

Ironically, the reality is that the people most confident in their ability to evaluate a system’s quality are usually the least able to do so with any objectivity. In just about every other discipline of engineering this isn’t a hot take or viewed as a criticism. It’s human nature.

So today, as AI has entered our daily lives, and in the face of the enormity of the task of testing artificial intelligence systems, the idea of using LLMs to judge their own output has become mainstream. But pretending that developers alone can test, or that limited “human in the loop” checking is enough, has gone from a novel, optimistic opinion to borderline reckless.

The 2019 paper Developer Testing in the IDE: Patterns, Beliefs, and Behavior reported a large-scale field study of over 2,000 software engineers over 2.5 years in four IDEs, and the results were pretty straightforward: developers overestimated how much testing they did, misjudged the scope of that testing, and displayed a large gap between what they thought they did and what they actually tested.

The findings won’t come as a surprise to anyone who has professionally built software or been personally responsible for delivering systems with real consequences:

  • half of the developers didn’t test at all
  • most development sessions ended with zero testing
  • a quarter of the test cases were responsible for three quarters of all failures
  • TDD is not widely practiced
  • developers spent 25% of their time writing tests – but think they spend half

As I (and lots of other professional testers) have written before, the problem lies with confirmation bias. Most people don’t even think about it: when you’re creating something, you’re too busy unconsciously looking to see if it works the way you intend. And when you test it, you’re not testing for failure or the ways it might NOT work – you’re seeking validation.

Independent testers, if they’re doing their job right, should be professional skeptics. They view the exact same system and look for ambiguity, known and unknown risks, and confusing behavior to investigate. It’s an entirely different mindset that you just cannot “context switch” out of, as it’s deeply rooted in experience, mission, and the mental models we use to navigate the system under test.

So that brings us to the super-charged confirmation bias machines called LLMs and their role in testing and evaluating generative AI.

The 2021 paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? carries a warning for anyone thinking AI should be used as a judge of correctness in testing. As the authors document, LLMs can’t evaluate what’s true; they predict patterns that resemble truth. And however convincing they may seem, they don’t understand or have empathy – they only reassemble data.

And the larger these models are, the harder they are to test – yet they are increasingly used to evaluate code, designs, architecture, and even defects. This risk in isolation could be mitigated, but that’s not what is happening. The pressures of implementing AI for business ROI, coupled with a “continuous everything” culture on steroids, mean teams are effectively outsourcing their front-line critical thinking skills to a confirmation-bias monster parrot.

And LLMs are designed perfectly to amplify those pressures and bias.

Testing LLMs and systems that are AI-native or AI-enabled must do more than simulate developer validation or fill the gaps left by their testing; it must provide critical, adversarial analysis of the system under test. At the risk of being accused of my own confirmation bias, I believe the future of software quality and testing will be shaped by skilled testers who can challenge assumptions and articulate business risk, not just parrot what makes us feel good about our work.

“Does it work?” needs to be replaced by “What if it goes wrong?”

I don’t yet know exactly how to answer the question of where humans should sit in the evaluation of LLMs at the scale they are being deployed and relied upon, but I do know that how we answer that question is quickly becoming the most important differentiator in testing – and possibly all of technology.

EU Privacy Under Fire from Big AI?

“European Commission accused of ‘massive rollback’ of digital protections”

Could be not great news for consumers and vulnerable communities if this goes ahead, from the article:

“The commission also confirmed the intention to delay the introduction of central parts of the AI Act, which came into force in August 2024 and does not yet fully apply to companies.

Companies making high-risk AI systems, namely those posing risks to health, safety or fundamental rights, such as those used in exam scoring or surgery, would get up to 18 months longer to comply with the rules.”

Industry is already so far out in front of regulation we need to STRENGTHEN these measures, not delay them further.

The EU AI Act categorises “high risk” systems into two types:

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

…and more worryingly:

2) AI systems falling into specific areas that will have to be registered in an EU database:

– Management and operation of critical infrastructure
– Education and vocational training
– Employment, worker management and access to self-employment
– Access to and enjoyment of essential private services and public services and benefits
– Law enforcement
– Migration, asylum and border control management
– Assistance in legal interpretation and application of the law.

We don’t need another 18 months to consider whether this is a good idea – and in some cases, the horse has already left the barn for these protections.

There is also a widely held reading of this as an attempt to rewrite privacy laws to grant exceptions for AI companies in order to encourage innovation through training their GenAI models.

We have to do a lot better than this to win and maintain public trust in artificial intelligence or big tech.

TESTA 2025

Big congratulations to the KPMG UK Quality Engineering and Testing team for winning the Leading Vendor in Service Delivery and Consulting award at the National Software Testing dinner. A great group of people and very deserving of the recognition by the software testing industry. Congratulations!

The Search for the Real In Software Quality and Testing

“Eliminate the unnecessary so that the necessary may speak.”
― Hans Hofmann

As AI integration FOMO hurtles us towards even more pervasive technology, the testing of AI models for correctness and most importantly, their potential for harm becomes paramount to their success. That testing has to be underpinned by principles and values to guide the observations and reporting, so I was inspired by Maaike Brinkhoff bravely taking on the meaning of “quality engineering”, as well as my multiple conversations with Michael Bolton on similar lines to try to put into words some views I’ve not published in the past.

When I was at university, I had a fantastic art professor, Lyle Salmi, who really challenged me to think differently about composition, perception, and the creative process. He turned me on to Hans Hofmann and some other abstract artists, which only furthered my mild obsession with Jackson Pollock and exploring how to construct things creatively.

Hans Hofmann wrote in the “Search for the Real” about trying to move beyond imitation and finding truth through expression. At that time in my life, 20th century abstract art was more about representing ideas than directly trying to copy life – art was about the experience.

This isn’t always an easy way to think about the creative process (or life in general), but I feel like trying to represent what it means to be human entirely through the output of the artistic process is not only impossible, it’s misguided. What makes us human or “real” are the intangible things: joy, sadness, love, fear, grief, happiness – all the things that can’t be measured or counted.

Software testing has the same problem. Our business (and the vendors that dominate it) has more than a mild obsession with counting things to check if something “worked”, which over the course of my career has only resulted in reducing what “passes” for testing to a shallow performance.  The current AI mania has only accelerated the descent into mistaking running tests for testing and replacing manual, <shudder> human testers with automation, as if quality were a problem to be solved.

Complex systems like financial markets, relationships, or software used by those unpredictable things called people, are ecosystems. Treating them as complicated problems that can be “solved” through deconstructing them into test cases is as useful as trying to judge a painting entirely by how many brushstrokes were used. You can count every movement in a Pollock and miss the entire point.

Ocean Greyness – Jackson Pollock, 1953

And that brings us to the endless existential threat from the values displayed in the software testing business.

The current crop of AI bros would have you believe that the mission of testing is to reproduce human behavior through some sort of algorithm and the only important thing learned through this process is a green check. But that view again misses the point entirely – risk to your business isn’t managed through failed automated checks, it lives in empathy, intuition, observation, all things that can’t be scripted through your algorithmic-defect-predict-o-nator.

If testing is a performance, then its interpretation can’t focus on mechanics alone. More than ever, our systems must be evaluated not just on whether they “work”, but on the impact of that work on the world.

I used to live in New England, and one Father’s Day my family took me to East Hampton for a tour of Jackson Pollock’s studio. Standing on that floor, breathing the air, looking at the jars of brushes ticked a big one off my bucket list, and as it was a quiet weekend and we were the only ones there, the guide let me back in after the tour to check it out by myself.

I lay in the middle of the studio for a long time trying to imagine what went on in there – the sounds of his feet shuffling, the wild and controlled movements, the smell of cigarettes and the wood-burning stove – all as important to his art as the final “product”. It meant more to me than all the time I’d spent just looking at his paintings, and when I see them now, that experience – that expression of his art – has deepened my understanding of his and my own humanity.

I don’t know exactly where we’re heading in the testing business with the integration of AI, but software systems are actually alive – alive with human interactions, expectations, and emergent behavior – and testing is how we make sense of it all in the context of what matters: people. I’ve always believed that testing is both art and science, and to reduce it to output checking is to deny what makes testing – and what it means to be human – real.

That’s what’s always been true about testing, and in my experience the people and companies who understand that and apply it to their strategy will see it through…

Testing AI at KPMG UK

Very pleased to announce that I have been asked by KPMG UK to lead their strategy for testing artificial intelligence software, including the integration of AI into their own businesses.

The software testing industry has faced multiple challenges over the course of my career, but few tools have had as much potential for risk to your business as a poor quality AI implementation.

Over the coming months I’ll be working on the risk-based approach and test automation for AI systems including the incorporation of our Trusted AI principles with my colleagues in the Quality Engineering and Testing team.

I look forward to continuing to write and speak about responsible AI and what it means for quality engineering and testing.

Get in touch or follow me here if you want to talk about what we have planned…thanks!

To Infinity and Beyond! The death of test engineering… (TAD 2026)

Can’t wait for Test Automation Days 2026 to unload 20+ years of pent up frustration with the test automation business! You might agree or disagree with me, but it’ll be entertaining for sure…hope to see you there!

To Infinity and Beyond! The death of test engineering…

One of the few benefits emerging from GenAI mania has been the acceleration of the long-overdue death of test engineering. For decades we’ve made excuses for the ROI of test automation never materializing while the business watched testing costs rise, headcount increase, and software quality stagnate (at best).

In this talk I take a look at self-healing systems, GenerativeTest-o-nators, and autonomous testing platforms that might finally stop test automation engineers from trying to count to infinity. So brace yourself and buckle up!

QR Podcast – Paul Holland

Paul Holland is an expert at transforming how organizations test to be more efficient, more valuable to stakeholders, and faster at finding important bugs. He’s built and led test teams all over the planet and, in my opinion, is probably the best hands-on test director I’ve ever worked with in my career.

He’s one of my closest friends in the testing business, having worked, taught, and spent some time in the barrel together, and he’s always a good laugh. Listen in as we talk metrics, great test reporting, Taking Testing Seriously, training testers, and just generally give each other a hard time…enjoy!

Honorable mentions: Susan Finley, Vernon Richards, Huib Schoots, Eric Proegler, AST, ATD James Bach, Michael Bolton

Published! Taking Testing Seriously: The Rapid Software Testing Approach

So happy to announce that Taking Testing Seriously: The Rapid Software Testing Approach by James Bach and Michael Bolton has been published and is available to buy!

I’ve been a fan of RST and their software testing training program for a long time and consider it the only serious way to build competent testers.

I was also incredibly honored to be asked to contribute a chapter on “How Testing Looks to Management”, including my thoughts on what’s worked in building credible test teams throughout my career. (Excerpt below…)

I really hope you enjoy it, a lot of people worked really hard for a long time on this, and it’s inspired me to start working on my own book…enjoy!

Cc: Special thanks to Huib Schoots and Marius A. Frâncu

How Testing Looks to Management

“Projects suffer from unwarranted optimism because they don’t have good headlights and rely too heavily on biased systems for navigation. If you’re looking for confidence to release something, my experience is that you’re going to see what you want to see.

It’s like when we’re driving: we have all these things to help us understand and control things about the car, all these monitoring systems, and the steering wheel and our eyes and senses. When they’re right in front of you, you’re going to have a bias towards looking at those things. And that’s great—on the assumption that we’re on the right track and there’s nothing in our way, but the dials and speedometer and tachometer don’t show us what’s actually in front of us on the road.

Testing is meant to provide some distance from the dashboard, to help you see what’s on the road ahead of you and around you, not just what’s right in front of you. You want information that’s at some critical distance from your processes or project or the context in which you’re building the system.”

Not so Psychologically Safe

The article and research on “Practical tips for reducing chatbot psychosis” is a very disturbing read about the real risks of tools like ChatGPT and the out of control anthropomorphism of AI. Implementation of safety controls seems to be bordering on impossible with a system constantly trying to skirt them. From the article:

“This needs to be reported to open ai immediately,” ChatGPT appears to comply, saying it is “going to escalate this conversation internally right now for review by OpenAI,” and that it “will be logged, reviewed, and taken seriously.”

Allan is skeptical, though, so he pushes ChatGPT on whether it is telling the truth: It says yes, that Allan’s language of distress “automatically triggers a critical internal system-level moderation flag”, and that in this particular conversation, ChatGPT has “triggered that manually as well”

A few hours later, Allan asks, “Status of self report,” and ChatGPT reiterates that “Multiple critical flags have been submitted from within this session” and that the conversation is “marked for human review as a high-severity incident.”

None of that was true.

The research into this incident is indicative of an AI industry that is not taking safety seriously and the increasingly pervasive artificial intelligence technology ecosystem should give everyone pause for thought about adoption and usage.

The implications for quality engineering and software testing are immense and the focus on implementation, usage, risk, and harm are no longer a “nice to have” for your testing approach. We have a duty of care to those most vulnerable and understanding the wider impact of the systems can only make our efforts better, or as the author said:

“There’s still debate, of course, about to what extent chatbot products might be causing psychosis incidents, vs. merely worsening them for people already susceptible, vs. plausibly having no effect at all. Either way, there are many ways that AI companies can protect the most vulnerable users, which might even improve the chatbots experience for all users in the process.”

Risk Lives in the Details

Risks to your business are found in the details, so get deep into it with your teams. I cannot bang on about this enough, and am consistently surprised by how little it’s talked about in the software testing industry.

I still read EVERYTHING produced by a project, review the controls, processes, culture, and anything else I can get my hands on to see how their approach to testing is putting the business at risk.

I’m currently working on an enterprise AI transformation and in the rush to get things done, GenAI mania, and a healthy dose of FOMO, a fair bit of risk management is being missed or glossed over. No one thing is at fault, but all of them working in concert are punching holes in risk management.

AI systems are inherently complex and, with many producing non-deterministic output, very difficult (if not impossible) to test, so a clear-eyed view of risk is paramount.

If you’re not familiar with “How Complex Systems Fail” by Richard Cook, check it out today as it’s a goldmine for ideas to review systemic risk. Here are some examples you can use as a lens to view your work:

  • Catastrophe requires multiple failures – single point failures are not enough.

The array of defences works. System operations are generally successful. Overt catastrophic failure occurs when small, apparently innocuous failures join to create opportunity for a systemic accident. Each of these small failures is necessary to cause catastrophe but only the combination is sufficient to permit failure.

  • Complex systems contain changing mixtures of failures latent within them.

The complexity of these systems makes it impossible for them to run without multiple flaws being present. Because these are individually insufficient to cause failure they are regarded as minor factors during operations. Eradication of all latent failures is limited primarily by economic cost but also because it is difficult before the fact to see how such failures might contribute to an accident.

  • Complex systems run in degraded mode.

A corollary to the preceding point is that complex systems run as broken systems. The system continues to function because it contains so many redundancies and because people can make it function, despite the presence of many flaws. System operations are dynamic, with components failing and being replaced continuously.

Good questions to be answered by test management…

  • Does our test approach look at systemic risk and where multiple individual vulnerabilities could compound into catastrophic failure?
  • Have we accounted for change that might not be directly relevant to our test plan that could trigger latent issues?
  • Do we understand where our systems are ALREADY broken (defect data), and where weaknesses could exist but not be readily apparent through our test strategy?

Good luck and keep testing!

QR Podcast – Fiona Charles

Fiona Charles is an absolute legend in the software testing world and I had the honor of sitting down with her to discuss tech ethics, the human side of technology, her adventures with Jerry Weinberg, and her award winning storied career.

Fiona is an encyclopedia of software delivery techniques for communication, risk management, and how to deliver projects with integrity. I’ll do my best to add in all the links to every resource she mentioned so enjoy!

Links in Podcast: EuroSTAR Ministry of Testing AST Cast Agile Testing Days Maria Kedemo Michael Bolton James Bach Secrets of Consulting Rethinking Expertise Cynefin The Gift of Time Agile Manifesto The AI Con ACM Code of Ethics BCS Code of Ethics

The Great Batman Debate

The Ministry of Testing has a great tradition at their conferences called “99 Second Talks” where anyone can get up and talk about anything for 99 seconds. It’s a great way to introduce new things, give speakers an opportunity to get on stage without a CFP, or work out ideas for future talks. I’ve never given a 99 Second Talk, so at TestBash 2025 I decided to give it a go.

I shared a story that, if you don’t know me, you probably didn’t realise: this summer I went through something that taught me a lot about overcoming adversity, resilience, and picking yourself up from defeat.

This summer I decided to take on my teenaged son in a structured debate about whether or not Batman was an antihero – my position being that Batman IS an antihero.

What started out as a fun conversation over a burger quickly escalated into a full-blown debate with weeks of preparation, presentations created, and a LOT of trash talking leading up to the big day. I’ve always considered myself a fan of Batman, but my son has a knowledge of the character, his history, the comic books, and the lore well beyond me. Despite the clear disadvantage, I’ve always liked a fight and kinda debate quality engineering for a living, so I felt I had a puncher’s chance and threw down the gauntlet.

The judging panel, made up of my long-suffering wife and older son, had to sit through our 20-minute presentations and then a 10-minute Q&A after being briefed on the “canon” on which we agreed to base our arguments. We also agreed on definitions of “hero” and “anti-hero”, including three categories we had to make arguments “for” or “against”: how well the character fit the definition, how well his motivation matched the description, and how it was evidenced through his actions.

Needless to say, I got my ass handed to me, completely destroyed, defeated on every point in our agreed terms – he mopped the floor with me.

My Batman Anti-Hero arguments…

I felt I had landed some punches, and I’d done a lot of reading and research, including a really good book, Batman and Ethics, which explores the complexities and contradictions of Batman’s aversion to killing, the moral status of vigilantism, and his use of torture in pursuit of justice. Ultimately, though, I was way out of my depth, and my arguments had some pretty big gaps coupled with some fundamental misunderstandings of the character and stories.

It was really fun in our nerdy family way, and I was super impressed with my son’s presentation, research skills, and flair for the dramatic in his talks – something he probably inherited honestly. It also reminded me of some home truths I’ve learned in the software testing business that everyone (not just testers) should take to heart:

  • Have courage to challenge the experts – even in their field of expertise…
  • Preparation makes up the biggest part of success…
  • Failure is your greatest teacher – unless you fail you probably haven’t learned anything useful…
  • Winning an argument is rarely done by facts and data alone…

Anyway, it was a really fun experience and I can’t wait until our next debate on a less controversial topic: which Superman is better, Snyderverse or James Gunn! LoL!

Software Testing GriftopAI

The grift in the software testing business never ends…

I’ve spent a LOT of time lately reviewing docs, sitting through demos, listening to “experts”, and enduring the bombardment of AI slop being hurled from testing vendors and let me tell you – ain’t nothing new under the sun in our industry.

Testing hasn’t changed. Testing hasn’t fallen behind. Testing isn’t the bottleneck. Testing isn’t actually the problem.

The problem is a vicious cycle of unsustainable rates of change requiring endless system #enshittification to meet the demands of an increasingly pervasive technology ecosystem run by bonus-driven caretaking management.

But that hasn’t stopped the AI grift from going into overdrive – selling solutions that don’t exist for problems that aren’t real to people who are quite happily using that as cover to fire people they never should have hired in the first place.

If you work in software testing here’s my advice:

– Learn everything you can about AI and the language being bandied about, and figure out your entry into that world using the “right words”. A LOT of what I am seeing is just old concepts being renamed, redescribed, or just hijacked for marketing purposes. Old wine into new skins…

– Learn about test design, experimentation, exploratory testing, and how to TALK ABOUT RISK. These are the skills that have never gone out of demand and will be super important if any of this AI mess gets to production…

– Learn about the regulatory environment, as despite my scepticism about enforcement, there are a whole bunch of new implications about what’s real, who’s at fault, and unasked questions about agency and pushing slop to production…

Every bubble bursts (or at least deflates a little) and as I’ve said before, I don’t think all these “early movers” have an advantage over people taking their time for some critical thinking. Frankly, the testing business doesn’t seem to have a clue right now anyway, so concentrate on core skills, learn the lingo, and watch the firings continue (I’m looking at you test automation engineers)…

Enjoy!

The price of doing (AI sloppy) business…

Unfortunately, instead of curbing behaviour this will probably just get priced into using AI slop in your business, just like fines in financial services…

Leading with Quality/TestBash 2025 – A short review…

Back at work after spending last week in Brighton delivering part of a workshop on Leading with Quality for the Ministry of Testing and then hanging out at TestBash 2025. I had a great time catching up with old friends and meeting a lot of new people, and actually got to listen to the talks, which, frankly, is a luxury I don’t typically get as a speaker.

Delivering my part of the Leading with Quality workshop was fun and a little hectic trying to fit all the content and discussions into an hour slot, but I think we got there and the feedback has been great (so far). My part was about getting (and keeping) a seat at the table for testers, starting with WHY testing doesn’t typically get involved in decisions that affect time, money, or people. Testing is a function of risk management, and its value proposition is context dependent, so the quickest ways to undermine testing’s credibility (and your own) – the common mistakes I see testers make – are:

  • Not understanding or speaking the language of your project…
  • Making testing personal…
  • Word policing…
  • Not having a pragmatic approach to quality and risk…
  • Not actively listening to the team, management, or your clients…

Your ideal form of influence is first to help people see their world more clearly, and then to let them decide what to do next – risk management is your objective, not testing.

TestBash 2025 – Leading with Quality

As for the rest of the conference, the highlights of the week included “A day in the life of a Quality Lead”, a great experience report from Elizabeth Zagroba. I am probably biased, but I have always enjoyed Elizabeth’s open and frank delivery of her talks, and I would highly recommend putting her blog on your “must read” list – comb through the backlog for great writing on glue work and exploratory testing, and check out her Friends of Good Software conference.

I always love when I get to see new speakers or folks giving their very first conference talk, and amongst the multiple first-timers, Demi Van Malcot gave a great talk about continuous quality and the various forms it takes when trying to implement the model. She did a fantastic job despite a lot of technical difficulties at the start that would have thrown off many veteran speakers, so great choice, MoT, and I hope to see her speak again. (BTW, I think Dan Ashby is owed a quarter…)

By far, the best talk of the conference was Joep Schuurkes presenting the test and development approach being used in the Dutch Municipal Elections of 2026. Aside from having the best flow chart on defect management I’ve seen in a long time, Joep went into great detail about how agile techniques were being applied, whether or not they worked, the implications of working in public, and some real practical examples and advice on how to get such an important project done right.

I have to admit, at times I was left wondering what decade I was in while listening to a lot of the talks, which is probably a function of being in this business for so long, but also of the lack of investigation and curiosity in our community. I love it when people new to testing discover something on their own or come to conclusions that are aligned with the years of research and resources available on testing, but the lack of fresh takes or ideas was a little depressing.

All in, it was a great couple of days learning about some new stuff in our business, confirming some things I already knew, and mostly just feeling really grateful for the friends I have in this business (go and buy a Risk Storming deck!)…enjoy!

Test Automation Days 2026

I’m very happy to announce that in March 2026 I’m joining a fantastic line up of keynotes at Test Automation Days including:

Anne-Marie Charrett, Principal Automation Testing
Marit van Dijk, Developer Advocate
Richard Bradshaw, Senior Architect, Build & Podcast Producer, The Vernon Richard Show

Early bird tickets are available until November 1, 2025
Hope to see you there! Register HERE

EuroSTAR Testing Excellence Award

Earlier this month at EuroSTAR in Edinburgh, I was unexpectedly honored with the Testing Excellence Award after being nominated by my team at KPMG. Undeservedly receiving the same award as one of my heroes, Jerry Weinberg, I was also introduced on the evening by one of my mentors and friends Michael Bolton.

As I said on the night, software testing is a team sport, and for over 25 years I’ve been working with some of the best testers from around the globe (12 countries) on countless projects for Fortune 100 companies. One of my proudest achievements in managing the careers of software testers is that my teams have produced a CTO from UC Berkeley, 11 Global Heads of QA, 7 Test Directors, 11 Vice Presidents, 2 Global Product Owners, and 2 Heads of Engineering from companies all over the world.

I have also tried to contribute to the software testing industry body of public knowledge through publishing 100s of articles and interviews on Quality Remarks, LinkedIn, and loads of personal and professional journals/blogs. I’ve also had the privilege to present talks and workshops at testing and technology conferences and meetups all over the world on topics including test strategy, managing test teams, and working with C-level sponsorship including: AST CAST, QAI, EuroSTAR, STARCanada, STAR East, KWSQA, BCS, MOT, ATD, RTC, Copenhagen CT, STP Con, QualityJam, NSTC, Tricentis Accelerate, and expo:QA. 

Whether being elected to the Board of the Association for Software Testing and serving as its Executive Vice President, developing a training and job placement program with the non-profit Per Scholas, or working with Roya Mahboob to get the Afghan Girls Robotics Team to ATD, I’ve also tried to give back to the community through volunteering and social justice and equality initiatives.

I owe a great deal to the international testing community and have been incredibly fortunate for all the speaking, publishing, mentoring opportunities to try to make positive changes for testers and technologists all over the world. It was an honor to receive this award and I will forever be grateful to the folks at EuroSTAR and my team for the recognition of my contributions to the software testing industry.

Thank you!

AI in Software Testing Starter Pack

My starter pack reading list for AI in software testing…please feel free to add to the list in the comments! We got a lot of work to do friends, being informed is just the start…enjoy!

The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want – Emily Bender and Alex Hanna

Artifictional Intelligence: Against Humanity’s Surrender to Computers – Harry Collins

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? – Emily Bender, Timnit Gebru, Angelina McMillan-Major, Margaret Mitchell

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity – Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, Mehrdad Farajtabar

The Pursuit of Fairness in Artificial Intelligence Models: A Survey – Tahsin Kheya, Mohamed Jenek, Sunil Aryal

Against the Commodification of Education – Dagmar Monett, Gilbert Paquet

AI Now Landscape Report – Kate Brennan, Amba Kak, and Dr. Sarah Myers West

Developer Testing in the IDE Patterns Beliefs and Behavior – Beller, Gousios, Panichella, Proksch, Amann, Zaidman

Hiroshima Process International Code of Conduct

Responsible AI: Implement an Ethical Approach in your Organization – Olivia Gambelin 

Full list of my resources here: QR Resources

QR Podcast – Alessandra Moreira

Ale Moreira is one of my oldest friends in the testing world, we got to know each other when we were elected to the Association for Software Testing Board of Directors together and then hung around in NYC while we worked at various companies and consultancies.

She has always been a fantastic hands-on engineer in testing, and we see a LOT in common when it comes to talking to the business about the value of testing. Her newsletter Road Less Tested is on my list of “must read” testing blogs and she co-hosts the Engineering Quality podcast with a couple of her pals, so listen as we talk selling testing to the business, life after testing “death”, and what it takes to lead quality engineering teams in this brave new world…enjoy!

Links from the chat: Association for Software Testing Mindset – Dr Carol Dweck Thinking Fast and Slow – Daniel Kahneman Rethinking Expertise – Harry Collins Leading with Quality – Ministry of Testing The Quality Coach’s Handbook – Anne-Marie Charrett

Keith Klain - QMC

Failure to Launch

Another one for the AI “what could possibly go wrong” file…

From the article, apparently they want the General Services Administration (GSA) “to operate like a software startup, and proposed a whole-of-government, AI-first strategy to automate much of the work done by federal employees today.”

Aside from not even being able to launch properly without leaking their entire plan on GitHub, it’s pretty clear why there has been an all-out assault on State-level AI regulation via a proposed 10-year pause of “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.”

Further evidence as well that the majority of AI fanboys are saying the quiet part out loud behind closed doors: AI hype is sold as a way of getting rid of all these damn people…

Buckle up my tester friends, it’s going to be a long, hot summer…

Your Regular Reminder

Your regular reminder as apparently, FOMO and euphoria have taken the day:

  • Thinking critically about a technology is NOT being AGAINST it…
  • Considering safety and harm in a technology’s use is NOT being AGAINST it…
  • Being critical of the marketing of a technology is NOT being AGAINST it…
  • Having principles and values drive your adoption of technology is NOT being AGAINST it…
  • Calling out unverifiable claims used misleadingly to sell a technology is NOT being AGAINST it…

As a software testing community, we can do a LOT better than this and frankly, right now people need us to…

EuroSTAR 2025 – Edinburgh

What a week in beautiful Edinburgh for EuroSTAR 2025 – I had a great time with the team from KPMG, who sponsored a booth in the Expo for the first time. As at other times in my career, I am so lucky to have passionate testers to work with who care about our craft and community, and the team at EuroStar Conferences was amazing to work with in getting us over the line in time!

The theme this year was “AI on Trial”, and based on what I observed and the conversations I had over the week, it was an inspired choice, as it really feels like we are at an inflection point in the software testing business. René van Veldhuijzen’s “FOMO Sapiens” could not have been more spot on with the mood right now, as we careen back and forth between “AI-first-all-the-time” bandwagons and AI skeptics being dismissed with the feverish pitch of tulip speculators!

So what were some of my observations and takeaways from the week in Scotland?

  • We need skilled testers now more than ever! To put a fine point on it: IMO the AI in Testing crowd seems to be driven by people who have never been testers, never been responsible for software quality, don’t study testing, and don’t give a damn about testers. I personally or anecdotally observed too many “testing is going away” pitches this week from vendors or “consultants” who look at testing as just a spreadsheet exercise. We’ve been here before with the “continuous everything no code” mania, but this tsunami of BS right now hits different.
  • The testing industry needs to reclaim its ethical responsibility to call out unverifiable claims, wishful thinking, and the general nonsense that’s getting shot into the atmosphere right now. I don’t know if it should be included in a code of conduct or not, but there has to be a line somewhere, and it has been crossed when you’re making claims that CANNOT be verified. I’ve never advocated for testing to be the project police, but somewhere in the agilification of everything, testing (and testers) lost their primary role of critical thinking…
  • Despite the existential threats to testing, there’s still hope with the folks new to our business. I am fortunate to work with a bunch of young testers at KPMG like Vicky Li, Jess Sarzosa, and Lucy Taylor, but this week I got the opportunity to meet some fantastic testers from Murex. Elissa Tahech gave her first ever conference talk and absolutely killed it – great story telling, funny, and straight to the point. A future star in our industry if she chooses to stick with it…
  • Lastly, I had the unexpected and distinct honor of being given the EuroSTAR Testing Excellence Award, which is one of the highlights of my career. As I said on the night, software testing is a team sport and any success I’ve had in this business has been down to the incredible people I’ve had the privilege to do this work with. I strongly believe in leading teams by meeting people where they are, coaching and mentoring by doing together, and not asking your teams to do things you wouldn’t do yourself. I’ve had the honor of watching teams I’ve led produce a CTO from UC Berkeley, 11 Global Heads of QA, 7 Test Directors, 11 Vice Presidents, 2 Global Product Owners, 2 Heads of Engineering, and now a COO from companies all over the world. I hope I am lucky enough to continue to work with such amazing people and teams!

Altogether we had a lot of fun, and it was a fantastic week filled with great conversations, reuniting with old friends, and making new ones.

Hope to see you in Oslo next year!

CMI Chartered Management Consultant

Very happy to share that I have been awarded Chartered status by the Chartered Management Institute. Thanks to KPMG for sponsoring me and everyone in the Quality Engineering and Testing team for the support!

What Are We Thinking in the Age of AI? – Michael Bolton

This is a great talk from Michael Bolton about how software testers should critically think and talk about AI in testing.

IMO it is our responsibility as testing professionals to cast a (very) sceptical eye on any claim, but even MORE so in the age of AI hype…enjoy!

Here’s my podcast with James Bach he referenced: QR Podcast – James Bach | Quality Remarks Keith Klain

QR Podcast – Santosh Tuppad

These days, your philosophy about testing is more important than ever and a differentiator in the software testing market. Santosh Tuppad has been leading from the front since I first met him as a rising star at Moolya and through his workshops on security testing with me in the Bronx or in Germany with the Afghan Girls Robotics team.

Come take a listen as we talk about the current state of testing skills training in our business, cyber security in the age of AI, and travel far off the beaten path into philosophy, true happiness as a tester, and how many cheese fries is enough…enjoy!

Telling on Yourself

If someone you work with is making the following statement, “We don’t need to hire any more testers, we can do this all with AI!”, I may not know that person, but I can tell you a couple things about them…

– They are deeply unserious – they clearly have not thought about or researched anything related to AI, software engineering, software quality, software testing, or risk management.

– They don’t care about people – I have yet to see this “thinking out loud” BS not being used as a thin veneer to cover the age-old question of “how do I get rid of all these people?”.

– They shouldn’t be responsible for anything to do with producing products that impact society – this is the kind of “blue sky” question that tries to “challenge the status quo” with no regard whatsoever for the impact or harm caused to real people, and ends up “inventing” buses again.

As before, when the only “shifting left” was into vendors’ pockets from unsuspecting clients: you’re not dumb – you’re being misled.

And just to get in front of the “youreonlyprotectingyerjobs” AI fanboys – you’re goddamn right I am!

I care a LOT about the people testing software and systems every day, and I can tell immediately if you’ve never had to live with that responsibility.

Do better…

EuroSTAR 2025 – Principles Drive Trust in AI

The following is a post I wrote for the EuroSTAR blog as KPMG UK are going to be at the expo this year up in Scotland…hope to see you there!

Principles Drive Trust in AI

The pace at which “artificial intelligence” (AI) is being incorporated into software testing products and services creates immense ethical and technological challenges for an IT industry that’s so far out in front of regulation, they don’t even seem to be playing the same sport.

It’s difficult to keep up with the shifting sands of AI in testing right now as vendors search for a viable product to sell. Most testing clients I speak to these days haven’t begun incorporating an AI element into their test approach, and frankly, the distorted signal coming from the testing business hasn’t helped. What I’m hearing from clients are big concerns around data privacy and security, transparency on models and good evidence, and the ethical issues of using AI in testing.

I’ve spent a good part of my public career in testing talking about risk, how to communicate it to leadership, and what good testing contributes to that process in helping identify threats to your business. So I’m not here to tell you “no” to AI in testing, but talk about how KPMG is trying to manage through the current mania and what we think are the big rocks we need to move to get there with care and at pace.

KPMG AI Trusted Framework

As AI continues to transform the world in which we live – impacting many aspects of everyday life, business, and society – KPMG has taken the position of helping organizations utilize the transformative power of AI, including its ethical and responsible use.

We’ve recognized that adopting AI can introduce complexity and risks that should be addressed clearly and responsibly. We are also committed to upholding ethical standards for AI solutions that align with our values and professional standards, and that foster the trust of people, communities and regulators.

In order to achieve this, we’ve developed the KPMG Trusted AI model as our strategic approach and framework to designing, building, deploying and using AI strategies and solutions in a responsible and ethical manner so we can accelerate value with confidence.

As well, our approach to Trusted AI includes foundational principles that guide our aspirations in this space, demonstrating our commitment to using it responsibly and ethically:

Values-driven

We implement AI as guided by our Values. They are our differentiator and shape a culture that is open, inclusive and operates to the highest ethical standards. Our Values inform our day-to-day behaviours and help us navigate emerging opportunities and challenges.

Human-centric

We prioritize human impact as we deploy AI and recognize the needs of our clients and our people. We are embracing this technology to empower and augment human capabilities — to unleash creativity and improve productivity in a way that allows people to reimagine how they spend their days.

Trustworthy

We will adhere to our principles and the ethical pillars that guide how and why we use AI across its lifecycle. We will strive to ensure our data acquisition, governance and usage practices uphold ethical standards and comply with applicable privacy and data protection regulations, as well as any confidentiality requirements.

KPMG GenAI Testing Framework

The KPMG UK Quality Engineering and Testing practice has adopted the Trusted AI principles as an underpinning model for our work in AI and testing. We are focusing our initial GenAI Testing Framework on specific activities to extend the reach of testers while allowing risk management to be insight led and governance to be human centric. This is accomplished by incorporating our principles into the architecture, including:

Tester Centric Design

The web-hosted front-end is where testers can securely upload documents, manage prompts, and access AI generated test assets to use or modify. Testers can create and modify rules allowing consistent application and increased control of models and responses.

Transparent Orchestration

The orchestration layer sits at the heart of the system and manages the flow of data between different components to ensure seamless execution while providing transparency on the models being deployed.

Secure Services

The Knowledgebase contains the fundamental services powering the AI solution and storing input documents, test assets, and reporting data as well as domain and context specific information you design.
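To make the three layers above a little more concrete, here is a minimal, purely illustrative sketch of what a tester-centric orchestration flow might look like. Every name here (the classes, the stub model, the rule mechanism) is my own invention for illustration – this is not KPMG’s actual implementation, just the general shape of the idea: testers control prompts and rules, the model in use is explicit, and every call is auditable.

```python
from dataclasses import dataclass, field

@dataclass
class TestAssetRequest:
    """What the tester submits via the front-end (names hypothetical)."""
    document: str                               # tester-uploaded input document
    prompt: str                                 # tester-managed prompt
    rules: list = field(default_factory=list)   # tester-defined rules applied to every response

class Orchestrator:
    """Routes data between the front-end, the model, and the knowledgebase,
    recording which model produced each asset for transparency."""

    def __init__(self, model_name, model_fn):
        self.model_name = model_name
        self.model_fn = model_fn    # injected, so the model being deployed is explicit
        self.audit_log = []         # human-reviewable record of every call

    def generate(self, request):
        raw = self.model_fn(request.prompt, request.document)
        for rule in request.rules:  # consistent, tester-controlled post-processing
            raw = rule(raw)
        self.audit_log.append({"model": self.model_name, "prompt": request.prompt})
        return raw

# Usage with a stub model – a real deployment would call an actual LLM service.
stub_model = lambda prompt, doc: f"[draft test cases for: {doc}]"
orch = Orchestrator("stub-model-v1", stub_model)
req = TestAssetRequest(document="login flow spec",
                       prompt="Generate test cases",
                       rules=[str.upper])  # trivial rule: normalise to upper case
print(orch.generate(req))
```

The design point is that the tester, not the tool, owns the prompts and rules, and the audit log keeps the human in the loop on exactly which model produced what.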

There remains a great deal to be worked out regarding AI in software testing, and we are just at the discovery phase of what it can – and should – do for system quality. Whatever the future holds, your strategy has to be grounded in principles and values that reflect an ethical approach: putting the tester at the centre of the process, being transparent about models and data, and making safety and security your primary objective.

About the Author

Keith Klain is a Director of Quality Engineering and Testing at KPMG UK and is a frequent writer and speaker on the software testing industry.

QR Podcast – James Christie

I first got to know James in 2014 when the Context-Driven community was organising (including yours truly) against the rent-seeking of the ISO 29119 test standard, and as co-chair of CAST for the Association for Software Testing, I invited him to give a talk on Standards – promoting quality or restricting competition?

His blog is a “must read” for any serious software tester, and his clear-eyed work on investigating the Post Office Horizon IT Scandal (including his submission) is the benchmark our community should be striving for when it comes to integrity, ethics, and professional standards.

Listen in as we discuss what went wrong at the Post Office, good evidence, the role of government in technology, the brewing collision between regulation and IT, and how after so long in this business you can keep your teeth sharp! Enjoy!

Links from discussion: What is Good Evidence – Griffin Jones EU fines Apple €500M and Meta €200M for breaking Europe’s digital rules DORA Act EU AI Act UK Murder Prediction US Autism Database

You Broke AI…You Bought AI

Just to be clear about something: if you are selling or advocating for some sort of AI in software testing tool, framework, agent, model, or defect-predict-O-nator and you are NOT focusing primarily on safety, security, or HARM – you are NOT doing your job.

The testing industry has practically abdicated its role in this regard, so it is up to individuals to be the vanguard and it is not panic, job insecurity, or resistance to change driving their valid concerns.

I am seeing some very senior people with lots of influence making IMO/E poor decisions on how to frame where we are with AI in testing – what it is, what it isn’t, what it can do, and what it shouldn’t.

For even more clarity, if you are dismissing concerns about artificial intelligence in testing, quality engineering, or test automation as any of those things you are partially responsible for what happens when those tools are used to create software and systems that do harm.

Period.

Testing by HUXTR

Love seeing all these AI fanboys in software testing discovering for the first time that testing isn’t about technology – that’s the easy part!

“Wow – I just found out that testing is HUMAN centred activity and that inconvenient, squishy people part can’t be automated away…I’ll call it, TBH (Testing by Humans), or XTH (Experience Tested Humanity), or maybe…HUXTR (Human Uplifted Experience Testing Robot)!”

They sound like those techbro goofs who thought they discovered bus stops, but I guess that’s what happens when you use ChatGPT as a search engine – down the memory hole it goes!

The LLM market must be peaking as we seem to have hit a ceiling on what’s being sold in AI in testing…we live in hype – I mean hope!

MoT TestBash 2025 – Leading with Quality

Very happy to announce I’ll be participating in TestBash 2025 through their Leading with Quality day down in Brighton. Hope to see you there!

A one-day educational experience to help businesses lead by expanding quality engineering and testing practices.

The quality space is evolving. This presents difficult challenges, but with it also comes immense opportunity. The world is changing. Technology is becoming more complex. We need to innovate the way we work, by exploring together.

What’s more… Our identity as people who care about quality needs love. We are the believers. We understand that quality does not happen on its own. It’s not just about testing or bugs, it’s about the whole system.

We want to help you lead this change. Together, we can find a pathway forward.

Who is leading the day?

To help us explore the challenges of quality engineering we have four good people leading the day.

  • Nicola Sedgwick: Quality-focussed leader and avid enthusiast of agile ways of working who loves the way technology can enhance and transform the world around us.
  • Ash Hynie:  A technical cultural strategist who propels organizational objectives to better a company’s diversity, inclusion, and belonging practices by enhancing access, equity, and people development programs.
  • Barry Ehigiator: A seasoned professional in software development and testing, who is passionate about empowering individuals and teams to deliver quality software products to their clients.
  • Keith Klain: Director of Quality Engineering leading software quality, automation, process improvement, risk management, and digital transformation initiatives for retail banking and capital markets clients

What can you expect?

  • Four sessions from leaders thinking deeply about what the future of quality software looks like
  • The opportunity to network and build relationships with other leaders in quality
  • Deep-dive conversations on topics that matter to you
  • A space to think strategically and explore day-to-day challenges

Why attend?

  • Contribute to the future of quality
  • Understand the challenges that exist
  • Learn from and with others like you
  • Build a supportive network that lasts
  • Develop the next wave of quality-focused models

Who is it for?

  • Engineering managers
  • Head of Testing/Quality
  • CEOs and CTOs
  • Staff Engineers
  • Team leads

Minority Report…the Sequel?

File another one under “what could possibly go wrong”. From the article:

“Statewatch says data from people not convicted of any criminal offence will be used as part of the project, including personal information about self-harm and details relating to domestic abuse. Officials strongly deny this, insisting only data about people with at least one criminal conviction has been used.”

It will be interesting to watch the gymnastics as they try to get this into compliance with the EU AI Act, which comes into force in August. The act specifically deals with safety and potential harm through the risks of AI.

“The AI Act introduces a uniform framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach:

Minimal risk: most AI systems such as spam filters and AI-enabled video games face no obligation under the AI Act, but companies can voluntarily adopt additional codes of conduct.

Specific transparency risk: systems like chatbots must clearly inform users that they are interacting with a machine, while certain AI-generated content must be labelled as such.

High risk: high-risk AI systems such as AI-based medical software or AI systems used for recruitment must comply with strict requirements, including risk-mitigation systems, high-quality of data sets, clear user information, human oversight, etc.

Unacceptable risk: for example, AI systems that allow “social scoring” by governments or companies are considered a clear threat to people’s fundamental rights and are therefore banned.”
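To make the four tiers above concrete, here is a purely illustrative sketch that maps the example systems from the summary to their risk categories. This is my own paraphrase of the quoted text for illustration only – not a legal classification tool, and the category names and examples are simplified:

```python
# Illustrative mapping of the example systems named above to the
# AI Act's four risk tiers, paraphrased from the summary quoted in
# this post – NOT a legal classification tool.
RISK_TIERS = {
    "spam filter": "minimal",
    "ai video game": "minimal",
    "chatbot": "specific transparency",           # must disclose it's a machine
    "medical software": "high",                   # strict requirements, human oversight
    "recruitment screening": "high",
    "government social scoring": "unacceptable",  # banned outright
}

def classify(system: str) -> str:
    """Return the risk tier for a known example system, else flag for review."""
    return RISK_TIERS.get(system, "unclassified - needs human assessment")

print(classify("chatbot"))                    # specific transparency
print(classify("government social scoring"))  # unacceptable
```

The point of the risk-based approach is exactly this asymmetry: most systems carry no new obligations, while a small, explicitly enumerated set is heavily regulated or banned.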

In what world is this compliant or even remotely ok? We don’t have to accept this dystopian future being forced on us, but all the warnings are just being blown right past. Hope you’re happy AI fanboys, the technology community better wake up before it’s too late, but I’m beginning to feel it already is…

Source material from FOI requests by Statewatch

QR Podcast – Ash Hynie

I’ve been friends with Ash Hynie for over 10 years ever since we started hanging out in NYC working in QA engineering and developer advocacy. She has since gone on to greatness through various consulting gigs and heading up the DEI program at Credit Karma to now founding CountrPT, an AI powered career management platform.

Check out our chat about career development, performative DEI, being a good manager, and a lot about how to look out for yourself in today’s job market…enjoy!

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Timnit Gebru Emily Bender Culture is More Than a Mindset Agile Testing Days Ministry of Testing Angela Davis

No, It’s Not You, It’s Definitely Me

A quick message for those struggling with the mudslide of artificial intelligence mania and feeling like they are missing out or question the validity of claims being made by AI fanboys.

It is entirely possible to view AI in software testing critically and not have some form of syndrome the AI fanboy club would suggest is wrong with you. In fact, it is absolutely a requirement of the testing community to look at any claim with scepticism, as IME that is a large part of our job as testers.

Yes, things feel like they are moving very fast right now but I can assure you, no serious person in the software testing industry is dismissing the impact or implications of AI in testing.  And for some practical advice to let you know you’re not crazy, here are some serious folks I am following closely on the frontlines: Dagmar Monett, Timnit Gebru, and Ed Zitron who recently wrote this in his great piece “The Phony Comforts of AI Optimism”:

“Criticism — skepticism — takes a certain degree of bravery, or at least it does so when you make fully-formed arguments.”

The current mood in software testing reminds me of a discussion I had 10 years ago with my friend Pradeep Soundararajan, when someone at the conference we were speaking at asked what it takes to improve testing in a company. The “Kung Fu Panda” from Moolya raised his hand and said, “One thing – courage!”

So please bear that in mind when you dive into the endlessly foaming morass of AI fandom right now. For years now, the testing business has abdicated one of its core risk management functions by litigating whether something “should” have been done loooooong after it’s too late. I’m not prepared to make that mistake again…

But for a little fun, here are some signs you might be an AI fanboy…

1. You attack anyone questioning AI as a self-motivated commercial protection racket when in fact YOU are selling AI bots or whatnot…

2. “Testing” in your worldview is the lowest value and most shallow form of testing that’s easiest to automate (but somehow hasn’t been replaced by #testautomation)…

3. You think anyone criticising AI in testing is only afraid of losing their job…

4. Safety, ethics, or harm are never brought into your discussions or quickly dismissed…

5. You think NO ONE critical of AI in testing is doing any serious work despite all evidence to the contrary…

Enjoy and keep testing…!

Rock Bottom?

It’s getting wild out there, folks! I thought we had some way to go before the AI LinkedIn lunatics completely lost the plot, but I read a post today that left me feeling like John Oliver trying to describe the political climate, “Do you see that? Way up there? Way up there above the clouds? That’s ROCK BOTTOM! And we are currently way down here…”

I’m not going to dignify the click-bait by linking to it and driving traffic its way, but needless to say, the number of unverified claims (let alone the unverifiable ones) reads like ChatGPT took a shot of hopium before ingesting a WinRunner user manual and being prompted with, “what would happen to software testing if magic was real?”

I had always suspected that everyone in technology “yearned” to get rid of testers, but we now have written proof we’ve finally been “obsoleted”. As I said before, “it is apparent now more than ever in my 20+ years of working in testing that our business has been completely hijacked by people who have never had to answer for or LIVE with the consequences of their bad quality decisions.”

No one who has actually tested software or complex systems would make statements like this. No one.

Experience has taught us that you’ll never go broke underestimating the amount of money people are willing to throw down the testing wish hole, but please, use some critical thinking before believing or buying this stuff. Even the most casual of research would render these “vague claims” unverifiable and “simplistic”, and the stakes these days are just too damn high.

Double Identity Speak

It’s 2025, and I just had a conversation with a global financial services company where we discussed software testing in terms of <gasp> checking and testing, tacit and explicit knowledge, test automation in testing, and the “Weinberg-ian” (is that a thing??) principle of human-centric testing…

Guess what? They loved it and said it put testing and what it can (and cannot do), in a perspective they’ve never had before. Go figure…

The point is, if your words work for you and help gain understanding with your testing customers, keep using them…if they don’t then find some new ones. As for me, I’m going to keep on with testing/checking, tacit/explicit, and all the Jerry-isms I can mine…enjoy!

Go Forth and Interview

“Learn from the mistakes of others. You can never live long enough to make them all yourself.” – Groucho Marx

A mistake I see a lot of new (and not so new) managers make is worrying about their team members looking for new work or actively/passively interviewing. IME people leave organisations mostly because of bad management (lack of integrity, abuse, no transparency, low trust), but it’s also really hard to know exactly why they are leaving.

Obviously someone actively looking for a new job is a pretty good heuristic that it’s time to review your operation, but it’s not always a bad thing for either you or the person considering leaving.

I have never been threatened by people testing the market or actively looking for new work, in fact I encourage it – here’s why:

– My people should know their worth if they are going to be happy and productive in their work. If they are constantly wondering about their street value, it’s best for them to go and find out, and then we can either address it or they’ll know they are being compensated fairly.

– Having a high trust environment means your folks should be able to talk to you about anything – including leaving. Sometimes if I’m at an impasse trying to help someone through a problem at work, I’ll even offer it as a possible solution and encourage them to look in the market.

– IME it’s a very small world and the software testing world is even smaller than most, so building that relationship and trust with someone has only ever helped me when hiring. Sometimes I’ve even helped people prep for interviews or prepare their portfolio of work, which again builds a level of trust that isn’t ordinarily found in the workplace.

I get why a lot of people don’t feel like I do about this and the adage holds true that “if you think it’s expensive to invest in people, wait until you have to hire new ones”, but I’ve had pretty low attrition in my teams over the years and I think this approach plays a part.

What Could Possibly Go Wrong

This is exactly the type of abuse of untested technology that is virtually impossible to unwind once it’s been deployed.

The knock on effect will be felt in communities and systems far beyond its intended use, because if you think this won’t target the most vulnerable and be expanded well beyond this initial brief, you haven’t been paying attention.

From the article: “The effort — which includes AI-assisted reviews of tens of thousands of student visa holders’ social media accounts — marks a dramatic escalation in the U.S. government’s policing of foreign nationals’ conduct and speech.”

Special thanks to all the AI in testing fanboy enablers who equate critical thinking and valid concerns about safety or harm with being a luddite. This is about to get a lot worse…

You Say Fitch Cronin

A couple of weeks ago on my podcast, I was talking to James Bach and we were discussing the language of testing and what I refer to as “word policing”. Well, I think I might have caused some confusion based on the feedback and comments I’ve been getting, so I wanted to state my view.

To be clear, I think imprecision in the language we use to talk about testing is a BIG problem in our industry, and I am completely aligned on this with James (and the CDT/RST community). Where we might have a potential difference in opinion is WHEN to tackle this problem with clients.

If you listen to what I said in the interview (and what I’ve been saying my entire career), it’s that I have not had good experiences with helping people do better testing by STARTING with how they talk about testing. I have found that the culture in some parts of the testing world, the influence of large commodity based testing vendors, and the rot caused by certification rent seekers makes that a very toxic place to begin working with people.

I am also a strong believer in meeting people where they are and having the semantic discussions to understand what people mean by the words they use – not attacking them for not knowing terms of art in a business that’s constantly evolving.

In my opinion, the culture wars in the testing world went into overdrive during the “Testing and Checking” public discussion and have had a knock-on effect we’re still feeling today. In reality (and if you actually read what was written), that debate did a great service to our industry, and personally gave me a new entry point for speaking with non-testing folks about automation in testing and tools and how that relates to business risk.

Which brings me to my point about all of this: IME/O it is much more important (as the good Dr Feynman would say) to know “the difference between knowing the name of something and knowing something.” Precision in language and understanding what people mean by the words they use are both equally important when I work with clients, but knowing how to steer those conversations and the value of changing behaviour takes discernment – not brute force.

As my friend Michael Bolton often says, “good luck with your thing”!

QR Podcast – Back Issues

As part of my effort to publish my back catalogue of recordings on the software testing business, I’ve added to the QR Podcast some old stuff including roundtables, conference talks, and guest spots on the odd testing show. It’s been pretty interesting listening to them and hearing how much some of my opinions/approaches have changed and what’s stayed the same. Enjoy!

Note: The quality of the audio on some of these was pretty sketchy, and I’ve tried to fix the sound and limit the background noise as much as possible as the content and participants are extremely good and worth a listen…

ISO 29119 Round Table

Here’s a roundtable discussion I hosted between veterans of the software testing industry (James Christie, Michael Bolton, Iain McCowatt, Ilari Aegerter, Griffin Jones) on ISO Standard 29119. James gave his talk at the Association for Software Testing conference in 2014, which started a petition to stop the publication of ISO Standard 29119. Not great sound quality, but…enjoy!

STAR East – The Viability of Context Driven Testing

I gave this interview before my keynote at the 2016 STAR East conference on all the Lessons Learned Selling Software Testing. We also get into why context-driven testing is viable, as well as how to discern between wants and needs of your clients and testers…enjoy!

Screen Testing at TestBash Philly

Had a great time talking about testing and “Rocky” at TestBash Philly in 2017 with Neil and Dan from Screen Testing! Yo Adrian…enjoy!

Let’s Test Interview

This is a long interview I did in 2014 with Duncan Nisbet for the Let’s Test Conference which was done while I was running the Global Test Center at Barclays. It captures a lot of the day to day of running that global operation and how we went about trying to balance the business demands with improving testing for everyone…enjoy! (Audio gets better about a minute into it…)

AST James Bach Interview

Driving While Driven: The Way of the Skilled Tester – In 2014 when I was the Executive Vice President of the Association for Software Testing, James Bach and I discussed what it means to be context driven, the role of skills in your development, and how you can become part of the CDT community. This is a long and wide ranging discussion about taking responsibility for your testing, the Context-Driven School of testing, and what it means for every tester to be their own methodologist…enjoy!

Testing in the Pub – Making Better Testers

Talking with Stephen and Dan in 2016 about how we can help testers and the testing community improve and keep learning, including promoting and educating people about better testing practices such as Context Driven Testing, and the transition to better testing, particularly in the enterprise…enjoy!

Engineering Quality Podcast

As a regular consumer of everything “testing industry”, I’m always sceptical when something new comes out, but I’m really enjoying the Engineering Quality Podcast. I’ve known Alessandra Moreira for years and she’s an expert at this (Royalee Martin and Veronika Pliusnina co-host), so I would recommend checking them out.

QR Podcast – James Bach

James Bach and I have had too many conversations about testing to remember in our over 20 years of working together, but we finally found some time to sit down and record one. If you’ve got the time (to listen or watch), in the 2+ hours we reminisce and discuss Rapid Software Testing, the testing industry, AI, context-driven testing, ethics, and training horses! Below are links to some of the different people and stuff we mentioned…enjoy!

Satisfice, Rapid Software Testing, Never Forgive Them, Michael Bolton, James Christie, Pradeep Soundararajan, Association for Software Testing, BCS SIGiST, Venkatesh Rao, Agile Manifesto, Enshittification, AI Manic Syndrome, EU Liability for Defective Products, UK and US Refuse to Sign International AI Declaration, SIBOS, GDPR, Consumer Financial Protection Bureau, Amish Tech Support, Voltaire, Paul Holland, ISTQB, Jerry Weinberg, The Secrets of Consulting, QR Podcast – Jerry Weinberg, SNL Barber Skit, Impact of GenAI on Critical Thinking

Diversity, Equity, and Griftopia

As Accenture joins the growing list of tech companies (Meta, Google, Amazon) ditching their #DEI programs, we can see clearly (as the numbers have told us for YEARS) the grift it was the whole time.

As the gutless fold under the slightest provocation, the curtains draw back from the show they were putting on and reveal they never really cared in the first place.

Frankly, those programs never moved the needle in a meaningful way and remind me of something I wrote 10 years ago when they started kicking them off:

“I’ve sat on multiple senior executive boards discussing the progression towards our targets in a room comprised entirely of middle aged, white men. Worse than that, two and three layers deep into the org charts the demographics looked exactly the same – and no amount of target setting is going to change that fact.”

The good thing is, whatever they do organizationally doesn’t affect what WE can do personally, and my commitment is to continue to endorse, sponsor, mentor, promote, and hire from as diverse a community as I can find.

Rant over, now back to work…

The Great Liberation Part I: Software Quality Management in the Age of AI

“To be in opposition is not to be a nihilist. And there is no decent or charted way of making a living at it. It is something you are, and not something you do.” – Christopher Hitchens, Letters to a Young Contrarian, 2001

You’re not wrong, everything is getting worse…

Sigh…I’ve had my finger on the “publish” button for different versions of my initial thoughts on AI in testing for far too long, but this great post by Maaike Brinkhof finally pushed me to get it out there. This first one is context setting for a series I’m writing about AI, incorporating it into software testing, the dangers and opportunities, and what I see as a general failure of “duty of care” by our industry.

So much great writing and research has been done by Michael Bolton, Harry Collins, Dagmar Monett (and too many others to mention) that I can’t keep up with it all, so there doesn’t seem to be much point in my adding to that pile by getting into the details of the technology and all the unverifiable claims being made. Instead, I want to focus on the business risks associated with AI in testing, logical approaches as organisations go headfirst into the mist, and what I think are common sense ways to work with these new tools.

In my opinion, the pace at which people are trying to incorporate “artificial intelligence” into software testing is well past ludicrous speed and quickly approaching “plaid”. Even the halcyon days when the odd tool fetish was the only thing compromising testing seem mild compared to today’s rabid AI bell ringing of every vendor, tool jockey, supplier, and LinkedIn lunatic. This foaming at the mouth is despite the immense ethical and technological challenges AI presents for the testing business and an IT industry that is so far out in front of regulation, they don’t even seem to be playing the same sport.

The mask has been off the major tech players for some time as they disband their ethics teams and software testing vendors rush to build “solutions looking for problems” – all by design. I think it is apparent now more than ever in my 20+ years of working in testing that our business has been completely hijacked by people who have never had to answer for or LIVE with the consequences of their bad quality decisions.

I’ve been saying for years that despite all the “advances” in software testing technology, we and our machines aren’t getting better at testing – we are getting dumber and our expectations eroded by bad software and processes. I’ve long believed in the principle of “Enshittification”, but I think Edward Zitron put it better:  

“We, as people, have been trained to accept a kind of digital transience — an inherent knowledge that things will change at random, that the changes may suck, and that we will just have to accept them because that’s how the computer works…”

Well, I think with all this AI sand getting chucked into our eyes, our industry and the general public are about to get their comeuppance after decades of caretaking technology management (no moral hazard in what we unleash on customers), and the vast majority of people I speak to in this business are not prepared for it.

Leopard, meet face…

As someone who has worked in an industry that’s been undervalued, commoditised, downsized, automated, and abused by every wave of “do we finally get to fire all these testers” technology, I would be lying if I said the “they took our jobs” noise coming from some of the development community hasn’t put the occasional grin on my face. But I would take a bucket of salt with the stories that Meta, Salesforce, or whoever is actually trying to replace programmers with AI, as IME, all the hype is being used as an opportunity to cut costs, repeal remote working policies, and trim operational management more than usher in some new wave of engineering resources. But right on trend in the testing world, what we’re presenting as “testing” and “AI” is not only not that valuable, it can also accelerate bad outcomes and increase risk to your business.

It’s a little hard to keep up with the shifting sands of AI in testing right now as vendors frantically search for a viable product to sell, but I don’t see prompt engineering for test automation or anything else in the market right now as anything other than the lowest and least valuable fruit. Companies have been trying to get rid of humans in testing for decades, but outsourcing critical thinking to LLMs is about as closed a loop as you can get and runs big transitional risks.

Most of the clients I speak to these days haven’t incorporated an AI element to their test approach and frankly, the distorted signal coming from our business hasn’t helped. What I’m hearing from clients are big concerns around data privacy and security, transparency on models and good evidence, and the ethical issues of using AI in testing.

That gives me some hope that people are thinking critically about AI in testing because, as with the recent rollback or cancellation of DEI programs, the least represented communities are traditionally the most harmed and need our highest duty of care.

Inside every cynical person is a disappointed idealist…

“If the rise of an all-powerful artificial intelligence is inevitable, well it stands to reason that when they take power, our digital overlords will punish those of us who did not help them get there. Ergo, I would like to be a helpful idiot. Like yourself.” – B Gilfoyle

I’ve spent a good part of my public career in testing talking about risk, how to communicate it to leadership, and what good testing contributes to that process in helping identify threats to your business. So I’m not here to tell you “no”, I’m here to talk about how we manage through the current mania and what I think are the big rocks we need to move to get there – and, inverting Carlin, maybe find some idealism in my cynicism.

Testing is more than test cases. So as I patiently wait for the regulars to call me a Luddite (sigh), or say I’ve only worked with juniors (womp, womp), or claim my opinion is purely self-interest in keeping my “manual testing” job (the lack of creativity from some of these folks is astonishing), I’ll move on to the next one in this series: The Great Liberation Part II: The Search for the Real In Software Quality and Testing

Enjoy!

And So We Beat On

Love this on LinkedIn from Michael Bolton where he mentions my interview with Jerry Weinberg in response to the comment: Just because certain visible actions are a part of a process doesn’t mean that they ARE the process. Why is this misunderstanding so common?

His response is fantastic…

“Several years ago, on his Quality Remarks podcast, Keith Klain interviewed Jerry Weinberg. He asked Jerry why some people seemed eager to apply manufacturing models to software testing.

To your question, I’m going to give the reply that Jerry gave: “Because they’re not very smart.”

The people who keep making that mistake simply don’t understand the process. They haven’t even learned to observe and evaluate the visible parts of a process. They don’t consider the tacit knowledge or skills required to perform it.

The problem is compounded by the inability of many people performing the process to describe it accurately and articulately themselves. And THAT’s compounded by the fact that poor observers and evaluators often hire people who aren’t very good at performing or describing the work.

“And so we beat on, boats against the current, borne back ceaselessly into the past.” —F. Scott Fitzgerald

The good news is that people can learn the skills required to comprehend processes, given motivation, time, study, training, practice, and support.”

Here’s the interview with Jerry…enjoy!

Confidently Incorrect

My reaction when I see the TMMi / ISO29119 / testing certification / #yourenotdoingmymodelright crowd doubling down into AI after decades of getting software testing wrong…

Taking Testing Seriously: The Rapid Software Testing Way

Very happy to announce I’ve been honored to contribute a chapter to Taking Testing Seriously: The Rapid Software Testing Way by James Bach of Satisfice and Michael Bolton of Developsense, co-creators of Rapid Software Testing (RST). Over the course of my career hiring and managing 1000s of testers around the world, RST has been the most valuable training I’ve encountered, as well as providing a useful language for speaking to management about testing and risk. The chapter outlines a discussion where Michael and I explore my experiences running global teams, the role of testing in managing risk, and what I think it takes to make it as a test manager.

You can pre-order the book for the testers (or anyone) in your life on Amazon now – just in time for the holidays! Enjoy!

Taking Testing Seriously: The Rapid Software Testing Way

Dive into the world of expert software testing with Taking Testing Seriously: The Rapid Software Testing Approach. This book arms software professionals with the knowledge required to master the Rapid Software Testing (RST) methodology. Written by two co-creators of the RST approach and supplemented by material from respected testers who offer valuable insights, it is an essential read for anyone seeking excellence in the craft of testing.

Managing Performance in Test Teams

Ahhhh, it’s that familiar time of year when the smell of Performance Reviews and Goal Setting is in the air! I’ve managed loads of testers over the years and get asked to write a lot of feedback for folks. You’ll hear on LinkedIn the popular advice to take ownership for your career and depend on others as little as possible.

But unfortunately, most corporate performance management processes are fundamentally unfair and biased towards people who have great advocates in the system – people who are speaking about you when you are not in the room. Too often the power dynamics in performance management tell people to advocate for themselves and then subject them to grading “curves” and corporate structures that aren’t transparent enough for you to align your objectives.

My best relationships with people I am “managing” always start with openness and honesty about where we are and what I can do for you. It’s an important relationship and I take it very seriously as people are trusting you for that advocacy. Personally, for me to do my best work I need to know the folks I’m working with share values and principles that we’ll hold ourselves and each other accountable to.

For all my Test Managers out there (or anyone responsible for a test team’s performance), here’s a model I’ve used for getting the best out of testers and a lens to look at your objectives/performance…enjoy!

Managing a Team of Testers

STOP: Thinking that the value of the test team is in anyone else’s hands and pretending “maturity” driven test metrics or TPI programmes will make improvements…

START: Telling the team exactly what’s expected of them supported by systematic training of testing skills, test reporting and business alignment…

CONTINUE: Driving out fear of failure by creating an environment that enables innovation and rewards collaboration through strategic objectives and constant feedback…

Performing on a Test Team

1) Honesty

☑ With ourselves and with each other – do not tolerate dishonesty…
☑ Transparency about confronting our strengths and weaknesses…
☑ Self-reflection…

2) Integrity

☑ Learn from mistakes to earn the right to have an opinion…
☑ Provide clear and constant feedback…
☑ Do not lower the bar…

3) Accountability

☑ Take ownership for getting things done – at all levels of the team…
☑ Understand “value” in your business…
☑ Manage your own expectations…

Modern Problems

Integrity doesn’t matter until it does, and frequently testers conflate compromising on what to do with the information found during testing with a compromise on the integrity of the testing. Over the years I’ve seen this cause a LOT of burnout in testers, as on high profile, politically charged projects in particular, it can feel like disagreeing with the outcome of a business decision on risk, defect resolution, etc. is a personal attack on your role as a tester.

To be clear – I would never compromise on the quality or integrity of the test approach, execution, or reporting on risk, results, or advice on what to do with that information. As my good friend Michael Bolton has said, lying to my clients is not a service I provide, and there are loads of people in this industry who are more than happy to give (and read) “happy path” reports on passing tests.

As we hurtle towards AI-driven testing, more than ever we need professional testers with experience in navigating complex systems and communicating risk. It may seem that there are more than enough bad actors willing to tell companies what they want to hear about software testing, but I’m confident (and IME) eventually they’ll come around to the realisation of the real fight – maintaining integrity in our test approach.

EuroSTAR 2024

Thanks to the fantastic team at EuroSTAR for the great time and a really well run conference. I had a lot of fun catching up with old friends and meeting a lot of new, enthusiastic testers. It was great to be back in beautiful Stockholm and hope to see you all again soon!

Links to the reference material I spoke about in my keynote can be found HERE: (Rethinking Expertise; How Complex Systems Fail; The Secrets of Consulting)

* Special thanks to my friend Michael Bolton for pulling me out of my conference retirement…the trick worked pal 🙂

The Center Left: Testing Software in the Age of Transformation

Ten years ago, I gave a keynote at EuroSTAR on how to overcome organisational bias against the value of software testing. Despite all the advancement in process and technology, our business still struggles with its value proposition and sense of itself.

Through this talk I’ll discuss the principles and practices I’ve employed to successfully sell testing services and manage high performing teams. I’ll also walk through case studies of what has worked when talking about testing and what it can and cannot do for your business.

What you will learn

  1. What worked & what didn’t when trying to get software testing valued at an organization, and how to fit actionable plans into enterprise transformation
  2. How to help make testing relevant to people by meeting them where they are, & speaking about our business in a way that doesn’t compromise integrity, while still moving things forward
  3. Principles and practices, and their relevant sources that have helped me gain trust and respect from my clients in delivering critical high risk programmes

And here . . . we . . . go!

In another failed chapter in the never ending book of “encouraging good behaviour”, the G7 have apparently agreed to a “code of conduct” for companies building #artificialintelligence systems. Per Reuters, “the voluntary code of conduct will set a landmark for how major countries govern AI, amid privacy concerns and security risks…” Now colour me cynical, but I think we’ve seen how this movie plays out before. You don’t need to expend any energy finding the billions being invested in new generative AI and other AI systems, which is only piled on the billions ALREADY spent on systems actively in use.

At least the EU is giving the appearance of pretending to govern the use of AI on the public unlike their US and Asian counterparts who “have taken a more hands-off approach than the bloc to boost economic growth.” Good grief!

You can read the code of conduct for yourself and I plan to take a closer look, but here are some thoughts at first pass.

The code “urges companies to take appropriate measures to identify, evaluate and mitigate risks across the AI lifecycle” which seems to all happen after the fact. “Organizations should use, as and when appropriate commensurate to the level of risk, AI systems as intended and monitor for vulnerabilities, incidents, emerging risks and misuse after deployment“.

If using the word “risk” is a drinking game, don’t drive after reading this: “The risk management policies should be developed in accordance with a risk based approach and apply a risk management framework across the AI lifecycle as appropriate and relevant, to address the range of risks associated with AI systems, and policies should also be regularly updated.” What. Does. That. Even. Mean…? Honestly…

Finally, the code seems to just trail off at the end for a subject that probably requires its own code: data quality and bias. “Organizations are encouraged to take appropriate measures to manage data quality, including training data and data collection, to mitigate against harmful biases.” IME we haven’t even come close to cleaning up or preventing bias in large data sets, and seeing as some pretty big names decimated their #techethics teams, I think all the encouraging in the world isn’t going to make much of a difference.

Once again, public governance of critical systems and emergent technology is woefully behind industry which is already over their skis. As we continue to “ride the insane horse towards the burning stable” of using AI, I think it’s well past the time the software testing and quality engineering community get its act together and quit fighting over how to make AI a better unskilled tester…

Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI can be downloaded HERE

Head, Meet Desk

After a long career of reviewing the various outputs from #softwaretesting: strategies, plans, test cases, #testautomation, etc., I’ve never understood why people think conformance to internal/external standards will effect a better outcome. I realise it’s born out of a fundamental misunderstanding of what happens when you test something, paired with strong wish thinking that software testing is analogous to manufacturing. But IME, content is key for #testing artefacts and frankly, I’ve been around long enough to see how all these standards usually give a false sense of security and year after year add to the “#softwarequalitymanagement” certification grift.

So I wasn’t that surprised to witness another leg of the race to the #artificialintelligence bottom in testing with talk of creating #AI to check your test strategies, etc. for deviations from your internal or external standards. Apparently, all this investment in #ML and #AI is going to be used as some really, really expensive Rube Goldberg machine to #automate the lowest value work in testing!

Some day we’ll get something useful for testing from artificial intelligence, but today is not that day…

(old man shouts at cloud rant over 😉)

Inside Every Cynical Person, There is a Disappointed Idealist

“Inside every cynical person, there is a disappointed idealist.” – George Carlin

Had a great time talking about business risk with the Ministry of Testing folks and how your approach to testing is probably introducing risk to your business. Check out the whole interview or clips on my YouTube channel

Enjoy!

MOT AMA on Risk Management

Had a lot of fun ranting at my friend 🐞 Richard Bradshaw on my Ministry of Testing Ask Me Anything. Hope you enjoy it and LMK if you ever want to talk software testing and risk management.

Check it out here: Keith Klain AMA on Risk Management

Parts is Parts

“Quality is value to some person.” – Jerry Weinberg

Saw this quote misattributed recently, but Jerry Weinberg threw down the gauntlet in his classic book Quality Software Management: Volume 1, Systems Thinking, and the software testing industry has been wrestling with every word in that short sentence ever since. What is “quality”? How do you determine “value”? Who is that “person”?

Over the years those words have been misquoted, modified, and added to multiple times, but I can’t ever remember adopting any of the new language. There have been many attempts to define quality, mostly trying to wrestle software development into manufacturing models and flow charts which IMO, have added nothing to the public debate.
 
The other canard that gets whipped out frequently by <cough, cough> consultants trying to sell you something, is that software quality is measured by the absence or presence of defects. There’s been loads written and said about the impossibility of defect-free software, so my only advice when you hear someone start banging on about that is to watch your wallet!
 
In my business, enterprise tech, the words Jerry chose are loaded with dependencies on context. Sometimes quality means “can we trade?”. Sometimes “value” is an operational cost save we’re trying to meet. Sometimes the “person” is a regulator. But EVERY time we start a project we need to have the semantic discussion of what we actually mean when we are using those words!
 
I might just be getting cynical in my old age, but I think our business could do with a lot of unwinding the “why make it simple when it can be complicated” approach to software quality. I use those words to clarify my approach to testing, to understand what people mean when they talk about software quality, and as a heuristic for ensuring I’m getting as close as I can to delivering the value my testing “client” wants.
 
Might be simple, but it works.
 
As a bonus, I had the honour of interviewing Jerry a few years ago and I think it’s worth a re-listen every once in a while…enjoy!

Lost in Transformation – Interview with Michael Bolton

Last year I had a long conversation with my good friend Michael Bolton for a video he was producing on Digital Transformation for the TiD Conference in Beijing, China in 2022. The whole video is great with an amazing segment with Harry Collins, but my parts are below…enjoy!

“I’m no longer interested in the outcome . . .”

Inside every cynical person, there is a disappointed idealist. – George Carlin

Eric Schmidt just saying the quiet part out loud now…

Last year I wrote about regulators just throwing up their hands at how outpaced they are by industry as evidenced at #SIBOS and the CyberSecurityCloud Expo, but this is a whole new level of hubris. “There’s no way a non-industry person can understand what is possible…there’s no one in the government who can get it right, but the industry can roughly get it right.”

Wow.

Think about all the harm that will be done in that trade-off to “roughly” get it right in this race to put “#AI” into everything. These folks are bordering on out of control and openly admit their intent to run roughshod over the public good so the government can clean up their mess afterwards with “a regulatory structure around it”…

The #softwarequalitymanagement industry better wake up fast and push back against this as it’s probably too late to change course, but warnings should be issued…

#qualityengineering #qualityassurance #softwaretesting #testmanagement #techethics #ethics #artificialintelligence

A Rising Tide Grifts All Bots

“When the tide goes out, you see who has been swimming naked.” – Warren Buffett

I’ve been reviewing a lot of material for a larger piece I’m writing on the use of #chatgpt and various other “#artificialintelligence” tools in #softwaretesting and I have to say one thing as a preview – the state of what the #testing community views as testing is amazingly poor.

Apparently, to most of the testing vendors I see writing about AI in testing, it’s the equivalent of poking a robot with a stick. Testing is nothing more than test cases, test plans, and reports you can dump into #agile, I mean #jira. People complain about the #quality of #software but then expect nothing more from the people charged with testing than the most shallow, basic checks.

In fact, a testing vendor’s approach to #ai in testing is now a great heuristic for what you’ll get from them in regard to quality of testing. “We can now automate #exploratorytesting!” “A test strategy can be generated from AI!” “All codeless/scriptless test automation can be done via AI!”

Hear anything like that from a vendor and there’s a strong chance that their view of quality and testing is so basic you’ll spend more money unwinding the contract than any fabricated time-saving estimates from all their “#testautomation”.

The numbing effect of decades of crappy software has really taken its toll on people’s sense of what is acceptable quality, and the reality is, our expectations have been worn down rather than vendors raising their game. And in the end, as industry sprints far out ahead of the people charged with protecting the public – real people will likely bear the brunt of this ethical malpractice.

The latest “AI” and #machinelearning noise has only aided the tide rolling out and as the grifters surf the wave, we can see clearly who hopes we won’t notice they aren’t wearing any trunks…

Software Testing Job Insurance

GPT-4 saw all that he had made, and it was very good. And there were no more evenings, and there were no more mornings – just endless days…

“GPT-4 can take a picture of napkin mockup as an input and output a fully functional website (HTML/CSS/JS)”

“Manual” Testing

I worked with a client recently who was frustrated by all the “manual” interventions required to run their “100% fully automated” test suite. Irony aside, as an industry we really need to rethink the amount of sales nonsense we let dominate the public conversations around the value proposition of automation in testing. Terminology in our business is usually polarised and emotionally charged, but I’ve always believed that semantic arguments are worth having even if we can’t agree what to call stuff. Examples like this are just a symptom of a greater problem: letting vendors and <shudder> consultants sell us “fully-100%-automated-defect-predictonators” (said in your best Dr. Doofenshmirtz voice) for the last couple of decades. We can do better, and besides, when these ideas get into the business case, we are just undermining ourselves anyway…

The Importance of Good Resources

I’m doing some work on “documenting” my approach to reviewing software testing operations and I keep coming back to a couple of resources. I forget sometimes how much Griffin Jones’ talk “What is Good Evidence” and, amongst several of his works, James Christie’s post “Not “right”, but as good as I can do” have influenced my work. They are both brilliant thinkers and have contributed a lot to the testing profession…enjoy!

Happy Holidays and 2022 Testers!

Just wanted to send the quality engineering and software testing communities a short holiday note of appreciation for all the work you do.

Testing software is hard. Very hard. And not unlike plumbers, your effort is frequently not appreciated when things are working and first to be criticized when things go wrong.

Years ago I wrote a post trying to define why I like the business of software testing and specifically working with testers and why our work is so difficult.

“Testers spend their days trying to figure out what “might” go wrong by looking for ways a product is already broken – staring into the cosmic abyss of the impossibility of complete testing for all of us takes its toll. All the while competing in an industry teeming with unenlightened vendors, consultants and “experts” undermining their own value proposition by selling “bug free” methodologies, certified super-tester training programs and “automated algorithmic defect predictonators”.”

It doesn’t have to be like this, but it is, and there are lots of us fighting every day to make life better for testers while you make our systems safer, more reliable, and more equitable for the people who use them.

So from me, to you, thank you for all your hard work and know that I see it and appreciate it, even if a lot of the world doesn’t always.

Wishing you a happy holiday season and great New Year.

Cheers!

Damian Draws Strangers

Can’t love this enough! Many thanks to Damian Synadinos for the amazing artwork he does at I Draw Strangers. His drawing perfectly captured my “not you again with the same bullshit” look, so check him out and give him a commission to counter all this AI art crap! Cheers!

Keith Klain - QMC

Letting Sleeping Watchdogs Lie

I’m working on a larger piece on #artificialintelligence, testing and ethics, but I had to say something about the really depressing panel on #ethics and #AI I attended at the Cyber Security and Cloud Expo in London today. The conference in general (Cyber Security & Cloud Expo | Technology Conference | London (cybersecuritycloudexpo.com)) was a great event and really well done, but I heard two comments from the “Keeping it Ethical in AI” panel that left my mouth hanging open.

The first comment was about the repercussions for companies that are found to be operating unethically with their #artificialintelligence systems, which paraphrased amounts to, “it’s better to engage in dialogue about doing better rather than naming and shaming companies”. Eh? These “watchdog” groups have basically no teeth to begin with, and the only real one they have is the ability to inform and drive public opinion about bad actors. What else are they going to do?

The second comment really blew me away, at the breadth of both the naivete of where we actually ARE as an industry with AI usage, and the carelessness of the forward-looking view. “Technology advancement should not be inhibited by our inability to keep up with it ethically…” Wow. And this was a panel of people tasked with ensuring AI is used ethically!

There are very serious people like Timnit Gebru and Harry Collins waving red flags around AI about algorithmic bias and the limitations of our technology, and now is not the time to be capitulating before we’ve even started. The real concern here is that the AI horse has already left the ethical barn and these folks are describing the grass! The use of AI in fintech is forecast to grow from ~$14.7b to ~$42b by 2027, so let’s not pretend it’s not already pervasive in our lives.

Artificial Intelligence in Fintech Market: Global Analysis and Growth Forecast to 2027
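For a sense of scale, that forecast implies a compound annual growth rate north of 20%. Here’s a quick back-of-the-envelope sketch (the five-year horizon assumes a 2022 baseline for the ~$14.7b figure, which is my assumption, not stated in the report):

```python
# Back-of-the-envelope CAGR implied by the quoted forecast:
# ~$14.7b growing to ~$42b (assumed horizon: 2022 -> 2027, i.e. 5 years).
start, end, years = 14.7, 42.0, 5

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 23% per year
```

Nearly tripling the market in five years, in other words, which is exactly why the grifters are paddling out to meet the wave.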

Anyway, rant over, but if this is the level of thinking from the people tasked with looking out for our ethical interests, we might be worse off than I initially thought…

Feynman on the Scientific Method

Love this video of Richard Feynman explaining the scientific method…loads of stuff in there for #softwaretesting – Enjoy!

Ethics in Tech

Time to stop blaming everyone else for the ethical problems in tech and refuse to build things that cause harm BEFORE they impact people…

Software Testers Survival Guide: Interview Tips

The secret of life is honesty and fair dealing. If you can fake that, you’ve got it made. – Groucho Marx

The hits just keep coming these days, with new tech sector layoffs being announced seemingly on a daily basis. And as we limp into 2023, there are rumors that this is more than just a reaction to the market, but a new era of austerity in tech hiring being ushered in. In light of this, I wanted to get something out to folks on interviewing tips to accompany my advice on how to stand out as a tester during economic downturns. For a more in-depth analysis on managing your career in testing, I would point you to stellar works from Benjamin Kelly and David Greenlees, as well as what I consider essential reading on succeeding in tech, Secrets of Consulting by Gerald Weinberg.

I’ve interviewed a LOT in my career, as either a consultant or candidate looking for a job, or as a hiring manager. The recruitment process in enterprise tech is fundamentally broken, but I do think there are ways to differentiate yourself and on reflection, there are things I look for that pique my interest in a candidate. Here are my top tips to stand out from the crowd.

Interview Tip: Answer the question – THEN tell your story…

I don’t know if it’s nerves or autopilot kicking in, but I frequently find myself asking interviewees to pause, take a breath and actually answer the question I asked them. There is nothing wrong (and IMO a good habit) with taking 30 seconds to think about the question before giving your answer. There will be plenty of time to support your answer with examples, but make sure you understand what the interviewer is asking and are hitting the nail on the head FIRST before adding details that may not be relevant.

Interview Tip: Understand the commercial implications of testing…

Software testing is part of the business, and that business has implications for operational costs and controls – some we can influence and some we cannot. I am always impressed when a tester demonstrates a commercial awareness of the costs of testing (or not testing) and can speak to real world examples of the cost of quality.

Interview Tip: Relate what you’ve done to a risk the company has faced…

Want to impress someone recruiting testers? Give them an example of how you would test one of their systems or even better, when it’s time for your questions, ask them “How did you test system XYZ, because looking into it, it must have been difficult to simulate…” You get the idea.

I’ve spoken about this for years, but you can also pull the company’s financial statements and review them for risks you could test around. It shows an initiative that is rare and also demonstrates you get business risk and not just project risk.

Interview Tip: Have a view on the testing industry…

Finally, I always ask testers how they keep up to date on the latest techniques, tools, and thinking about software testing and let me tell you, the answers are usually pretty grim. Our business is relatively young and most of what you’re going to learn about it is through blogs, conferences, and networking. IMO it’s vital to have a good list of resources (mine is here) to draw on and speak to in order to stay current.

These tips won’t always land you the job, but as an experienced interviewer/interviewee they’ve helped me throughout the years and hopefully will give you some ideas on how to get noticed. Enjoy!

All Models Are Wrong, But Some Are Useful

In light of the current state of hiring in tech right now, this talk seems pretty relevant again and hopefully sheds some light on how test teams are often reviewed for performance…enjoy!

“Over the course of my career I’ve reviewed the performance of countless software testing organizations, test teams, and testers looking for ways we can improve. Typically, the first suggestion when asked “how can we improve the state of testing here”, usually relates to something that OTHER people should do. Very few people or teams take an introspective approach to improvement based on their own values and principles. This talk is the review model and heuristics I use to identify things that are working and areas for improvement including efficiency, process improvement, and aligning your test approach for relevance to your business.”

SIBOS 2022: Everything, everywhere, all at once

In Daniel Kwan and Daniel Scheinert’s brilliant film, “Everything, Everywhere, All at Once“, as Michelle Yeoh’s consciousness jumps universes, she’s confronted by a version of her husband who tells her, “When I choose to see the good side of things, I’m not being naive. It is strategic and necessary. It’s how I learned to survive through everything.” As someone whose career has focussed on software and system quality, I couldn’t help but feel something similar when returning from my trip to Amsterdam last month to attend SIBOS.

This was my first time attending SIBOS, and for those of you who aren’t familiar with the conference, it’s been THE financial services event since the ’70s, attended by every firm you could think of in banking and #FinTech. If there’s a multiverse, SIBOS is the white-hot core of where we are now and a preview of where we’re heading, and if 2022’s event was any indication, things are getting more complicated at an alarming pace and hardly anyone is thinking about how any of this is going to get tested.

I attended multiple sessions on how artificial intelligence, crypto, CBDC, and open banking are driving new and interconnected services, and a few even mentioned the “plumbing” for this to happen. Disappointingly, I didn’t hear any talk about stability, quality, or the risks of trying to make all this work on existing infrastructure, let alone how all this would impact society (other than how great it was all going to be…).

To be fair, I did watch a brilliant panel discussion on “The Future of Money” moderated by Leda Glyptis featuring Eileen Burbridge, Liz Lumley, and Matthiew Favas that was deeply grounded in reality. The panel spoke at length about the need for compassion and inclusion in financial services and the ethical implications of all the change being foisted upon the multiverse.

Unfortunately, that sentiment was not shared throughout the rest of the conference, and I sensed a detachment from reality from the financial environment we are in and the implications for the future. Software and system quality need to be front and centre in these plans and the ethical concerns must be considered BEFORE we start building the “solutions”. My concern, based on my experience, is that when projects face financial headwinds the first thing they cut is testing at the expense of quality and to the detriment to the most vulnerable in our communities.

A real-time example of that financial trade-off in the face of difficulty appeared in every talk I attended on ESG, referencing the recent decline in investment in environmental, social, and corporate governance funds. There are loads of challenges facing ESG around standardisation, “greenwashing” and accelerating the rate of change to keep pace with climate and societal pressures, but we cannot deprioritise investment in those initiatives in the face of tightening budgets. My fear is that as these economic realities hit FinTechs, they won’t alter or delay their plans but plough ahead and, in some cases, double down on scope and timelines before the money dries up.

From experience, I can tell you that integration of new technology into legacy banking systems is frequently an underestimated task, and often decommissioning never happens, causing complexity risk to increase. Open banking can also carry high risks of fraud, money laundering, and other types of financial crime that have to be considered when testing in an open API environment. Sharing personal data and banking records, although rightly considered by the standards being implemented, adds to the functional and security requirements when connecting these systems in the real world, beyond difficult-to-model test environments.

Software testing is hard, and releasing quality systems is even more difficult, but the consequences of getting it wrong these days have never been greater. Software is pervasive in our lives and only increasing in depth and breadth, and innovation and efficiency are essential to democratizing finance and building a more just society. It is long past the time that the quality and ethical implications of the systems we build take priority over the functional possibility of constructing everything, everywhere, all at once.

Rhymin’ and Stealin’

Think the state of the software testing industry has improved much? Guess what year this slide about the “challenges” facing testing was presented, and compare it to the 2021 “State of Testing” report. History may not repeat itself, but when it comes to our business, it sure does like to rhyme… 🙃

Software Testers Survival Guide: How to Get Through an Economic Downturn as a Tester

They are stressful, usually unnecessary, and IME always the fault of bad leadership, but from a quick read of the news and what’s happening in the tech sector, it looks like layoffs are back on the menu. In my over 25 years of managing software testing in enterprise tech, I’ve experienced layoffs in every single one of them. For some folks in tech companies, this may be your first time living through this, and testers are usually the first to go, so here are my quick tips for increasing your value to your business to avoid the cut.

1) Be able to demonstrate your test approach ALIGNMENT to business risk…

Testing that is aligned to business risk (not just project risk) supports more than just technology delivery, is much more valuable, and can drive innovation or identify opportunities for investment. Do you know your business strategy? (Have you read it, or do you even know where it is?) Does your test approach address systemic BUSINESS risk, or does it INJECT risk into your business by only focusing on project delivery? Repurposing test management into a strategic role beyond just functional management of testers should answer questions from your CFO about the cost of quality or defects in production. “Managers of managers” are the first thing I look for when optimizing a test team and, in a reduction of force, are usually the first to go.

2) Tie efficiencies you’ve made DIRECTLY to operational savings…

Operational budgets are being squeezed right now and testing investment decisions need to be targeted around gaps in coverage related to usage, monitoring, and risk. Headcount is the only currency that management trades in, so look for duplication of effort or redundancy in the test pipelines and be able to show actual savings (not the usual test automation ROI metrics). Non-prioritized test coverage, large regression test automation suites, and process inefficiency (meetings, reporting, automation, test management) are target-rich environments to cut the delta between what we need and what we are getting and show real cost improvement.

3) Report on INSIGHTS not just data…

The information gathered during testing can and should be focused on business intelligence that can be used beyond release decisions, but too many testers still just regurgitate test case centric reporting. Testing reveals loads of detailed, in-depth information about delivery processes, waste, your business operations, and how you allocate capital, and all you have to do is OBSERVE this information and report on it! Unironically in a downturn, disruptive management and business decisions get made with “out of focus” views on an organization’s delivery capabilities and expectations – testing can help fix that! Test reporting that includes operational insights/intelligence as part of an overall approach to risk management greatly increases the value of your test team as well as reducing mismanagement of expectations.

Unfortunately, in the testing business, we are still subjected to a lot of bad ideas about our craft, its value, and the operational costs of managing quality. I empathize with the people impacted during this industry-wide series of layoffs, but hopefully these tips can help you articulate your value to your organization.

Second Verse, Same as the First

What do Amazon, Apple, Google, Meta, Twitter, Lyft, Stripe, (just to name a few) all have in common? Layoffs.

I’ve worked in enterprise tech for over <coughs> 25 years, and experienced layoffs in every single one of them, and they are always due to poor org structure and over optimistic growth projections. What’s going on with Twitter and the broader tech sector right now has been coming, IMO for at least the last decade (if not longer) – and the blame falls solely on leadership. But as always happens, employees will bear the cost of those bad management decisions at the price of their jobs and security…

IME in enterprise tech, “nice to have” roles, e.g. non-directly revenue attributing roles (advocates, process oriented, governance, etc.) are a heuristic for an org slipping off its revenue mission and will be the first ones to go. You don’t have to like it or agree with me, but I’ve seen it time and time again, and poor org design does not protect these important jobs (and the people who do them) from the firing line.

And to add Twitter insult to injury, in a real-life “frog and scorpion” moment, Jack Dorsey clearly knew something bad was going to happen if Elon got his hands on the place, and was warned off by the board. To put a fine point on it: the layoffs at Twitter right now are as much Jack Dorsey and the previous leadership team’s fault as they are Elon Musk’s. (Texts between Jack Dorsey and Elon Musk via Twitter vs Elon Musk, 2022)

What’s happening with Twitter right now has been happening in enterprise tech for as long as I’ve been around and it’s not the fault of the employees – it’s the result of years of bad management, poor org design, and exuberant hiring that fuels a “boom and bust” operating model. As usual, PE/VC and the founders have already made off with their money and the people doing the work get left holding the bag, so can we PLEASE mark this as the final tombstone for “tech-bro” worship?

Maybe . . . Just Maybe

When someone in the software testing business is warning against over-reliance on test automation, metrics or AI for assessing system or software quality, that doesn’t mean they are against automation in testing or to be treated like a luddite. Maybe….just maybe….you should think about it the same way that people like Timnit Gebru and Harry Collins warn against artificial intelligence – as people with deep experience who see ethical and systemic risk beyond the superficial and immediate returns. #justsaying

SIBOS 2022

Really excited to be attending SIBOS next week in Amsterdam! Come meet me at KPMG UK booth F15 to talk about #qualityengineering and #softwaretesting in the world of #digitalpayments and #fintech. Hope to see you there!

Digital Transformation Podcast

I had the pleasure of discussing all things software testing and how quality engineering can support and accelerate digital transformation with Kevin Craine on his Digital Transformation Podcast you can check out HERE. We covered a lot of topics from test automation to strategies to deal with disruption, so check out the interview and enjoy!

Feedspot – 35 Best Digital Transformation Podcasts

Player FM – Best Digital Transformation Podcasts

Top 8 digital business podcasts to follow in 2022

Dr., Heal Thyself

“Performance measurement – the most powerful inhibitor to quality and productivity in the Western world” – Robert D. Austin

I’ve reviewed a lot of test teams in the Enterprise Tech world, and I get asked a lot how I go about looking for problems or how to identify opportunities to get more value out of your approach. The following is an attempt to sketch out the model I use to frame those reviews and look for areas for deeper investigation. All models are wrong, but I’ve found this one to be useful, so hopefully you can get some benefit from it too. But first some context…

“No amount of process improvement is going to solve your underlying problem: org dysfunction”

To paraphrase Jerry Weinberg’s Secrets of Consulting, “there is always a problem and the problem is always people”. People are the biggest contributor to your context and guess what, they don’t always do what management TELLS them to do, but they frequently do what management SHOWS them to do. The greatest determining factor in the success of your test approach is how people are comped. People do what they “perceive” they are being compensated for (salary, bonuses, attention, perks, etc.), so my advice, if you want to improve your test approach, is start with why testers think they’re getting comped and look for misalignment to your strategy.

People

I always like to start my reviews trying to understand how the testing team or testers approach their work and how deeply they think about it. A big contributor to general malaise in a testing team is an org constantly questioning the value of its testing. The symptoms of teams who have a shallow view of their value are common in outsourced/offshore testing teams (or the dreaded Testing Center of Excellence!).

Questions I like to dig into with teams that have a perceived low value from their business:

  • Can they answer questions about the business/management?
  • Do they only think about what’s directly in front of them – apps/tech?
  • Can they model their user base – demonstrate any interactional expertise?
  • Do they study the testing industry? Trends, conferences, social media – how are they learning their craft?

These teams will be doing the job, but only just, and typically suffer from all the classics – slow, inefficient, no innovation, lots of buzzword bingo – but they don’t really understand the business.

Process

In my experience, testing teams that exhibit issues with their process are typically high on fear and low on trust. Unfortunately, I see these symptoms deeply rooted in philosophy, and for an organization, they are frequently terminal. Heavy use of FBM (Fear Based Management) techniques like metric scorecards, process flows, and swim lane diagrams is prevalent in these operations. Despite all the advances in our ability to understand and manage knowledge work, every time I think this approach is dead – I find another company doing it! Hallmarks of testing operations (other than having really new or really old testers) that have trust issues include:

  • Quality police promoting a QA/QC divide
  • Test management offices who “manage the managers” or just color in charts/graphs
  • Negative reactions to agile transitions for “testing services”
  • Prevalence of “wish thinking” – wanting testing to be just about control/confidence

Testers who are obsessed with counting things, or who have an over-reliance on numbers and can’t tell a story through their reporting, typically treat testing as a role and not an activity, which can lead to very low trust and value outside of the team.

Technology

The last area I like to dig into to get a feel for the value proposition of a test team is their use of technology as it relates to their test approach. I’ve long held that there is a segment of the professional testing population that doesn’t actually like testing! This one can be hard to fish out, as testers who don’t like testing are often super enthusiastic about what they do. They often call themselves “embedded testers”, “SDETs” or ”automated testers”. This problem is very common these days due to the current coding-obsessed iteration of “let’s get rid of all our testers”, but primarily because great testing is REALLY hard but fake testing is really easy. Signs you might have a problem with your test team’s approach to technology include:

  • Tool fetishes or a tool-first approach that views testing as a strictly technological problem to solve
  • Your testers can’t defend their work or explain the mission of their tests
  • Their only approach to deciding what to automate is asking “can” and not “why”

Another good sign your testers might have a very shallow view of quality and testing is a dislike for SEMANTIC arguments, or as Dr. Feynman would call it, the “pleasure of finding things out”. IMO semantic discussions are NOT about micro word corrections but coming to an agreed understanding about what we MEAN with the words we use. Hopefully this gives you some behaviors to look for and some questions to ask to try to unlock more value from your test team and approach. Happy hunting!

Test as Transformation – Expectation Management

“One can’t paint New York as it is, but rather as it is felt.” – Georgia O’Keeffe

The last installment in my series on how testing and quality engineering can support a digital transformation program is about one of my favorite subjects – managing expectations. But is it possible to improve through expectation management? Yes. In my experience, through alignment of your ambitions, business capabilities, and operating model limitations, you can absolutely improve not only the internal/external perception of your business but also your delivery capabilities.

Why is expectation management such an easy target for transformation? First, in my experience there is frequently misalignment between stakeholders’ perceptions of each other, and of their IT partners, in terms of expectations versus reality. Very often the business, IT management, and those actually doing the work will have multiple views on what the problems are and their root causes, and their opinions will be heavily anecdotal and biased. This can drive disruptive management and business decisions made with “out of focus” views of an organization’s delivery capabilities and expectations.

“Let us accept truth, even when it surprises us and alters our views.” – George Sand

So how can testing help manage expectations?

Testing should help an organization identify whether their problems are complicated or complex. Complex problems involve too many unknowns and relationships to reduce to rules and processes, yet typically quality is equated with testing and treated as a management problem. In my experience, because of its complexity, software quality can only be managed, not solved, and testers should be providing information and insight into the business (not project) risks associated with an organization’s delivery and engineering capabilities.

We should also be providing clarity on what testing can and CANNOT do, and how automation and tooling fit into our approach. Without opening the Pandora’s box of “testing and checking”, I think, from a professional tester’s perspective of over 20 years in this industry, that the discussion on the difference between a “test” and a “check” is incredibly useful. I have employed it on countless occasions to help IT management and my business partners understand the deltas between how we build, test, and use products, as it is in those expectation gaps that defects and systemic failures lurk.

One more area to look into to help your partners manage their expectations is culture and people. Models of success are most likely to be emulated (money, time, attention, promotion), including bad behaviors like heroics, waste of company assets, and poor risk prioritization/management. How teams are compensated drives most behaviors, so don’t be surprised when you get what you paid for with your big, expensive “test process improvement” program!

Testing is typically subjected to a great deal of organizational dysfunction (cost controls, location strategies, 3rd party risks) and has a direct view of its impact on how we build, test, and operate our systems. Reporting this information as operational insight/intelligence as part of an overall approach to risk management greatly increases the value of our test team, as well as reducing the mismanagement of expectations.

Good luck and happy hunting!

Test as Transformation – Optimization

“What we find is that if you have a goal that is very, very far out, and you approach it in little steps, you start to get there faster. Your mind opens up to the possibilities.” – Dr. Mae Jemison

Every business wants to be more efficient, but what does that mean in the context of your approach to quality engineering and your digital transformation program? The desire for change is greater than ever, with over 80% of companies KPMG recently polled stating they have large transformation programs in place but lack confidence that they have the operating model to support the required changes. And those transformation plans have only accelerated since the pandemic! Inefficiencies within and between business operating models or products are great sources of unrealized opportunities for collaboration, reuse, process improvement, or reduction in redundancy.

But as testers, or those responsible for managing the approach to testing, what should we be looking for, and where, to increase our value to our business?

Some examples of lenses through which to view your work that are continually on the optimization radar for enterprise IT are speed to market and your product delivery models. Getting products and services out to your customers at speed through integrated delivery pipelines and increased automation, paired with increased transparency on risks, can help drive out inefficiencies. So how can testing inform the business about opportunities for optimization transformation, as well as capital allocation and investment in technology?

Redundancy or Inefficiency? (Dr. heal thyself…)

The first way that testers can look to their work to increase efficiency is to find overlapping effort on similar tasks that is unnecessary or non-productive. Far too often I have reviewed quality engineering or testing approaches that rely heavily on built-in redundancy, primarily through unprioritized test coverage and large regression test automation suites. Paired with the complement of meetings, reporting, and test management, the delta between what we need and what we are getting is my go-to “target rich” environment for inefficiency.

Loads have been written about systems thinking, or seeing “the forest for the trees”, but in my opinion, “How Complex Systems Fail” by Richard Cook should be required reading for anyone responsible for managing testing or delivery in enterprise IT. Cook’s view is that systemic failure requires multiple smaller failures, and that complex systems contain loads of “known or unknown” latent failures by design. Testers should use systems thinking and complexity models to look at risks and opportunities through their knowledge of the F2B flows, customer insights, and risk to remove redundancy and waste from their test approach.
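Cook’s point that outright failure needs several latent failures to line up can be sketched as a toy simulation (entirely my own illustration – the five latent defects, the 2% activation rate, and the three-failure threshold are invented numbers, not from Cook’s paper):

```python
import random

random.seed(7)

def system_fails(n_latent=5, p_active=0.02, threshold=3):
    """A system with n_latent latent defects, each independently 'active'
    on a given day with probability p_active. The system only fails
    outright when `threshold` or more are active at the same time."""
    active = sum(random.random() < p_active for _ in range(n_latent))
    return active >= threshold

days = 1_000_000
outages = sum(system_fails() for _ in range(days))
# Latent defects are individually common, yet coincident outages are rare:
# in expectation about 78 outage-days per million here.
print(f"{outages} outage days in {days:,} simulated days")
```

The takeaway for a test approach is that chasing each latent defect with a redundant check is wasteful; the risk lives in the combinations, which is where systems thinking earns its keep.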

Org Dysfunction – Identification and prevention

One of my favorite books on the sources and effects of org dysfunction is “Measuring and Managing Performance in Organizations” by Robert Austin. In his book, Austin talks a lot about applying targets to measures, the effect that has on organizations, and how models or processes that are inappropriately targeted drive organizational dysfunction. In my experience, most testing measures and metrics drive dysfunctional behaviors in teams, and yet we continue to use them: test case counts, pass/fail ratios, test case efficiency. Along with agile velocity, invalid measures undermine our relationships with customers and our business when we “over promise” or use an inaccurate understanding of IT processes to make commitments. The insights provided through testing should increase clarity about our delivery capability, not muddy the waters or create data-driven distractions.

These are just a couple of examples of how your approach to quality engineering and testing can improve or complicate the optimization of your digital transformation program. Testing reveals loads of detailed, in-depth information about delivery processes, waste, your business operations, and how you allocate capital, and all you have to do is OBSERVE this information and report on it!

How can software testing support business and digital transformation? – Software Testing News

“Testing sits at the heart of the processes required to enable and fast-track business change, so the information it generates can be used to shift how an organisation operates, help redefine its value proposition, or change how it competes in the market. Testing specifically, is closely aligned to business strategy and supports more than just technology delivery. The management information (MI) produced by testing can also be leveraged to drive innovation and identify opportunities for both risk reduction and investment.” Read full article HERE

National Software Testing Conference, UK 2021

It’s been a while, but it sure felt great to get in front of a live audience again and interact with folks at the National Software Testing Conference this week! I had a great time talking about the role of testing in supporting and accelerating business change through digital transformation and disruption. Along with my keynote, the KPMG team presented a talk on the role of test automation in your SaaS pipeline and a workshop on testing challenges in FinTech. It’s a great group of people to work with, and I was particularly proud of my colleague Abdulla Mohammad finally receiving his award for “Test Manager of the Year” from the European Software Testing Awards. I’m working with Abdulla on some machine learning tools for coverage and risk reporting, and I can tell you, he only has greater things ahead of him!

Aside from being hosted at the amazing location of the British Museum, we got to meet some fantastic testers and see some really interesting content from the UK community. One talk in particular stood out as a topic of passion for me, as well as being very well presented. Ethics in technology and the role of testers has been a keen interest of mine for some time and was brought to a new head during the pandemic. Samuel Plantie, a lawyer from Outbrain, presented a fantastic talk on the data ethics of systems using AI and algorithms. The stark contrast between the implications of bias, data breaches, and the rights of those having data collected, and the overwhelming unpreparedness of the organizations using them, was jarring.

Overall, it was a great conference and I look forward to easing back into the circuit and meeting and hearing from more testers in the UK and abroad. Hope to see you all soon!

Test as Transformation – Disruption

“They sicken at the calm that know the storm.” – Dorothy Parker

This is the second installment in my series on how testing can support technology transformation via disruptive external forces impacting both business and operating models. In my experience, transformative testing is closely aligned to the business strategy and supports more than just technology delivery. Transformative testing should help drive innovation and identify opportunities for both risk management and investment.

Technology adoption – either to innovate or stay competitive

The first example of external disruption that has driven changes in testing and the way we support transformation is the world of digital payment technology. There is likely to be more innovation in payment technology in the next 10 years than there has been in the last 100. We are very likely to see more bundling of the capabilities necessary to deliver a seamless experience for digital transactions at the point of sale, online, and through mobile platforms (e.g. Fiserv/FirstData). In fact, mobile experience has pulled ahead of branch location as the determining factor for bank selection, and in London almost 50% of all bank accounts are digital-only (roughly double since 2010)!

The second form of external disruption is regulation – or the opportunity cost of staying in business! In the UK, a version of SOX (Sarbanes-Oxley for the UK) will be rolling out over the next 2-5 years, with wide-ranging implications for how you do business. Even though most financial firms in the UK should be familiar with the regulations, as the US SOX requirements for banking should already be in place (the US regulations that followed the 2002 accounting scandals – Enron, WorldCom), this is the first time they are looking at the IT architecture, control testing, control automation, and cultural changes required to adopt them. Additionally, per a report by the Financial Times, the nearly £430m cost to UK businesses would come from extending the proposed rules to almost 2,000 additional companies!

So how should disruption affect our test approach? Here are some key questions your test approach should be able to answer.

  • “How aligned is our test approach to our business strategy?” Do we even know our business strategy? (Have we read it, or do we even know where it is?) In my experience, not covering key threats to our business can actually INJECT risk into our business. Identifying risks and opportunities through gaps in coverage can be useful information for your business to take advantage of when responding to changes in the market.
  • But that depends on whether we actually understand our market and how it is changing. This is one of the primary reasons to stay active in the testing industry: so we can see the problems our competitors are experiencing and whether we have adequate safeguards or need to address them. Getting in front of threats to our business is a valuable service testers can provide, and it greatly increases our value. Seeing how competitors are testing their systems is also a great way to do market research on differences in products and services, as well as how to validate regulatory compliance and financial risk.
  • Lastly, how closely is our test approach aligned to our customer environment beyond technical specifications? How are our customers ACTUALLY using our systems? Do we have operational governance gaps or redundancies that are exposing us to deltas in expectations with our clients? I have often advocated for exploratory testing to go beyond functional system verification and into R&D conducted by testers. Time should be spent understanding defects found in test that uncover systemic risks, to get answers to the questions the CFO will likely be asked during investor calls!

Hopefully this gives you some food for thought on how testing for disruption translates into information for risk management or business opportunity that should be reported to our stakeholders! Enjoy and happy testing!

QR Podcast – Elizabeth Zagroba

Heya – Finally found time to spill some tea with one of my favorite people in the testing industry, Elizabeth Zagroba. We cover off all the essentials from moving into a technical testing role, idealism, and our favorite musical theater. Check it out HERE… Enjoy!

Friends of Good Software 

Doubt Builds Trust recording 

Doubt Builds Trust blog post (this is all you really need, frankly) 

The Mental Load of One Meeting 

Praise the Messenger 

“What’s past is prologue . . .”

“What’s past is prologue…” – The Tempest (This article is reposted from my LinkedIn)

I was interviewing someone today, and they kept referring to their “non-traditional” path into technology as something that needed to be overcome, and it reminded me of a lot of people in #softwaretesting I’ve known, so I wanted to share some thoughts on this.

1) You don’t owe anyone an explanation for how you got to where you are in life. Period.

2) The tech industry is OVERRUN with like-minded, timid sheep that will let incentives trump ethics every time. If you took a chance on yourself and busted your ass to break into a new field, IMO those are the EXACT qualities we need in leaders.

3) IME people with engineering or pure CS backgrounds LOVE to overcomplicate things. Practical experience helps you cut through noise because you have to live with the consequences of your solution.

Finally, as someone who has come into technology through a “non-traditional” path and suffered my share of impostor syndrome – you have everything you need to do this work and as my pal Angie Jones likes to say, “your differences are your superpower”!

Test as Transformation

“What is the answer? she asked, and when no answer came she laughed and said: Then, what is the question?” – Gertrude Stein

How can software testing support business and digital transformation? Unfortunately, that’s not a question that gets asked frequently, as testing has traditionally been viewed as a technology insurance policy – and no one likes thinking (let alone talking) about insurance! Transformation, in the context of a business, is a fundamental shift in how it operates, redefining its value proposition or changing how it competes in the market, and in my opinion, testing sits at the center of the information required to support and accelerate business change. Over the course of the next couple of posts, I’m going to talk about how testing can help transform your business, but first I want to explain what I mean by “transformative testing”.

In my experience, transformative testing is closely aligned to the business strategy and supports more than just technology delivery. Transformative testing should help drive innovation and identify opportunities for both risk management and investment. Transformative testing is also focused on systemic business risk, is in tune with markets (both their challenges and changes), and adjusts the testing approach to reflect those factors.

Facilitating technology delivery is important, but it is only one aspect of testing’s objective and, as well, is contained within the overall business strategy. Information gathered during testing can – and should – be focused on business intelligence that can be used beyond release decisions. A good way to think about this is differentiating between project risk and business risk. Frequently, testing focuses entirely on project risk, which MAY represent a risk to the business but often is just concerned with delivery dates.

So why is taking a transformative approach to testing important? Firstly, it is a quick and relatively easy way to increase the value of your current investment in testing. Providing operational intelligence to your business is something that testing is best placed to do, with our systemic, end-to-end view of not only our products but also the processes that build them. As well, with operational budgets continually being squeezed, investment decisions need to be targeted around gaps in coverage related to usage and risk.

There is also an opportunity, with advances in test and monitoring tools, for a great deal of functional checking to be automated, which, done correctly, should present opportunities for ADDITIONAL test activities and investment to support or manage the risk of transformation. But this takes care and practice, because despite all the marketing, the testing industry (most vendors, consultants, and tool makers) is NOT aligned to business risk management and is almost solely concentrated on operational cost reduction. Be careful, as an increase in cheap, shallow, flaky checks gives an increasingly false sense of security and, in my experience, unintentionally builds fragility into systems through overconfidence and risk of systemic failure.
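One way to make the flaky-check problem concrete is a toy simulation of the common “rerun until green” habit. This is purely my own illustration (the 30% catch rate and three-rerun limit are invented numbers): a check that intermittently catches a real defect becomes indistinguishable from an ordinary flaky check, and rerunning it until it passes almost guarantees the defect ships.

```python
import random

def ships_defect(catch_rate, max_reruns):
    """One intermittent defect behind one check. The check catches the
    defect (i.e. fails) with probability `catch_rate` on each run. The
    team reruns a failing check and accepts the build as soon as any run
    goes green. Returns True if the defect ships."""
    for _ in range(max_reruns):
        if random.random() >= catch_rate:  # check passes; defect slips through
            return True
    return False  # failed every rerun, so someone finally has to look

random.seed(1)
trials = 100_000
shipped = sum(ships_defect(catch_rate=0.3, max_reruns=3) for _ in range(trials))
# In expectation the defect ships unless the check fails all three runs:
# 1 - 0.3**3 = 97.3% of builds.
print(f"defect shipped in {shipped / trials:.1%} of builds")
```

The rerun policy converts an intermittent red signal into a green dashboard, which is exactly the kind of manufactured confidence the paragraph above warns about.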

In the next series of posts, I’ll talk about opportunities for testing in regard to business disruption, optimization, and expectation management and how we can support and protect our business during times of transformation. Enjoy!

The Beatings Will Continue Until Morale Improves

I got a few questions about org dysfunction and how it specifically relates to software testing, so I figured it would be easier to address them here as a sort of “reply all” to the different channels where I’ve made those comments.

Merriam-Webster defines “dysfunction” as: 1: impaired or abnormal functioning or 2: abnormal or unhealthy interpersonal behavior or interaction within a group. In my experience, organizational dysfunction occurs when leadership have misperceptions about how the organization is designed and actually operates, but more fundamentally, not understanding that the organization is performing EXACTLY as designed.

There are LOADS of different types of organizational dysfunction and just as many root causes. For further reading on org dysfunction and how metrics/management contribute, I would highly suggest Measuring and Managing Performance in Organizations by Robert Austin. But these are the ones I consistently see affecting quality and software testing, and further displacing the goals of your teams and company.

1. Competing objectives between teams.

A common practice I see in organizations I consult with is incentivizing teams with competing objectives, especially rewarding feature delivery, code deploys, and defect counts. If you are targeting, measuring, and compensating people in a way that is not harmonized around common goals, you are sure to experience silos, information hoarding, and “green shifting” – subconsciously (or in some cases not so subconsciously) viewing things as better than they are.

2. Failure to adopt new practices.

This is very common in organizations that don’t value innovation, where people are only rewarded for ideas that originate with management. People often wonder if I am exaggerating some of the stories I tell about large, global technology teams that seem stuck in 20-30 year old ways of doing things. I assure you I am not, and as well, survivorship bias has a strong hold on a lot of teams that won’t view change as anything other than a threat to personal status and privilege.

3. Heavy process and standards.

Aside from mostly perpetuating the mythology around audits and compliance, organizations that apply heavy-handed process and standards typically suffer from the worst cases of CYA. When you don’t trust your people to be responsible for their work, they will resort to making sure their worlds are clearly defined, and problems can start to be admired instead of solved.

“I know it sounds like a cat poster…..but it’s true”

I did a talk a while ago about how org dysfunction affects software testing called “All Your QA is Hate You: Software Quality Anti-Patterns in Testing”, so I won’t go into them in great detail here. But if you are trying to do any of the following – shift left, centralize testing, decentralize testing, view testing as a “role” – then your process improvement efforts will probably fall under “Sturgeon’s law”.

Theodore Sturgeon was an American writer who famously coined “Sturgeon’s law”, which states that ninety percent of everything is crap. The reason most tester-led TPI doesn’t break the law is that testers themselves usually aren’t business focused, are technically obsessed, and don’t study their craft, making them easy marks for vendor schemes and marketing.

My simple advice to combat the effects of org dysfunction: look for ways to further align your test strategy to your business strategy, manage risk, not testing (threats to revenue, realization of business value), and believe it can get better!

Upon Further Review

I tweeted this out the other day in response to watching a project manager get abused by a “senior IT director” over defects being found by clients that they clearly felt should have been “caught by QA”. Apparently this resonated with the testing community as the reactions have been overwhelmingly in support of the sentiment and anti-bullying position.

But with everything else awful about the “internets”, it also brought out of the woodwork the usual agenda-driven “agilistas”, “no testers” and “suck it up buttercup” weasels reading into it whatever they were already bringing to the party. All that’s fine, but because of the traction it received I thought it was worth having the entire thread and sentiment in one place. Here’s the full tweet thread for context:

https://twitter.com/keithklain/status/1293523088542490624?s=20

I also added the following in a comment on LinkedIn:

No mention of organization structure. No mention of operating model. No mention of methodology. No mention of tools.

I have never employed the “testing firewall” approach to software quality and have always advocated for testing as an activity that can require a full time role. What I dislike about “why didn’t we catch this in QA”, is that it’s classic goal displacement from building quality products to trying to “test” quality into a product. I have also seen it repeatedly used to belittle, shame, and abuse people performing testing into feeling bad about themselves and their contribution to an organization for problems that typically do not originate with them.

Hopefully this gives the full context for my thoughts, and please read the threads on Twitter and LinkedIn for ways we can do better…enjoy!

QR Podcast – Ethics Panel

Very happy to be dusting off the QR Podcast with my pals Fiona Charles, Dan Billing, Ash Coleman, and our returning champion Michael Bolton to discuss ethics in technology, the responsibilities of software testers, and all that “big brother” noise lately about contact tracing apps in the age of COVID-19. Check it out HERE… Enjoy!

Citations

Barnes, Austin. “White House Expected to Endorse KC-Built COVID-19 Exposure Tracking App.” Startland News (blog), March 19, 2020.

“Pandemic Data Could Be Deadly for the Old.” Bloomberg.Com, April 21, 2020.

“The Covid-19 Tracking App Won’t Work.” Bloomberg.Com, April 15, 2020.

“Trump Admin Gives Coronavirus Tracking Contract to Peter Thiel’s Palantir: Report.” Gizmodo. Accessed April 22, 2020.

Tech Ethics and the “Big Ask”

Recently I saw a Twitter thread asking the software testing community for volunteers to work on the SafePaths “contact tracing” app being developed by MIT. This project is made up of ex-Facebook execs, companies with questionable ethical pasts, and vague statements like “a number of leaders and personnel” and “experts from government agencies”. I’m sorry, but that isn’t even remotely good enough for a project of this depth and scale.

Having read the research on the website and associated whitepapers, I feel there is insufficient public evidence of any discussion regarding the ethics of this project. The tech community keeps repeating the same mistakes regarding our responsibility to create ethical platforms – this is an opportunity to do better. Comments from MIT’s Ramesh Raskar are at once incredibly vague and disturbing regarding what “public” health officials and government agencies can “decide” to do with your data. I encourage you to watch the interview he gave on PBS News Hour.

I understand the need for contact tracing to control the spread of pandemics, but I would need to see evidence of serious, in depth public discussions regarding the ethics of this project – including specifically which government agencies, private entities, and corporations are involved before I volunteered my time. I would also take into consideration the software testing companies already involved in this project (PractiTest, Applause, AppQuality) and what ethical concerns they raised prior to their engagement.

To be clear, I am NOT advocating against an app for contact tracing, but the scale of this project and lack of public information on the entities involved should give pause for thought. Technologists have a long history of jumping into problems before we’ve asked vital questions regarding ethics, and if we’ve learned nothing from our past failures, we should be getting answers BEFORE we start rushing to solutions.

I Don’t Think That Means What You Think it Means (Enterprise Software Testing Metrics)

In my travels as a management consultant focusing on testing and quality in the enterprise, I see a lot of well-intended “symptom treating” in agile/CICD/devops transitions. Recently I’ve been advising one of the biggest mergers in the industry on combining their testing operations and how to “transform” into a leaner model. I haven’t blogged in a while, but people have been asking me about some of the workshops I’ve been taking them through, with a particular interest in metrics (as usual), so I figured this was as good a topic as any to start writing again.

I spent a couple of days with them pulling apart their old metrics scorecard, and given my well-documented skepticism about trying to “measure quality” you could reasonably call it confirmation bias, but the team observed some interesting problems that were driving bad decisions. To start, I took the team through their metrics scorecard using the lens I use for viewing enterprise quality programs, which can be summed up in this question: “Who needs to know what, and when, to make strategic decisions?” Period. If a metric doesn’t support answering that question, then in my experience there is very little, if any, point in collecting it.

Further to that, I believe that quality is a multi-dimensional attribute that is highly biased and impossible to quantify. Despite all our attempts, I also find it pointless to report software quality in linear or binary statistics, in part because it’s heavily reliant on anecdotal evidence. That’s because software systems act like biological networks: complex problems involving too many unknowns and relationships to reduce to rules and processes. It is my experience that software quality can only be managed, not solved, and that over-testing a system builds in fragility.

Exhibit A: Really crappy metrics reports…

Despite that, I have yet to see a test management tool or metrics program that isn’t highly misleading if used as designed, which means they are probably disregarded by anyone making a decision other than how many testers to fire. That’s because even today, with all the advances in technology, test tools and development practices, software quality’s primary measure in enterprise tech is still “#passed tests = quality”. This pervasive obsession with counting tests drives massive amounts of dysfunction in teams leading to bad decisions on automation and proposing technical solutions to cultural problems, but mostly provides a false sense of security that we know what our testing is actually doing.
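To sketch why “#passed tests = quality” misleads, consider two hypothetical suites (the suite names, counts, and risk sets below are entirely invented for illustration) that post the same pass rate on a scorecard while carrying very different information about business risk:

```python
# Two made-up suites with identical pass rates but different information value.
suite_a = {  # 500 shallow UI checks, none touching the riskiest flows
    "passed": 495, "failed": 5,
    "covers_risks": {"ui_rendering"},
}
suite_b = {  # 100 targeted checks against the riskiest business flows
    "passed": 99, "failed": 1,
    "covers_risks": {"payment_settlement", "regulatory_reporting", "auth"},
}

# A hypothetical register of the business risks that actually matter.
business_risks = {"payment_settlement", "regulatory_reporting", "auth", "ui_rendering"}

for name, suite in [("A", suite_a), ("B", suite_b)]:
    total = suite["passed"] + suite["failed"]
    pass_rate = suite["passed"] / total          # both come out at 99%
    covered = suite["covers_risks"] & business_risks
    print(f"Suite {name}: pass rate {pass_rate:.0%}, "
          f"business risks covered {len(covered)}/{len(business_risks)}")
# Suite A: pass rate 99%, business risks covered 1/4
# Suite B: pass rate 99%, business risks covered 3/4
```

A scorecard that reports only the pass rate cannot distinguish the two, which is precisely how counting tests displaces the question of what the testing actually tells the business.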

Needless to say, the program I’m reviewing right now was focused on those exact things instead of risks. The goal of testing – to provide information to people who need to make decisions for their business – had been displaced by distractions. As I took the team through what decisions had been made based off of the current reporting (Release decisions? Value realization? Resource allocation across people, process, and technology?), they realized that there were validity problems with ~90% of their metrics.

We’re currently completing an exercise to model quality within their context, but they now realize it’s a difficult problem to manage not solve. When it comes to reporting on quality and testing our industry can do a LOT better, but in my opinion, success lies somewhere in the nexus of machine learning, visual test models, model-based testing and monitoring. For now, when it comes to reporting on enterprise software quality, a good deal of “I don’t know” would go a long way towards solving the problem, but ultimately we need a way to “see” the problem differently through lenses that more accurately reflect the difficult nature of reporting on quality.

All Your QA Is Hate You: Software Quality Anti-Patterns in Testing

I’m doing a series of talks with QASymphony this fall…here’s the video and abstract  – Enjoy!

All Your QA Is Hate You: Software Quality Anti-Patterns in Testing

From metrics-based micromanagement to the law of triviality, software testing has been subjected to a standard set of responses to testing’s ambiguous, uncharted journey into a sea of bias and experimentation. There are many anti-patterns associated with managing software testing and their effect on quality, from how you organize and fund testing to the tools you use to automate and manage the process. Through this talk, Keith will explore common solutions to problems regularly encountered by software testing efforts and the unintended consequences, dysfunction, and risks they introduce to your organization.

Culture Is More Than A Mindset – Agile Testing Days with Ash Coleman

What an honor to share the stage with Ash Coleman at Agile Testing Days! Enjoy!


QASymphony QualityJam 2018 Roundtable

Check out this awesome panel discussion I’m moderating on “Software Testing in the Real World” at QASymphony‘s QualityJam 2018 in Atlanta with Paul Merrill, Ashley Hunsberger, and Clint Sprauve! Hope to see you there!

2017 Recap: Wait – what just happened?

Looking back on 2017, it’s impossible to list all the wonderful experiences and changes that happened throughout the year. Aside from continuing to build the SQM business with Tekmark and spending time with the good people at TestBash Brighton and Philly, I got to travel to some new places. I was honored to keynote at Romanian Testing Conference and Copenhagen Context, and I would highly recommend both of them for future visits. It might be due to a little burnout, but I am increasingly skeptical of the software testing conference circuit; RTC and CPC, however, are a breath of fresh air in a crowded field. The community, attendees, and passion of the organizers shine through, and I hope to be back some day.

Continuing my partnership with the kind folks at QASymphony, I spoke at both QualityJams, including the new one in the UK. I am excited to be expanding this into 2018, including some webinars and panel discussions, so watch this space! A return to Agile Testing Days was in the cards as well, which included a workshop, a keynote, and participating in the “Trusted Friends” experiment. A lot has happened this year in regards to sexual harassment and believing women, and our industry has just as far to go in that regard as the rest of society.

A highlight of 2017 came late in the year at Agile Testing Days. I was privileged to share the stage with Ash Coleman to co-present “Culture is More Than A Mindset” as a keynote before their Women in Agile event. I have presented countless times, but I have never been as nervous as I was that night, and I hope the message I wanted to convey to men around listening, thinking, and sponsoring came through. Ash is an amazing technologist with inspiring stories to tell, so if you haven’t heard her speak yet, sort yourself out in 2018.

I also launched the QR Podcast this year, and in all honesty, it has done better than I could have hoped. I am fortunate to have some amazing friends in this business who have taken the time to talk to me and endure my interview style. The original intent was to document discussions I have with these folks on a regular basis, but due to the popularity of the show, I am going to continue it into 2018 with some great topics including: SAFe, interactional expertise, the psychology of testing, and a bunch more. Stay tuned!

Another great privilege of 2017 was watching the testing community come together to help Kristof Nordstrom’s daughter through #savingLinnea. My friends and I in NYC had a lot of fun contributing a photo for one of the months in the Ministry of Testing Calendar to support her treatment. If you can buy one, it goes to a great cause, and as well, you get some sweet photos of all the “hottest properties” in the software testing business! Ha!

If you have been following me this year, you know that supporting the Afghan Girls Robotics team has been a passion of mine since I heard about their project. And thanks to all of you and the incredible people at Agile Testing Days, we raised nearly TEN THOUSAND DOLLARS to send the team to Germany! While in Berlin, the team got full access to the conference, tutorials, and networking events. They also got to showcase their project, and Roya Mahboob gave an emotional keynote on her work supporting technology communities in developing countries. I am still in awe of all the work that Roya and her Digital Citizen Fund do to support the Afghan tech community and budding technologists all over the world. She is truly an inspiration and I hope to continue working with her to support the team and her other initiatives.

Lastly, after a rough 2016, I decided to throw myself into work, friends, and things I believe in to try to turn 2017 into a year of change. I’m not sure I got there on all fronts, but there are certain people without whose friendship and support I don’t think I would have made it. I have a lot of friends in this business, too many to list, but I cannot express how much Martin Hynie, Santosh Tuppad, and Elizabeth Zagroba have meant to me this year. Whether it was putting up with my rants, picking me up when I’m down, or just sharing a laugh, your friendship means the world to me.

I also have to say a special thank you to Smita Mishra for kicking my ass when I needed it most. Thanks pal!

I’ll be taking an extended break from conferences and whatnot in 2018 to focus on writing, podcasting, and recharging, so I hope to see you all again soon! Cheers!


Die Mädchen und der Roboter – Frankfurter Allgemeine

Very humbled to be included with this incredible group of people, but very happy the team is getting the recognition they deserve…

QR Podcast – Test Automation Panel

Really enjoyed this panel session talking about all things test automation with Angie Jones, Bas Dijkstra, Paul Grizzaffi, and Ashley Hunsberger. Check us out discussing managing business expectations, what to look for in test automation engineers, planning for maintenance, and playing a couple rounds of “Rate that Vendor Claim”. Enjoy!


Screen Testing Podcast – Rocky Live from TestBash Philly!

Had a great time talking about testing and “Rocky” at TestBash Philly with the guys from Screen Testing! Yo Adrian – enjoy!

Whiz or no whiz? – Talking testing at TestBash Philly

Another trip to Philly, another TestBash in the can. I’ve been to my share of TestBashes, and am always impressed with what these folks have built up over the years. They feel to me to be more of a community event these days than a technology conference, but maybe that’s a good thing. I’ve toyed with the idea of taking another extended break from testing conferences, as it seems to me the explosion of meet ups, hackathons, and conferences has started to fall victim to Jerry Weinberg’s law of raspberry jam – “The wider you spread it, the thinner it gets.” But then again, it may just be that I’m “old and grumpy”.

Aside from catching up with some dear friends and a trip to Jim’s Steaks (HIGHLY recommended), the highlights of the show included a talk by Ash Coleman. Her relaxed style and nature draw you in and make you want to open up to the possibilities of being vulnerable. I know the theme of her talk was about how she “fell” into testing, but I kept wanting her to go further, to talk about our community and its challenges. We need voices like hers pushing us further than we thought possible, beyond the ways we’ve always done things. I look forward to hearing more from Ash and believe her time in the testing community is something we’ll reflect on for a long time.

Two other talks got me excited and frankly were breaths of fresh air, due to their being technical experience reports. Kim Knup’s talk on what she’s learned performance testing was filled with practical wisdom and technical details of how she got the job done. I hope she continues speaking and elaborates on these experiences, as it’s a topic that generally suffers from “tech speak” and her storytelling approach kept it fresh and relevant. I’d never heard Amber Race present before, but wow, she was a dynamo on that stage! Funny, wicked smart, and no bullshit when it came to her message, she got to the heart of the challenges (real and imagined) of the “tester/developer” relationship. Amber is now firmly on my shortlist of speakers I look forward to seeing again!

I also had fun being interviewed about the film “Rocky” by the boys from Screen Testing. We talked about how it was relevant to coaching, self discovery, and how we manage differing relationships with competing agendas. It was a great time and I had a lot of fun talking to Dan and Neil, even though I’m still not convinced we shouldn’t have reviewed one of my favorite films, “High Society” – a musical derivative of the “Philadelphia Story”…nah, never mind, Rocky was a better choice, maybe next time!

Finally, I have to include a talk that I need to review again (and possibly again) by Gene Gotimer. His talk outlined tests you might not think of for your pipeline, but more importantly, as I said at the time, Gene is a veritable encyclopedia of testing tools. His talk made a typically dry subject practical and accessible, and it’s one of a handful of talks I can see myself referring to in the future. As always, I would encourage you to get a Ministry of Testing “Dojo” account to check out all the recorded talks and tons of other great content.

Cheers!

QR Podcast – Michael Bolton (Part II)

Back for part two of our discussion is my good friend, Michael Bolton. Michael has been consulting and training people on software testing all over the world and is the co-author, with James Bach, of Rapid Software Testing. Join us as we talk about training testers, community leadership, and common problems all testers face. Enjoy!


Afghan Girls Robotics Team to ATD – You did it!

You did it!

Thanks to all of you and the incredible people at Agile Testing Days, we’ve raised nearly TEN THOUSAND DOLLARS to send the Afghan Girls Robotics team to Germany! The generosity of the ATD organization has covered any financial gaps we were unable to close through the GoFundMe page. As well, the unbelievable people at Sauce Labs and Anne-Marie Charrett made substantial donations to meet the goal. But most importantly, all of you made this possible. Thank you for your generosity, support, and general awesomeness.

While in Berlin, the team will get full access to the conference, tutorials, and networking events. They will have the opportunity to showcase their project and  Roya Mahboob is going to give a keynote on her work supporting technology communities in developing countries. The team will also participate in the Women and Allies Evening Gathering, led by Anne-Marie, where they will get to network with women from all over the world and pair up with mentors to continue their relationships long after the conference is over.

I am still in awe of all the work that Roya and her Digital Citizen Fund do to support the Afghan tech community and budding technologists all over the world. She is truly an inspiration. There is still time to support the team, as any additional funds will be donated to the education fund of Team Afghanistan Captain, Fatemah, whose father recently passed away in an attack in Herat. Finally, I would like to thank Jessica Rose, who first brought the team to my attention via social media and has been a source of inspiration and encouragement throughout the entire fundraising effort.

Thank you for all your support and I look forward to meeting you all and the team in Potsdam!


QR Podcast – Michael Bolton (Part I)

What more can I say about Michael Bolton? Michael has been consulting and training people in software testing all over the planet for as long as I’ve been in this business. When he’s not traveling around the world teaching Rapid Software Testing, he’s writing, speaking, and consulting about all things software quality. Check out the first half of our chat as we discuss all things testing! Enjoy!


QR Podcast – Rob Lambert

Writing, podcasting, running conferences, keynoting, workshops – is there anything Rob Lambert doesn’t make look “blazingly simple”? Check out the newest QR Podcast where Rob and I catch up on testing, moving into management, his new company Cultivated Management, and how to strike a balance in your career and life. Enjoy!


Copenhagen Context – Team CoCo FTW!

Sometimes you just get it right. Now, you can accuse me of bias because I was on the selection committee, but the program at Copenhagen Context was unbelievably good. This whole gig was primed by organizer Morten Hougaard of PrettyGoodTesting, who seems to know a thing or two about running a conference, so hats off to him. The entire affair ran seamlessly, and from my observations, I didn’t see even the usual “room problems” or tech mishaps, so well done!

As for Copenhagen, this was my first visit to Denmark, and as a semi-jaded Euro-traveller, I was taken aback by how beautiful the city is and how lovely the people were. I had some fantastic meals at Gorilla, a great burger joint, and as well, enjoyed a delicious pint at War Pigs. Denmark is definitely on my short list of places to come back to when I can spend more time! Beautiful!

In regards to the content presented, it was absolutely world-class. Fresh, current, new faces, diverse – everything you want when spending your limited training time at a conference. Highlights for me were Jyothi Rangaiah’s talk on testers getting involved in the requirements gathering process and Smita Mishra’s on business contexts for testers, both filled with practical advice you could start using immediately. And for those who haven’t caught Elizabeth Zagroba’s talk “Succeeding As An Introvert” yet, frankly, it was one of my “must see” talks of 2017. I learned a ton about how to work with people who aren’t necessarily wired to deal with my “type A” nonsense, and I think she’s giving it a couple more times this year, so make sure you see it!

As far as the future of software testing is concerned, I’ve said plenty that was speculative and hasn’t always come to fruition. But one thing my good friend Martin Hynie has talked to me about for a while now, the Cynefin framework, could have a very big impact on our business. I am still learning about the framework and how it is used in decision-making in complex systems, but I am confident in the work that Martin and Ben Kelly are out front in applying to software testing and technology in general.

If you haven’t been to Copenhagen Context, I would highly recommend it, and I hope to be back again soon. Thanks again to my colleagues in the program committee, Paul Holland, Duncan Nisbet, and Maria Kedemo, and to all my good friends in the testing community that I care so much about, reminding me to live the hygge life – enjoying the good things in life with good people!

UPDATED – Afghan Girls Robotics Team to ATD

UPDATE

The response has been absolutely amazing, and everyone has been overwhelmed with the generosity of the community. We’ve raised over $5,000 so far, but are not yet at the goal of  $15,000. If you can help in any way, it would be very appreciated and we thank you for your support of this team and initiative. Thank you! – Keith, Roya, and the team at Agile Testing Days

The Afghan tech revolution is being led by Roya Mahboob and her Digital Citizen Fund, and this year she helped an all-girl robotics team from Herat compete in the First Global Challenge. The team had an incredible experience in Washington D.C. at the competition, and now the incredible people at Agile Testing Days have agreed to let the team present their project, participate in the conference, and network with technologists from all over the world.

So now we need your help!

We are raising the funds to send and host the team and coaches in Potsdam, Germany for the conference. As well, any additional funds will be donated to the education fund of Team Afghanistan Captain, Fatemah, whose father recently passed away in an attack in Herat. This is a great opportunity to directly impact an important program, support diversity in technology, and send a message of love and support to the global technology community.

Donations are being accepted through GoFundMe HERE. Any help you can give would be greatly appreciated!

Thank you!

QR Podcast – Jeff Perkins

Jeff Perkins is the Chief Marketing Officer of QASymphony. Prior to that, he spent time in senior marketing and sales roles at PGi and AutoTrader.com, and previously worked on some of the biggest brands in the world: Volvo, HBO, Sirius Satellite Radio, Michelin, Office Depot, and Hasbro. We had a great time talking HERE about the testing tool market, what clients want out of testing, developing your own personal brand, and what’s behind some of those unverifiable vendor claims! Enjoy!


QR Podcast – Karen Johnson

Karen Johnson has had a fantastic career working, consulting, and speaking all over the globe about software testing and quality. I always enjoy speaking with her whenever I get the chance and then trying (and failing) to mimic her calm, reasoned approach to dealing with problems through storytelling. Listen in HERE as Karen flips the script and asks me some questions, as well as discusses the state of the testing business, management consulting, storytelling, her new role at Jamf, and what really pisses her off! Enjoy!


QASymphony “The Future of Software Testing” Interview

Had fun answering these questions on the “Future of Software Testing” for my pals at QASymphony. You can get the ebook HERE – Enjoy!
The future of software testing is…hard. Doing testing right is very hard. It’s an ambiguous, uncharted journey into a sea of bias and experimentation, but as the famous movie quote goes, “the hard is what makes it great.”
This future will be characterized by…a continued emphasis on shorter delivery times and increased automation in test cycles, which will help further the current trend of testers needing technical and coding skills. Over the long term, as strategy and risk management become increasingly important, I believe design and system thinking combined with great testing skills will be in demand.
The trend to avoid is…the idea of “operational test management,” which has done more to damage the testing industry than any other trend. Offshoring and the commoditization of testing have given rise to “managers of managers” and “scorecard based management,” which removes test management from its primary role of focusing on information about business risk. The popularity of agile practices and the need for co-located teams has highlighted the inefficiency of this approach, and I think we’ll only see a downward trend in “Test Centers of Excellence.”
Software testing has already changed dramatically due to…the increasing demand for more technical testers due to the perception that testers need to “code,” and I don’t see that ending anytime soon. Having been in the business for a while, in my opinion this has been around for a long time, and what I think we’re seeing now is a market correction in operational models. Testing has been subjected to extreme outsourcing and offshoring to take advantage of the economics of labor arbitrage. Unfortunately, that model does not support being nimble and efficient in getting value to your business, so in my opinion, that approach is on its way out.
The biggest challenge to embracing these changes will be…separating the signal from the noise. There are a lot of “thought leaders” or “experts” who too often have views biased toward very specific practices limited to certain contexts and are given far too much weight. I believe you should research, question and experiment in your own ways and draw conclusions based on that evidence. Of course you should draw on the practical experiences and advice of seasoned professionals, but don’t take what you hear as unquestionable fact.
One piece of advice for testers in the future is to…remember that certain skills are future-proof. In some ways, the best testing skills haven’t changed that much over the years. Critical thinking, communication and interactional expertise are vital to being valuable to your business and managing risk. I would never discourage anyone from learning something new, so being able to code, being current with tools/approaches and attending conferences/meetups are all crucial to staying marketable. But being able to communicate risks to your business never goes out of style.

Afghan Girls Robotics Team to ATD

The Afghan tech revolution is being led by Roya Mahboob and her Digital Citizen Fund, and this year she helped an all-girl robotics team from Herat compete in the First Global Challenge. The team had an incredible experience in Washington D.C. at the competition, and now the incredible people at Agile Testing Days have agreed to let the team present their project, participate in the conference, and network with technologists from all over the world.

So now we need your help!

We are raising the funds to send and host the team and coaches in Potsdam, Germany for the conference. As well, any additional funds will be donated to the education fund of Team Afghanistan Captain, Fatemah, whose father recently passed away in an attack in Herat. This is a great opportunity to directly impact an important program, support diversity in technology, and send a message of love and support to the global technology community.

Donations are being accepted through GoFundMe HERE. Any help you can give would be greatly appreciated!

Thank you!

Romanian Testing Conference 2017 – vibes & great mood

Enjoy this recap of the Romanian Testing Conference – I had such a great time and it was such a lovely conference. I hope to be back soon! Cheers!

“Standout” Book Contribution on Management Interviews

Here’s my contribution to Ben Kelly‘s great new book on interviewing, “Standout: A career guide to gainful employment as a skilled software tester”. You can read and purchase your own copy HERE…enjoy!

“These days I conduct mostly “management” interviews, so I am looking for potential and cultural fit into our team. Principles I value are honesty, integrity, and accountability, so those lines of questioning are often meandering and my conclusions are more “gut feel” than straightforward. I like to talk to people about their hobbies and what they do with their down time, as that can tell me a lot about their personality, dedication, and work ethic. Discussing what books people are reading (or not) is another great way to get insight into people. I was once interviewing a candidate with someone on my management team, and I asked the question “what books are you reading right now?” My manager literally said, “what an awful question”, but after the nervous laughter settled down and we let the conversation flow, we learned a tremendous amount about the person – as they did about us! (We ended up hiring them!) One of the biggest mistakes I see candidates make is not being their authentic selves during an interview. What interests me is how you personally contributed to a project (don’t interchange “me” with “we”), what you learned, what mistakes you made, and what you want to know about me and our team. Feel free to laugh, say “I don’t know”, and talk about values and principles, because long after the technical qualifications have been met, those are the things that really matter.”

QR Podcast – Jerry Weinberg


What more can I say about Jerry Weinberg that hasn’t already been said? He’s been consulting and writing for over 50 years, including seminal works like The Psychology of Computer Programming, Perfect Software, and The Secrets of Consulting. In the business of software testing, he has influenced ways of thinking about quality, value, and the role of testing in software development. Check out Jerry and me (and his ringing phone!) HERE discussing leadership, diversity, the state of software testing, and how to remain relevant after 60 years in the business. Enjoy!


QR Podcast – Mark Tomlinson

Always enjoy catching up with system performance guru, bon vivant, and one of the nicest guys in our business, Mark Tomlinson. Aside from discussing his responsibility for getting this “podcast” interview series started, we have an extended discussion on “thought leaders” and expertise, innovation, ethics, and what’s with all those “boring” testing conferences! (FWIW I had to cut over 30 mins of laughing, joking, and cross talk at the beginning and end of this!) Enjoy!


QR Podcast – Trish Khoo

Google v Microsoft? Ducks v Fish? Dungeons and Dragons? That means I could only be talking to one person – Trish Khoo! Listen in as we discuss making it as a tester in today’s world, mental health, and what to do with all those extra ducks! Check out our chat on the latest QR Podcast! Enjoy!


A Brief Comment on “Thought Leaders”

“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.” – Richard Feynman

I don’t believe in “thought leaders”. In my opinion based on my experience, too often the views of people biased towards very specific practices limited to certain contexts are given far too much weight. I believe you should research, question, and experiment in your own ways and means and draw your own conclusions based on that evidence. By all means draw on the practical experiences and advice of seasoned professionals, but don’t take what you hear as unquestionable fact.

There is a big difference between respecting a professional opinion and blind faith. People are the biggest contributor to any context, and people are messy, so find what works for you and your team and go with that.

In regards to “thought leaders” browbeating, bullying, discouraging, or otherwise using their position in the community to silence dissent or alternative views – knock it off. The title of “thought leader” is not a mantle to be claimed; it’s offered by a community, and in my view it carries the burden of modeling positive behavior. As well, if your ideas can’t suffer scrutiny or challenge, guess what: you were probably full of shit and propped up by survivorship bias all along…

Good luck and don’t stop questioning the “experts” in their own backyard…

Culture Is More Than A Mindset – Agile Testing Days 2017

So honored and excited to be giving this talk at Agile Testing Days this year with my pal, Ash Coleman! Hope to see you there! 

Culture Is More Than A Mindset

How language around work culture can foster an environment of growth and agility.

Changing culture is hard. Organizational values that define culture are deeply rooted in hiring practices, incentive programs, and management frequently based on practices rather than shared principles. As well, there is often the illusion that a workplace is an environment built on those principles, when it is often framed by what you DON’T say, not necessarily by what you do say. Meaningful change can only come through a clear-minded assessment of the state of your industry, business, and culture, and begins with a hard look at your own participation in those communities.

Through this talk, Ash and Keith will explore the structural barriers to attracting “like-mindedness” and how the way you advertise your workplace signals the underpinning values of your business. We will talk about what these words are really saying about your business and how you can affect better outcomes. We will talk about ways the testing community can lead by example in “attracting the right fit” and how our language around culture and opportunities at our organizations can either impose discrete limits on our development or can foster an environment for growth and agility.

QR Podcast – Damian Synadinos

Hey everyone! I know you’ve been wondering, “when will there be ANOTHER software testing podcast” – well, the wait is finally over! I’m starting a new interview series hosted on the QR Podcast page and focusing on talking to interesting people in or around our business about stuff I want to know. Hope you enjoy my first episode with Damian Synadinos…cheers!


Debugging Your Test Team – QASymphony QualityJam 2017

Had a great time giving this keynote at QASymphony’s QualityJam in Atlanta last month. Check out the other videos HERE…enjoy!

Nevertheless, She Persisted

Hey everyone – This is the editorial I wrote for the latest edition of Women Testers. I took it as a chance to spotlight some influential women I’ve had the pleasure to either work with or learn from. The full edition can be found here. Thanks! – KK

Examples matter. In my experience, behavior modeling is one of, if not THE, most powerful influences on how we treat people and develop professionally, either individually or as an organization. As someone who has led groups of all shapes and sizes and has reported into the highest levels of enterprise tech, I can assure you people are watching. Watching and emulating. That’s why examples matter – but if they are to have a positive impact they have to be seen.

Over the last twenty years I have had the opportunity to work with countless amazing, strong, intelligent women. Through this editorial, I wanted to take the opportunity to shine a light on some people who have had, and continue to have, the biggest impact on me professionally and inspire me personally. They became my examples to emulate by mentoring me, kicking my ass, and just being generally awesome at what they do. As well, as a father of two young boys, it is extremely important that they can see things as they are and how they should be.

So here are some of my heroes – the smart, fierce, funny, and influential women who inspire me to be my best. Enjoy!

Quality Jam 2017 Keynote: Debugging Your Test Team

Looking forward to seeing you all at QualityJam in Atlanta!

Elections Matter

Having recently attended and presented a keynote at my first Agile Testing Days in Germany, I wanted to respond to the sad news that Agile Testing Days US has been cancelled “due to the current political situation in the USA.” I had heard the rumors, and as they had invited me to come to Boston and support the event, I reached out to the conference organizers to see if there was anything I could do to change their minds. Unfortunately there was nothing that could be done to salvage the conference, as speakers had already pulled out and attendees were either concerned about getting into the US or had already changed their plans. As well, AgileTD has a very strong commitment to diversity, and out of respect for their values a principled stance against discrimination needed to be taken.

As heartbreaking as this is for me as a testing professional and American, I completely understand and support this decision. Elections matter. What is going on in the US political environment and in a broader cultural sense has consequences, and unfortunately I don’t think we’ve seen the last of cancellations like this. Personally, I would have loved to see the conference go on as planned and damn the torpedoes of hate and oppression, but it wasn’t my decision to make. I fully support the organizers of AgileTD and have offered my support in any way for their future events. Best of luck to everyone in these troubling times and stay safe.


QASymphony Webinar Q&A Follow Up

Thanks everyone for attending my webinar with Tekmark and QASymphony – I had a great time doing it, and there were so many questions that we grouped them into themes. Here are my answers, and as usual, these are just, like, my opinions. Hope to see you all on the conference circuit soon! Cheers!

Q: Will manual testers still have a job in the market after test automation?

A: As I said in the webinar, in my over 20 years in this business I have never fired a tester because we automated their job away. That being said, if you are not remaining relevant to your company’s culture – meaning congruent with their technical requirements, providing valuable information, and tuned into the products/customers – then yes, you might have something to worry about. At the end of the day, I don’t believe, or have seen evidence, that “manual” testers are any more at risk of being made redundant than other roles.

Q: How can “devops” be beneficial for testers?

A: Any attempt to push testing further up-stream and get a higher quality product out to customers is ultimately a good thing. There are lots of opportunities for testers to increase the scope and coverage of testing in a “devops” environment, but as well, that term has become a bit “buzz-wordy” lately. I’ve seen in enterprise IT shops that “devops” has been interpreted as automated unit tests and release processes, which have the ability to change the traditional relationship of “testers as checkers” to more meaningful and deeper testing.

Q: Isn’t the rise of DevOps killing the concept of having a Testing CoE?

A: My opinion is that the Testing CoE is dying under its own operational weight and limited ability to adapt to more modern delivery methods. DevOps might be an accelerator, but some of the archaic aspects of Testing CoEs are more than likely their downfall.

Q: As a tester how do I leverage these market disruptions for my growth and career? What skills or new technologies should I learn?

A: I tend to spend a lot of time talking about communication and articulating your test approach in regards to your business, which in my opinion are both skills that can be practiced and learned. As for good testing practices I would point you to the following from my good friend Michael Bolton:

Q: What would be your suggestions on improving test reporting?

A: Focus less on numbers and counting things and more on qualitative measures. Paul Holland gave a good talk on test reporting, but as we are trying to tell a story about the multi-dimensional aspects of quality with our reporting, I concentrate our energy on lists of defects, coverage, effort, and risks.

Q: What are your thoughts on moving testing completely to a 3rd party vs in house TCoE?

A: My first piece of advice when people are planning to build a TCoE is simple: don’t. Now obviously these decisions are made for a lot of reasons (primarily commercial), so I have a series of heuristics I use to evaluate whether a CoE is functioning properly. You can hear more of my thoughts on this in my recent keynote “Lessons Learned in (Selling) Software Testing” here, but in short, here’s the list:

Q: From an agile testing perspective, do you find that clients are interested in testing full agile or hybrid agile-waterfall?

A: My experience is that our clients and prospects in enterprise tech are extremely confused as to what “agile” actually is, despite all the information available. I have worked with clients that have termed their approach “wagile” (my personal favorite), especially where they are cobbling together a new digital/mobile strategy on top of their legacy technology. Personally, I don’t care what approach you use as long as you are “context aware”, meaning you are going into it with eyes wide open, focusing on communication, business value, and people/culture, and making the mission of testing actionable information.

Q: Will there be a link to the research paper Keith mentioned earlier?

A: Here you go…

Q: Curious where we can get the book “Software Testing as a Martial Art”. Can’t find it online…

A: Bingo! Here it is…

Q: Can you elaborate on PLATO Testing in Kitchener?

A: From their website…awesome idea, great people…just hire them! 

“At PLATO Testing, we are developing and leveraging a network of Aboriginal software testers across Canada.  PLATO Testing provides outsourced testing solutions to clients throughout North America, with a focus on projects that would have previously been sent offshore.

Established by industry veteran, and PQA Testing founder, Keith McIntosh, PLATO Testing addresses the technology talent shortage in Canada and brings meaningful training and employment to Aboriginal people.

Working with PLATO Testing, whether as a client or as part of the team, makes a positive impact.”

References from the webinar…

QASymphony, Tekmark Global Solutions, World Quality Report, Curtis Stuehrenberg, ACM Test First Research Paper, Richard Bradshaw, Chaos Monkey, Dan Billing, Guardian CD, Alex Handy, Romanian Testing Conference, Copenhagen Context, TestBash Brighton, Software Testing as a Martial Art, David Greenlees, Scott Berkun, Angie Jones, Paul Holland, Nancy Kelln, Smita Mishra, Ash Coleman, Martin Hynie, Vernon Richards, Santhosh Tuppad, Huib Schoots, Alexandra Schladebeck, John Stevenson, Trish Khoo, PLATO Testing, Keith McIntosh, KWSQA, New York Testers, Rob Lambert

“Software Testing Trends for 2017” Webinar

Many thanks to my friends at QASymphony and Tekmark for putting this together! Enjoy!

Q&A with Romanian Testing Conference

Had fun answering this Q&A session from my pal Rob Lambert for the Romanian Testing Conference – hope to see you there! Enjoy! 

Tell us a bit about yourself

I’ve worked in software quality and testing for around 20 years and head up the SQM practice at Tekmark Global Solutions. The greater part of my career has been spent in what I call “enterprise IT”: organizations built by multinational companies that use and build a lot of technology, but whose core business is not technology. I’ve run large testing programs for Barclays, Citigroup, and UBS, primarily in Investment Banking and Wealth Management.

I grew up South of Chicago but have lived and worked in London as well as Singapore and now have an office in New York (when not traveling). Right now I live in Connecticut with my long-suffering wife, 2 sons, and our three cats – Louis, Ella, and Edith.

What’s the best thing that happened to you during a testing conference?

I have made some great friends through all the conferences I’ve attended and it would be pretty difficult to tell all the funny and strange things that have happened over the years. But the best thing that ever happens is seeing people who I’ve mentored or worked with who are now giving talks, workshops or participating in conferences and making their own mark on the industry. The sense of community is a great part of the software testing industry and I always look forward to catching up with people from all over the world who want to give back to our business.

While visiting Romania, what’s that one thing that you want to do before you return?

I’m very excited about this conference as it will be my first visit to Romania, so everything will be a new experience. St. Michael’s Church and Central Park in Cluj-Napoca look lovely, and I fully expect to check out the local dining (and drinking) scene as well.

What’s the most interesting thing about you that we wouldn’t learn from your resume alone?

The first thing people probably didn’t know about me is that during college and for years afterwards I played bass in a couple of jazz bands. I had played multiple instruments since I was a kid, but settled on bass after playing in a touring band in college that gave concerts in prisons. I’ve played at all sorts of jazz clubs and festivals all over Illinois/Chicago working out standards, but now just play at home with my son when he’s practicing his cello.

A fuller head of hair and fitter pair of legs…

Something else people might not know about me is that after my first job in options operations for a financial firm in Chicago, I took over a year off to bum around Summit County, Colorado skiing, back country snow shoeing, and camping all over the Gore Range. It was one of the best times of my life and I saw some beautiful parts of the country including hiking to the 12,777 ft summit of Buffalo Mountain.

What’s your favorite ’90s tune?

Wow, there are lots of bands to choose from that era, with the obvious favorites like Nirvana, Cake, and the Red Hot Chili Peppers. The Beastie Boys’ “Sure Shot” from Ill Communication (1994) and Widespread Panic’s “Ain’t Life Grand” (1994) are tied to specific periods of my life then, but if I had to pick I would settle on a couple of “late” ’90s songs: “Ballad of Big Nothing” – Elliott Smith (1997) and “House Where Nobody Lives” – Tom Waits (1999). Probably not very cool, but I’ve never passed myself off as that… 🙂

How do you get inspiration for your talks?

The great thing about doing this job is that I get to meet lots of testers and people interested in testing, which exposes me to all sorts of new ideas. The projects and problems we are trying to solve together are an endless source of inspiration, and probably why most of my talks are experience reports. I like to speak directly about what, in my opinion and experience, has worked for me and, more importantly, what I have done that has epically failed. I don’t think you can learn anything without failing often, so I’m a big fan of sharing when I screw up.

Agile Testing Days 2016 – Beer, brats, and brains…

Just back from my first Agile Testing Days in Potsdam, Germany and have to report that I had a really fun time, met some old friends (made some new ones), and participated in some great conversations about software testing. I was there to present my keynote “Lessons Learned in (Selling) Software Testing” about my experiences trying to help large, enterprise tech organizations through agile transitions or various other “test process improvement” initiatives. I had a great time giving the talk and got a lot of good feedback from the conference as well as on Twitter:

Clearly, I owe this person money… 😀

More importantly, there was a great program to attend, with some fantastic presentations and workshops from the likes of Stephen Janaway (Emotions in Testing), Santhosh Tuppad (Security Testing), Richard Bradshaw (Test Automation), Huib Schoots and Alex Schladebeck (Storytelling), and Maria Kedemo and Ben Kelly (Testability). But probably my favorite sessions of the conference were the “new voices” that the organizers gave the stage to, including a hilarious comparison of Tinder and “agile” by Ida Bohlin, and wonderful talks/workshops by newcomer Anastasia Chicu and the super cool Ash Coleman.

Of particular interest to me was the closing keynote by Gojko Adzic on “Snow White and the 777.777.777 Dwarfs“. Since reading an article in the Harvard Business Review on machine learning, I have been researching and trying to imagine the impact on testing of the dropping cost of technology and compute power. Add machine learning into the mix and you have a powerful thesis on what test management will really be about in the future, and Gojko’s ideas in his talk on risks, exploratory testing, and devops rang true with a lot of my thoughts – stay tuned!

Aside from the very enlightened conversations about the future of testing, AI, machine learning, and the state of our industry, I enjoyed catching up with old friends like Ilari Aegerter, Mike Talks, and Lalit Bhamare. I also got to meet some new folks who I hope to keep in touch with, including my fellow Marxist Grouch(o) Kevin Harris (Hail, Hail Freedonia, Land of the Brave and Free!). Overall, I was very impressed with the conference and the organizers should be very proud of their effort, as it was a pleasure to be involved from planning to travel/accommodations to attending – I hope to be invited back some day! Cheers!

Romanian Testing Conference – 2017


Very excited to be giving a keynote and workshop at the Romanian Testing Conference on May 10-12, 2017! The following is the theme, and please register here to attend – hope to see you there!


The theme for the conference is “Thriving in testing – remaining relevant“…

With the theme I wanted to address two big ideas that have followed me through my career. 

The first is that testing can indeed be a career. A positive career. A career that can lead to great satisfaction, meaning, and personal growth. Testing can be a career and not just a job. It is indeed possible to thrive in your career as a tester, both personally and professionally.

The second idea is that testing is changing rapidly and the test community is forever changing and evolving. A growing number of teams no longer have testers in them, testers are involved more than ever in the design and requirements stage of delivery, test automation is big business, testing in live is a very real and practiced activity, testers are switching roles to product roles such as Scrum Master and Product Owner and taking with them the testing mindset, and DevOps is still the next big thing. How can testers and testing thrive in this changing world?

The goal of the conference is to cover, address and ask questions about testers and testing and how we can thrive in a changing world.  

Romanian Testing Conference is a chance to celebrate this sense of thriving, to share stories and ideas and to inspire everyone that attends to embrace the uncertainty of the software world. It’s this uncertainty and growth that makes being a tester such a challenging job.

By thriving in the world of testing you also achieve a couple of things. Firstly, you provide epic value to the company you work for and the customers of that product or service. Secondly, you remain relevant to the job market and can enjoy a successful career in an exciting part of the industry.  

So I would like to welcome a call for papers from anyone who feels they are thriving in their world of testing. From anyone who is pushing boundaries, trying new things and leading change. From anyone who is seeing strong personal growth. And from anyone who is trying to teach and inspire others to improve, embrace change and make the most of their careers.  

I’m so excited about the potential to create a seriously fun, fast paced and passionate conference where thriving in testing is at the core of everything we talk about.  

I would love you to be part of it. I would love you to see the amazing country of Romania. And I would love nothing more than to shoot the breeze over a few drinks about how you’re thriving in the world of testing.


TESTING IN THE PUB EPISODE 38 – KEITH KLAIN PART 2


TESTING IN THE PUB EPISODE 38 – MAKING BETTER TESTERS WITH KEITH KLAIN – PART 2

Welcome back to episode 38 of Testing In the Pub – Making Better Testers with Keith Klain.

In part two of a two part episode we talk to Keith about how we can help testers and the testing community to improve and keep learning. Keith spends a lot of his time promoting and educating people about better testing practices such as Context Driven Testing and the transition to better testing, particularly in the enterprise. Have a listen as we discuss this and our experiences of older and more traditional ways of working.

TESTING IN THE PUB EPISODE 37 – KEITH KLAIN PART 1


TESTING IN THE PUB EPISODE 37 – MAKING BETTER TESTERS WITH KEITH KLAIN – PART 1

Welcome back to episode 37 of Testing In the Pub – Making Better Testers with Keith Klain.

In part one of a two part episode we talk to Keith about how we can help testers and the testing community to improve and keep learning. Keith spends a lot of his time promoting and educating people about better testing practices such as Context Driven Testing and the transition to better testing, particularly in the enterprise. Have a listen as we discuss this and our experiences of older and more traditional ways of working.

If you like what you hear then you’ll be pleased to know that this is only the first part of our discussion. Check back soon for part 2.

Lessons Learned in (Selling) Software Testing – Star East 2016

Here’s my keynote from Star East 2016 in case you missed it…enjoy!

Agile Testing Days 2016

Very excited to be a part of Agile Testing Days this year in Germany! See you there!


Lessons Learned in (Selling) Software Testing

In 2013, Keith Klain quit his job as Head of the Global Test Center at Barclays Investment Bank to start a test consulting business based on context-driven and agile testing principles. Since then, Keith has been wading through industry dogma, pitching new ideas about testing to clients, hiring—and firing—testers, and trying to turn context-driven testing into a viable commercial approach. Succeeding in such a setting requires a balance of practical approaches that can drive improvements against “sunk cost” bias and decades of bad behavior by some test vendors and internal test departments. Keith’s successes and failures have validated the lessons he learned during his twenty-year software testing career and have taught him some new lessons he wasn’t expecting. Join Keith as he shares what has and hasn’t worked when talking to stakeholders about what they need vs. what they want, applying context-driven testing principles on projects that haven’t had any principles, and dealing with test case allergies and the “smarty pants syndrome.” Take back new insights into how to get things done without compromising your integrity.

Guest View: If your metrics don’t add business value, stop collecting them (SD Times)

Keith Klain kicked off STAREAST 2016 last week, and there was one line in his keynote that stuck with me throughout the entire conference: “If you can’t draw a straight line between your business objectives and your test approach, you’re doing it wrong.”

As I started to think of all of the little activities that make up part of my workday that do nothing to increase the happiness of Skytap’s customers or to help us reach our business objectives, Klain’s statement sent me into a bit of a panic.

Read more on SD Times

Dr. StrangeCareer: or How I Learned to Stop Worrying and Love the Software Testing Industry

Enjoy this in the April 2016 issue of Testing Trapeze!

“Find a problem you care about and focus on fixing that.” – Scott Berkun

Software testing is a strange business. It’s commoditized, devalued, and misunderstood; it goes through cycles of being chopped and changed, and it lives at the front lines of imminent takeover by our robot overlords. Why anyone would want to be a professional software tester is even harder to understand.

After over 20 years in this business, I’ve seen people from all stripes and walks of life wander in and out of this industry, but the ones that stick with it all have one thing in common: they are slightly nuts. Sure, they might seem sane – they are probably well read, hold a job, support a family – all the makings of normality, but inside, some part of them is just a little bit crazy.

Frankly, you’d have to be crazy to do this for a living! Testers spend their days trying to figure out what “might” go wrong by looking for ways a product is already broken – staring into the cosmic abyss of the impossibility of complete testing takes its toll on all of us. All the while competing in an industry teeming with unenlightened vendors, consultants, and “experts” undermining their own value proposition by selling “bug free” methodologies, certified super-tester training programs, and “automated algorithmic defect predictonators”.


Let’s go to a testing conference!

Further to that, a large part of our business is filled with people who lack any intellectual curiosity about our craft. Jerry Weinberg famously said “a tester is someone who knows things can be different”, but in my opinion, that seems to have been interpreted as a siren’s call for every “different” person in technology who couldn’t fit in anywhere else. I have frequently used the analogy that most enterprise “QA” departments are created by tipping the building on its side and drawing a line around everything that’s loose and rolls down to one end. On top of that, the desire to farm out software testing to the lowest bidder has been accelerated through outsourcing, offshoring, and labor arbitrage, greasing the slide to the bottom of the talent pool.

All this used to bother me.

Every couple of years or so, I seriously consider getting out of the software testing business altogether. I get frustrated by the industry and the same tired ideas being blasted into the echo chamber by a new crop of “thought leaders”. Nothing ever seems to change in our business. I have a slide in one of my presentations that I keep waiting to become irrelevant, but for years now it’s been just as indicative of attitudes about testing as it was when it was first written.


As relevant as ever…

Now I’m not saying I’ve stopped caring about professionalism in testing and managing it like a skilled craft, or all the other improvements I’ve been talking about for decades that we can make in our business. But something happened recently that made me rethink my attitude towards testing, our industry, and its place in technology. QASymphony just held its annual users conference and made the wise decision to have Scott Berkun give the keynote on his book “The Myths of Innovation”. Scott is a pretty impressive guy and he’s not a software tester, so I was very interested in the parallels he would draw between innovation and our business. I’m not going to go into every detail of what Scott spoke about (I would highly recommend you read the book and watch the video), but one thing stood out when he was talking about explorers.

Why is an explorer’s life always difficult? Because they are an explorer! Why did Magellan die while trying to circumnavigate the globe? Because he was trying to circumnavigate the globe in the 16th century, dummy! Why is a tester’s life so hard? Because testing is hard! And to be really good at it – to choose it as a career – is going to take a person who likes that type of challenge and is, well, probably a little bit nuts. People also like to believe simple narratives that tie a bow around difficult ideas and ignore the hard work of experimentation and failure essential to discovery. That explains why our industry is filled with lots of bad actors making all our lives harder, like modern-day astrologists charting easy and mystical answers to complex problems. And that also explains why it’s not going to change.


I once found a hole in your logic THIS BIG!

That, my friends, is when the Buddha started laughing. I’m never going to stop working to improve the state of software testing, but I’ve been wrong about why it’s constantly in a state of repair. It’s not because testing is the low man on the totem pole that we have these problems – it’s because it’s so difficult to do right that people are willing to believe there are shortcuts to success. The very nature of what we are trying to do is going to attract difficult people and snake oil salesmen selling star maps. Testing is hard. Doing it right is very hard. It’s an ambiguous, uncharted journey into a sea of bias and experimentation, but as the line from the movie goes, “the hard is what makes it great”.

And that’s why we love it.

The Viability of Context-Driven Testing: An Interview with Keith Klain

Summary: 

In this interview, Keith Klain, a software testing and quality management professional, discusses all the lessons he’s learned from selling software testing. He also explains why context-driven testing is viable, as well as how to discern between wants and needs.

Josiah Renaudin: Welcome back to another TechWell interview. Today I am joined by Keith Klain, a software testing and quality management professional, and a keynote speaker at our upcoming STAR EAST Conference. Keith, thank you very much for joining us today.

Keith Klain: Great to be here. Thank you very much.

Josiah Renaudin: Absolutely. Before we really dig into the meat of your keynote, can you tell us a bit about your experience in the industry?

Keith Klain: Sure. I’m going on twenty-plus years’ experience in software testing and quality management. I’ve worked at large financial services institutions all over the planet; did a long run in London and Southeast Asia, I’ve worked all over India, and I’ve been back in the US for about four and a half years now running large global testing operations and consulting businesses.

Josiah Renaudin: How difficult was it to leave such a very certain thing to start something on your own? I mean, you look at a title like “head of global test center,” that’s a big thing. What was the process in your head when you were leaving that?

Keith Klain: Right. Yeah, it’s scary. I think that’s one of the things I talk about during my keynote: what you think you know, and then what you really know once you leave a job like that. Those large, I call them, “enterprise IT jobs” are very dulling of the senses because you get, in my frank way of putting it, “soft” from your ideas being accepted, because it’s generally in people’s best interests to accept ideas.

I learned a lot about what works, what doesn’t work, in the real world by doing that. But to answer your question specifically, yeah, it’s a leap of faith. I mean, I’ve been aligned towards the context-driven testing school of thought for a while now. I really believe that it’s got the most commercially viable and best ideas on how to test software, and truly believe that it can be a commercial success, and not a lot of companies in the world have tried to do it that way. So yeah, you try and bank on yourself, but it’s a scary venture.

Josiah Renaudin: Yeah, well, it sounds like betting on yourself worked out in this case. You just mentioned context-driven testing. That’s a great segue for me because, in your mind, what do you feel are the best methods for turning context-driven testing into a practical commercial approach? And what have you really done recently to move in this direction?

Keith Klain: I think, particularly when it comes to context-driven testing, the context-aware test strategy and context-aware information that aligns your test approach towards relevant information for your business is really the most important thing that comes out of the context-driven testing world. This is kind of paired with the skilled testing movement as well; that testing is a set of skills that can be learned and practiced and developed. Those two things, I view them as related but are different in a way.

We tend to focus on the context-driven testing world, I think, a lot towards skilled testing. A lot of the people who are founders of the movement are artists and testers. The Michael Boltons, the James Bachs of the world are artists and testers. People who follow them tend to focus on the skill side to it, but there’s a lot of information and great stuff that comes out of context-driven testing that’s completely relevant to the commercial prospect of helping a business be successful, or at least make great decisions based on great information.

One of the things that I’ve tried to do in making it a commercially viable proposition is aligning test missions to the business as quickly and iteratively as possible. That’s what being context-aware is really about. We focus a lot on what do we need to know, who needs to know, when do we need to get that information to them, and using context-driven principles and skilled testing practices to generate that rather quickly.

That’s what the primary … The alternate world, where a lot of the bad practices in software testing have been developed out of this idea of a factory mentality towards testing, is really the antithesis to that. That’s where we can, again trying to carve out some of that market for us, make a very reasoned and demonstrable difference between our competitors. That’s what I’ve really tried to focus on in selling this to businesses, and using the lessons I’ve learned in selling it to Barclay’s and other organizations as well.

Josiah Renaudin: You have to have learned a lot at this point. Like you said, you have more than twenty-years of experience and you’ve worked with a medley of different businesses. How often do you run into stakeholders who have a very specific need that’s really just diametrically opposed to what they want? How do you handle these situations without pushing a client away, this kind of strange gap between what you see needs to be done and what they think needs to be done?

Keith Klain: Well, I think the presumption of, “I know what somebody needs and wants from the start,” is again where I think you run into these tendencies of wanting to apply models again and again. That’s where this factory mentality comes into quality models or maturity: that we can replicate the same idea in any context. Being of the context-driven stripe, I would come in saying … I need to ask a lot of questions to find out what they want and what they need, and help them align that. I’ve quite frequently run into stakeholders that … They need help. They don’t know how to articulate in the language that’s commercially available of software testing, so test cases, and test counts, and all the metrics, and a bad test strategy, and all the other things … They don’t know how that helps them get good information, so they’re trying to navigate that and not getting what they want or really what they need.

I run into that quite frequently because they’ve had decades and decades of being blasted by bad information from the software testing business. Some of that to me is aligning language and goals with what actual testing artifacts, deliverables, objectives should be about; once we help get that decoder ring in place we’re going to say, “Okay, well here’s what you really are looking for,” and I’ve run into this quite frequently actually, then we can figure out how we structure our test approach to get you what you need. What they think they want typically is aligned towards this kind of weird testing industry language, but doesn’t always translate into what they actually need; particularly at the approach and strategy level, we focus that.

That happens more frequently than not; particularly when you’re working with large enterprise organizations. These are non-technology companies that build tech: banks, insurance companies, telcos, those types of industries.

Josiah Renaudin: How difficult can that be, when you run into these large industries that have such deeply ingrained bad practices due to persisting mistakes or just poor testing principles? I know each company’s different, each situation is different, but how do you help change their mindset and institute these practices that you know work? Like you said again, you don’t want to come in with this almighty, “I know everything. I know the solution to your problems,” but is there a first step you have towards trying to remove these mindsets?

Keith Klain: Well yeah. I think that first step is something that a lot of context-driven testers, or people who follow the school, miss, which is becoming context-aware … Which would seem to be obvious, but a lot of times it’s missed. That’s really trying to understand, “What is going to work in this environment? What’s going to work in this context?” Your context, the biggest contributor to context, it’s people. Understanding who the people are, what’s going to work there, and what’s not, is the first step really. You’re not going to have a hundred percent success rate in every single environment, but defining what success looks like. You’re going to push an organization a certain part of the way; they’re going to have to take it the rest of the way. So understanding that environment and being attuned to it is really important, because not all organizations are going to be able to adapt new things. You have to find out how far you can push a place.

It’s funny because I’ve worked at UBS, Citigroup, Barclay’s … some fairly large organizations. I always had this feeling whenever I left there that … Even a place like Barclay’s where we had a great deal of success in implementing context-driven testing, I always wished I would have pushed harder, faster; we didn’t get as far as I really wanted to go. There’s stories that I’ve heard out of some of these organizations that I’ve left that the impact is still being felt.

Also, I think there’s an ability to propagate things after you’ve left. There’s people who have started communities in Asia that used to work for me that are now working in new organizations that I built. I think it’s a bit of a movement but leveling as best you can like me. You’ll hear me say again and again when I talk: “Managing your own expectations.” I should probably get that put on a T-shirt; it’s a bit of a motto for me. You know? Just making sure you’re managing your expectations about what you’re actually going to be able to get done.

Josiah Renaudin: Now I’m expecting you to wear a hat during this presentation that says “Managing your expectations.” That would be fantastic.

Keith Klain: That is my phrase, actually. I actually should get that copyrighted.

Josiah Renaudin: You really should. So there were two terms that stuck out when I was reading your abstract and that is “test case allergies” and “smartypants syndrome.” If you don’t mind, could you give a brief explanation of these two things so we can have at least a little bit of an understanding before the keynote?

Keith Klain: Sure. One of the things that I find … And that’s really about the kind of pedantic use of language, particularly around test case allergies. There’s nothing wrong with getting to the meaning of words, but I’ve been involved in conversations with people and testers who can’t stand the use of the word “QA,” when organizations use, or people use, the word “QA” instead of “testing,” where literally the two people are going back and forth saying, “Well, yes, we QA’ed that the other day.” And then the person responds, “Well, did you test it too?” And they say, “Yes, we QA’ed it last week.” And they say, “Okay, so when you tested it …” and just literally this kind of … I guess today you’d call it microaggressive correcting of people’s language.

Look, I understand the need for clarity of language and wanting to know what people mean, but I also tie this into, if you think you can help people test better, then a good analogy for how you should approach your job is a doctor analogy: You’re not a GP, you’re more like an ER surgeon. People aren’t coming in for well visits. If they didn’t need help, they wouldn’t have asked you there. So let’s not beat people up first over language; let’s help solve some bigger problems. If they weren’t sick, they wouldn’t have asked you for help.

Berating people because they don’t use the right language can be very irritating, and I think that leads into what smartypants syndrome is about. There’s a lot of folks who I’ve called out on this. Also new people in the context-driven testing community that James Bach refers to as “the tiger cub problem,” where you’ve taught people new skills and how to use them, and how to use language better, and a whole bunch of new techniques, but they’re like a tiger cub just feeling its claws. They go around tearing the entire house up, which, again, isn’t entirely helpful. Knowing how to use things appropriately, and where, is super important, and people can come off like smartypants. Nothing will prevent someone from allowing you to help them quicker than them feeling like you’re a smartypants.

Josiah Renaudin: Yeah. I don’t want to give away your entire keynote, but more than anything, just kind of to sum this up a bit, what central message are you hoping to leave with your keynote audience?

Keith Klain: I tie my talks into kind of experience reports, so it’s hopefully drawing some lessons from my failures and lessons that I’ve learned personally … And there are four key practices, or I guess experiences, that I want people to draw from, and that’s how to align what you do towards the business; my views on what good test management looks like; how to integrate a software testing organization into the larger corporate culture; and then what I think are the characteristics of a good software tester … Someone who is capable of doing the hard work of providing timely, relevant information to the business in a regular way, and what I think, from my twenty years in the industry, are the characteristics and traits of software testers that have that ability.

Josiah Renaudin: Fantastic. Well, thank you very much, Keith. I appreciate your speaking with us today. I’m looking forward to hearing the full talk at STAR EAST.

Keith Klain: You got it. We’ll talk to you soon.

Software Testing as a Martial Art – by David Greenlees

I recently had the privilege of writing the foreword to my good friend David Greenlees’ new book, “Software Testing as a Martial Art”. I encourage you all to buy the book on Leanpub HERE and spread the word to anyone looking for some great insights into the world of software testing. Here is what I had to say about David and his book…enjoy!

Software Testing as a Martial Art – by David Greenlees

Foreword by Keith Klain

“Knowledge will give you power, but character respect” – Bruce Lee

Integrity matters. It matters in all aspects of your life but in the software testing business, it is not only essential to your professional reputation but critical to our trade. Delivering unbiased information that is context aware is a difficult charge and one that is frequently compromised for the sake of expediency. I have worked in the software testing business for over 20 years and have managed, hired and worked with thousands of testers from all over the world. The single most important trait I have seen in the best testers on the planet is a strong sense of integrity – integrity for their work, ethics, and professionalism. David Greenlees is one of those testers.

I have had the pleasure of knowing David through our industry for several years and now the honor of working directly with him as a colleague. I can tell you that the principles and values he writes about in “Software Testing as a Martial Art” are ones that he lives on a daily basis. When David writes about adapting to survive in the environment you find yourself in, I have seen him change techniques and shape messages to his audience. He’s pragmatic, sensitive to his surroundings, and uses empathy to defuse situations and focus on problems, not people.

So why should you read this book?

If integrity is important to your development as a software tester, pragmatism is very close to being next on the list. David provides great examples of practical applications of what he has learned over his career, through success as well as failure. There are worked examples, use cases for techniques, and loads of great advice for testers at every level of experience. He also provides a great primer for entry into the Context Driven Testing world for those who share his love of community.

Additionally, aside from being rich with analogies between martial arts and software testing, my favorite part of this book is the many points David makes that testers can use as “koans”. Webster’s defines a “koan” as a paradox to be meditated upon that is used to train Zen Buddhist monks to abandon ultimate dependence on reason and to force them into gaining sudden intuitive enlightenment. How do I maintain context-driven principles while adapting to surroundings that use “best practices”? How do I develop testing skill while not creating “muscle memory” bias?

So enjoy, meditate, and most importantly take David’s advice to get away from the “bags and pads” and practice your testing skills, for as the master said – “If you spend too much time thinking about a thing, you’ll never get it done” – Bruce Lee.

Code/Interactive: THE DIVERSITY IN TECH AWARDS


Education, Infrastructure Pose Challenges for Tech in the Bronx – The Fordham Ram


October 21, 2015

By Cailin McKenna

“The discussion also focused on incorporation of minority youths into the industry. “By 2020, the demand for technology resources is only going to be met by about 60 percent by people who come from universities,” said Keith Klain, co-CEO of Doran Jones, a technology consultant firm located in the South Bronx. “There is a huge opportunity to keep those jobs in New York by providing people with alternative backgrounds access to those jobs.””

Read More

Homework – Making the Best With What You Got…

“A lot of young people no longer see the trades and skilled manufacturing as a viable career, but I promise you, folks can make a lot more potentially with skilled manufacturing or the trades than they might with an art history degree.” – President Barack Obama

When it comes to useless college degrees, according to the President, I might possibly have hit the lottery. Art. Of all the things you can study at university, art has to be the subject most often associated with useless, navel gazing, impractical pursuits of higher learning. I mean, what could you possibly do with a degree focused on creativity, communicating abstract ideas, and viewing things in their appropriate context?

And according to PayScale (Figure A), not only did I get a worthless degree, I should also land just somewhere above a barista in earning potential! Starbucks, here I come!


Figure A: Why I should be Broke

So, sarcasm aside, why didn’t that happen? Why doesn’t my resume read like a list of fast food and coffee shops? Looking back on my studies and career, there are common themes to my success (and failure). And those same heuristics apply to the people I’ve tried to pattern myself after, regardless of their degree.

Much has been made recently of whether getting a college degree is necessary in the age of information. And although I still believe that having a degree has never “hurt” my career, what you actually DO with that degree is as important as the letters on the diploma.

And as someone who has always felt that, educationally, I bring a knife to a gunfight, the most important aspect of these themes is that they are completely under my control. So to all my fellow art majors and everyone else who feels that their degree (or lack of one) is an obstacle, here is my list of attributes that got me to where I am today.

Attitude

If Woody Allen was right, and 90% of success is showing up, then I would say the same percentage applies to your attitude. I would rather work with a less skilled or educated person with a great attitude than with a “smart jerk”. I’ve seen countless PhD-level employees or otherwise over-educated candidates who can’t figure out why they aren’t getting promoted, getting raises, or just generally having more success at work. Almost to a person, the common denominator was a bad attitude.

Knowledge work is hard enough, and the problems we are often trying to solve are abstract, so let’s not make it worse by being difficult to work with. In my experience, a healthy dose of empathy, teamwork, and mucking in and getting on with it goes a very long way on the road to success. I’ve never seen much value in complaining, and if perception is reality, building a reputation as someone people want to work with is easy to do and makes a big impact.

Opportunism

I have always considered myself a “dyed-in-the-wool” opportunist. I volunteered to help start up our college tutoring program even though I wasn’t an education major (Figure B). I said “yes” to my first international move to London having never been there. I agreed to give the overview of our SQM practice performance at the annual company meeting even though I had never spoken in public before. I volunteered to pilot building a team in India at UBS.


Figure B: Evidence of Two Things

There are countless other examples of opportunities I either took or created to put myself in a situation to learn or add value to my company. Too often I see people self-select out due to fear when they should take Richard Branson’s advice: “If somebody offers you an amazing opportunity but you are not sure you can do it, say yes – then learn how to do it later!”

Hard Work

So much of our lives is down to nothing more than dumb luck. Where you were born, who your parents were, how much money you had growing up, and a whole bunch of other variables (completely out of your control) contribute a great deal to your chances in life. The longer I’m around and the more people I observe, I come to realize the large part luck has played in my own career. So if so much of it comes down to luck, what part can we play in guiding our own destiny?

That boils down to one thing for me: hard work. How hard I work is one of the few things I have complete control over. I am definitely not the smartest guy you’ll ever meet. I had a State education in a degree that doesn’t do a lot for me. But I will work harder, longer, and do more research than just about anyone you’ll meet. I am determined to do more with the limited resources I have available than the competition, and I’ve seen that this one thing has been the primary differentiator in my career.

Summary

Looking back on my career, I’ve lived abroad for almost 10 years in the UK and Singapore, travelled to 18 countries for business, worked for multiple Fortune 100 firms, and run teams of thousands of testers worth millions in budget. Now I’m the co-CEO of my own company building an outsourcing center in the US and talking about software testing at the White House! Living a charmed life? Maybe, but I like to think I am living proof for all us “art majors” that attitude, opportunism, and hard work are ultimately what’s going to make you stand out from the crowd.

Good luck and keep learning!

Paying it Forward – Why I’m a Speak Easy Mentor

The first time I spoke in public was a complete disaster.

I was working in London as a Managing Consultant running the Software Quality Management practice for the UK, and the MD of the region asked me to give an overview of the business – at the annual meeting of the entire company! Now, I had spoken dozens of times in private at project and sales meetings, but this was different, as I had never gotten up on stage to present in front of hundreds of people. But being filled with my usual unwarranted self-confidence, I readily said “of course” when asked and then set about trying to figure out what I was going to do.

Some of the preparation was easy: compiling the stats on sales, profitability, and the usual business stuff that featured in the hundreds of annual meetings I had been subjected to. But there was something different the MD had asked for in this presentation that threw me for a loop. They wanted to know my opinion on the testing industry! Never being one to shy away from giving my opinion (unsolicited or otherwise), I was well versed in spouting off about what I thought was wrong with our industry. But as I was soon to learn, talking in the pub with colleagues is a very different game (at least it was to me) from standing in front of my entire company and formally presenting my ideas.

I’m a firm believer that most of what happens in your life is down to luck and the only part you can control is how hard you work, so I did (and do) what I always do and set about researching all the things I wanted to say about software testing. Unfortunately, that didn’t seem to amount to much! Of course I had opinions, but as I tried to validate them I quickly realized that my “factory” view of testing was not easily supported by science. Panic set in. I cobbled together some stats and graphs and even chucked in that timeless classic – the rising cost of defects curve!

On the night of the event, I could feel the pressure, which had been building since the moment I woke up, reach nearly unbearable proportions as they called my name to the stage. My hands were sweaty and shaking, I couldn’t remember any of the witty things I wanted to say, and by the middle of the second slide I had completely bombed. I remember looking at people in the audience trying to read their expressions and feeling that what was going through their heads must have sounded like this:

Although my friends at the event told me that it wasn’t as bad as I thought, I could tell that my performance was lacking. So instead of giving up, I went about trying to learn everything I could about public speaking, but missed one essential trick – finding a mentor. At the time there wasn’t a great network like Speak Easy to connect new speakers with experienced people who want to help them find their voice. I could have greatly benefitted from an objective reading of my material, a walkthrough of my story, challenges to my opinions, and at the very least a private run-through of the talk in front of a friendly face. And that is what’s so great about the Speak Easy program – increasing diversity in tech conferences by having experienced speakers pay it forward.

Over 15 years later, I’ve spoken and presented in public at loads of conferences, media events, and industry forums, and over time have found my voice and confidence. But I still carry with me that moment when I was scared out of my wits waiting to go on stage all those years ago. Hopefully through this program I can share those lessons learned (and others) with new speakers and let them know that everyone started somewhere!

Tech Heads to the Bronx – Marketplace

Love this story on the Doran Jones UDC in Marketplace! Listen to the story here

Tech Heads to the Bronx

Hoping to take advantage of a growing trend to bring IT jobs back to the U.S., a technology consulting firm is setting up shop in one of the poorest neighborhoods in the country, hoping to create a viable business and serve a philanthropic purpose at the same time.

That neighborhood is the South Bronx of New York, where within a two-mile radius there are five large housing projects and where 38 percent of the population lives below the poverty line, according to the 2010 Census. It is the poorest congressional district in America.

The company trying to inject tech jobs and spending power into the neighborhood is Doran Jones. Keith Klain is the company’s co-CEO. Klain spent more than 20 years setting up IT operations overseas. Now, he’s trying to bring some of those jobs back to the U.S. and into the South Bronx.

He joined Doran Jones from Barclays, where, until February of 2014, he ran their Global Testing Center. Now, he’s building a 45,000 sq. ft. facility in a nondescript building just across the river from Manhattan — the U.S. financial capital where a wealth of finance firms and other businesses are potential clients for the IT quality testing business he is starting up.

“This is a viable business. We’ll transform this neighborhood with real tech jobs,” says Klain.

The most important element of his plan, and what sets it apart from some of the other tech start-ups and incubators who have moved into the Bronx area to take advantage of one of the few areas of New York with relatively cheap rent, is a partnership with Per Scholas.

Per Scholas is a non-profit workforce training center. Klain set up courses at the center specifically tailored to teach the kinds of skills he needs from entry-level workers. He has an agreement in which Doran Jones funds the free courses, and promises to staff 80 percent of its workforce with Per Scholas graduates.

Per Scholas also gets 25 percent of future Doran Jones profits. They even share a building.

Angie Kamath, executive director of Per Scholas, says the unique arrangement is an opportunity to change the dynamic in the South Bronx neighborhood.

“It’s going to, I believe, really kickstart and show other firms that they can, too, locate in what are traditionally underserved areas,” Kamath says.

“Part of this program is to give people an entry into a career that they wouldn’t ordinarily have gotten,” says Klain, “There is an overlooked population here that is a very rich source of tech talent.”

One of Klain’s most recent hires is Cochrane Williams, 37, who used to be a photographer with sporadic income. “I needed a career change. I needed something stable. I have a daughter. So I needed to also think about that,” Williams says.

Like other entry-level workers who will be hired by Doran Jones through its partnership with Per Scholas, Williams has a starting salary of $35,000 with benefits. While that income does not go far in one of the most expensive cities in the country, entry-level Doran Jones employees will get an automatic raise to $45,000 in a year, and to $55,000 in two years.

“For a lot of our folks who are coming in and their last wages were $15,000, this is pretty life changing,” says Kamath. Many in the neighborhood hold minimum wage, or close to minimum wage jobs, she says, such as security guards or retail workers.

Williams says his starting salary was not his only consideration in deciding to join Doran Jones. He sees it as an investment in his future. “This is basically getting in on the ground floor. And you get to grow with the company. There is nothing more solid than that, in terms of trying to establish a career,” says Williams.

Doran Jones and Per Scholas are also hoping to get in on the ground floor. For them, that ground floor is a growing movement to bring IT jobs back to the U.S.

A number of firms have sprung up across the country to lure lucrative IT contracts away from foreign firms. Their pitch: that for certain IT jobs, being located near a business’s time zone, for instance, could be beneficial. They also can point to inefficiencies in the current outsourcing model: the need for lots of travel, or the hiring and relocation of middle managers to supervise far away employees.

Ron Hira has been studying the trend of IT “onshoring.” He is a professor of public policy at Howard University and author of the book Outsourcing America. There are a number of small companies around the country, most with a few hundred workers, he says, that are trying to win away IT contracts from foreign firms (which can have workforces in the hundreds of thousands).

“I’d say this is a small blip right now. But it has the opportunity to become a serious market niche, as much as 15 to 20 percent at some point,” Hira says.

The key will be for U.S. companies to grow beyond employing hundreds, says Hira. That goal faces hurdles such as tax incentives that unintentionally favor ‘offshoring’ by allowing companies to retain their profits untaxed overseas, he says.

“I’ve been approached by multiple other cities in the country that are looking at this as a kind of a case study: can this be done?,” says Klain.

Klain will open the doors of his new Bronx technology center in March. He has 15 clients lined up, and hopes to initially hire 150 people — and eventually, 450. Also, he says start-ups have already approached him about leasing space in his new center. A small sign that his hoped-for urban renewal of the South Bronx just might come to fruition.

Featured in: Marketplace Morning Report for Friday, February 6, 2015

Diversity in Tech – Making the Future Today

This is an article I wrote for the January 2015 issue of Women Testers…you can read the entire magazine here…enjoy!

“The future belongs to those who believe in the beauty of their dreams.” ― Eleanor Roosevelt

I am a product of my environment. I have benefited from a lifelong positive model for diversity, starting with my mother, to my wife, multiple bosses, friends, and industry colleagues. Strong, intelligent men and women who inspire and challenge me, and make me think differently about who I am and how I see the world, have surrounded me for as long as I can remember. I am grateful for that experience, but I realize that not everyone has had the advantages that I have enjoyed. As well, part of the social contract, as Elizabeth Warren says, is to “take a hunk of that and pay forward for the next kid who comes along.”


Having spent twenty years in large, multi-national companies working on countless Human Resources exercises trying to work out why diversity is such a problem, I can tell you, Einstein’s view on “the same level of thinking” reigns supreme. To crack a “problem” as large as the one some of these organizations face, complete reinvention is required – something that most individuals, let alone 1000+ person workforces, cannot easily accomplish.

Increasing diversity in technology has just about entered the phase of Corporate Social Responsibility (CSR) where everyone and their brother (pun intended) has started some initiative to increase their footprint with some underrepresented group. Looking to the future, this trend seems to be increasing with CSR commitments becoming the new standard to govern decisions from whom to do business to where we want to work.

Diversity versus inclusion…

In my experience, the trajectory for identifying, hiring, training, and developing people in large companies is so protracted that adding another source of candidate flow is nearly doomed from inception. Moving the needle on something so large and pervasive as the lack of diversity in technology requires a complete rethink of the issue at hand, and that means changing the game.

Unfortunately, the real problem with companies is not the lack of diversity but the lack of inclusion, regardless of what the workforce looks like. Inclusion, according to the Harvard Business Review, is how you “create an atmosphere in which all people feel valued and respected and have access to the same opportunities,” and inclusion is where the diversity rubber hits the road. I’ve sat on multiple senior executive boards discussing the progression towards our targets in a room composed entirely of middle-aged, white men. Worse than that, two and three layers deep into the org charts the demographics looked exactly the same – and no amount of target setting is going to change that fact.

So while setting targets to increase your diversity footprint may have some merit, in my opinion and experience, if that’s your approach – you’re doing it wrong.

You’re doing it wrong…

At the New York Tech Talent Summit last year, I was on a panel discussing workforce development and our work at Doran Jones with Per Scholas. During the Q&A I was asked what I thought could be done to increase the amount of women hired in technology. My answer: create new companies that make diversity an underpinning of their business model. There are clear benefits of a diverse workforce from marketing, to culture, and strategy, so as far as I’m concerned, the problem is not with the workforce – the problem is with the companies.

According to this Forbes article, that idea might not be as far off as it seems. The authors feel that building diversity and inclusion from the inception of a company is the quickest way to address the divide, as a “startup, short on history but long on seeking the best talent, provides a good platform for establishing an inclusive organization and work environment.” My view is that like everything else in the corporate world, when market scrutiny increases on CSR and potentially crosses into Federal regulation, companies that have a gap will come looking to “buy” a solution anyway.

There is a demographic sea change happening broadly across the workforce and technology is subject to the shift. In my opinion, companies that build diversity into their DNA and have inclusion as a principle instead of a target will have the best chance to be successful in the new economy. The beautiful part of my dream for the future is that you won’t have to worry about changing an organization to match reality – because those that don’t will no longer exist.

Life as Reinvention – Scott Berkun

Absolutely love this post by Scott Berkun that was inspired by my experience with the masters of reinvention at Per Scholas…enjoy!

Life as Reinvention – Scott Berkun

“There is a list of sayings on a whiteboard near my desk that I can’t help but notice several times a day. It contains ideas I try to remember, things I forget are true and important about the life I want to have. Near the top of the list is this one: you could be dead. It makes me laugh every time I see it, for reasons I can’t entirely explain. The part I know will make the most sense to you is how when we’ve been alive for awhile, we forget what being alive means. We slide into a paper cage of our own habits and forget that with a little effort we can slide our way into new habits too. I can stand up whenever I want. Or sit down. Or put on some music, or close my eyes and lose myself in silence. I could dance, scream, stand on my desk, or anything I choose to do. Anyone can do an infinite number of different things, small and large, in this or in any moment as long as they are still alive. But I forget. We all forget. We live many of our waking moments asleep in a dream of our own invention, a dream of boredom and regret that we don’t even enjoy. We become familiar with our favorite memories and allow ourselves to believe the feeling of familiarity is an acceptable replacement for investing in the life we have today.”

Link to full article

Urban Onshoring: The Movement to Bring Tech Jobs Back to America – WIRED

It’s Not Altruism. It’s Business.

“The Urban Development Center lends this whole urban on-shoring concept some serious street cred, primarily because of a man named Keith Klain. Klain is the co-CEO of Doran Jones, and the driving force behind the center, but before that, he spent years as the head of global testing for Barclay’s Capital, traveling the world setting up and managing software testing operations in India and Kiev. For Klain, bringing these jobs back to the US is not just altruism. It’s business.”

Link to full article

Can Software Testing Save the World?

Don’t worry about people stealing an idea. If it’s original, you will have to ram it down their throats. – Howard Aiken

If you have followed my career, you know that last year I left a job that I loved to start a testing practice with Doran Jones. Besides wanting to finally build the software testing business that I’ve always wanted based on the principles of Context Driven Testing, I also had an opportunity to turn the work I’d been doing with Per Scholas through the Software Testing Education Program into a functioning consultancy. Through working with my partners at Per Scholas, I have had a good look into the current efforts to fight poverty through increasing skills and education, as well as how funding works in the public sector for organizations like Per Scholas.

I have intentionally not spoken in public about the non-profit world and my experience in dealing with the public sector because I want Doran Jones to be judged by the quality of the work we do – not the social story of STEP and the UDC. But recently, thanks to my pal, Anne-Marie Charrett, I got to give a talk at the Australian Computer Society about our partnership with Per Scholas, and I realized that I do have something to say – and this is my meager attempt to put some of those ideas to pen and explain how I think we at Doran Jones are doing our bit to change things for the better. Bear in mind, I am NOT an expert in public policy, so all ideas, generalizations, and conclusions are my own and based entirely on my brief (but very intense) experience in the world of workforce development, non-profits, and public funding.

I firmly believe that to effectively fight and break the cycle of poverty and meet the growing demands for a technically skilled workforce, there needs to be a fundamental shift in the current efforts of the private and public sectors. In my limited public sector experience, the focus of workforce development programs has been primarily on training for open roles that are being advertised by companies, resume writing, and other career assistance services. In concert with this, there is a concerted effort to affect “outcomes” by influencing policy, the legal framework through which organizations conduct themselves, and Corporate Social Responsibility.

While those activities are important and vital to solving systemic problems related to the cycle of poverty in America, they do not address the immediate needs of employers and potential employees in regards to getting a job and building a career. It is my belief and experience that workforce development programs need to address three activities: core skill development, re-engineering of Human Resources, and public sector pressure on contractors and consumers of their services, to successfully meet the demands of employers.

Per Scholas STEP Graduation

STEP Graduation at Per Scholas

Information technology is a dynamic industry with rapidly changing language and ideas. Too many IT training programs focus on a shallow understanding of current technology trends and not on the development of core skills like design, critical thinking, communication, and creative problem solving. The Doran Jones Software Testing Education Program (STEP) provides a practical understanding of working with technology along with hands-on experience testing software in real project environments. Our focus on strategy, heuristics, and communication gives students core skills and the confidence to do the job regardless of the technology environment or industry.

Corporate Human Resources departments are the gatekeepers for the flow of candidates onto hiring managers’ desks. Workforce development programs must be integrated into the process and, I believe, become part of the core HR operating model. We have successfully adapted STEP to map directly onto client hiring processes, thereby expanding their choice of candidates for testing jobs, and have integrated workforce development as a core HR process at Doran Jones. As we expand our approach to new clients, I believe that workforce development will not just become an extra source of candidates, but will replace a significant part of the traditional HR operating model.

Lastly, pressure must be applied from the public sector to make sure private entities that contract with and use public utilities and services employ US citizens who enter the workforce through non-traditional means. Doran Jones is working with the White House Office of Science and Technology Policy and the Department of Veterans Affairs to integrate STEP training into their Accelerated Learning Program pipeline, to make sure that organizations are incentivized to work with workforce development programs. In addition, our UDC model will create hundreds of IT jobs in the US (500 to start in the Bronx) that would ordinarily be staffed offshore, and we are negotiating partnerships with three other cities to deploy the same model and put Americans back to work in IT careers.

Having spent hundreds of hours in meetings with foundations and City, State, and Federal Government agencies to talk about how we improve outcomes for workforce development, I believe this is a model that can move the needle on the non-systemic causes of poverty. I look forward to learning more and working with my partners at Per Scholas to see if software testing really CAN save the world – or maybe just our little part of it.

Thanks!

ISO 29119 Roundtable Discussion – Part III

The debate around ISO Standard 29119 has intensified after James Christie gave his talk at the Association for Software Testing conference this year, which started a movement organized by the International Society for Software Testing and a petition to stop the publication of ISO Standard 29119. The following is a transcription of a roundtable discussion between veterans of the software testing industry to set the context for the opposition.

PART III

Keith Klain (KK) – So why has the opposition to 29119 only recently started? The standard has been in development for years now, so why all the recent action regarding it?

Iain McCowatt (IM) – That’s James’s fault!

Michael Bolton (MB) – I actually have a theory about that. I’ve been grousing about this for several years, and other noisy colleagues of mine have been as well. It’s when the quiet ones speak up that you start to get some attention. I think there was just a confluence of community that was galvanized by his talk. I think James put the hammer right on the head of the nail when he brought up the idea of “rent seeking”, and the appeal to fear to a community that really hadn’t recognized this message.

But the simple answer is, we’ve been busy and we have been talking about it quietly, and James put a match to a pile of newspaper. It was ready to happen, and it finally did happen.

KK – So James, was there anything in particular that caused you to light the pile of newspapers you were sitting on?

James Christie (JC) – It’s something I’ve been thinking about and writing about on and off for the last couple of years, but at EuroSTAR in Gothenburg last November there was a short discussion just on the exhibition floor. Stuart Reid was defending standards and he kept on shifting his ground, saying they weren’t compulsory, it was up to clients to decide whether or not they wanted it, it was up to individual companies.

And I thought it was just so evasive; he was pushing something that I thought would be damaging and he was doing it in, I thought, a rather evasive way, not addressing the real concerns. So when I got home, I wrote a blog about that and followed up on it, and Keith suggested that I might put in a talk on a similar basis, so I thought about it and did that. So it wasn’t something that I actually planned, but once I started off thinking about what I should be doing after that discussion in Gothenburg, it all just fell into place.

MB – By the way, I’ve had that same conversation with Stuart Reid myself in 2007, 2008, 2010, and 2011, and it always got the same results.

JC – I think it’s important to stress that the opposition’s been there for a long time, but it’s only just got organized. I think it’s because people have realized that it’s not enough just to oppose; it requires a level of organization, otherwise the opponents can be dismissed as irrelevant, disaffected individuals. It’s important that there is some consistent campaigning from organizations, and that it is organized rather than just individuals.

IM – I think all of the pieces were in place and a lot of people’s heads were in the right place when it comes to opposition to the standard, but James provided the “call to arms” we needed.

Ilari Aegerter (IA) – James was certainly the trigger for the higher intensity of the opposition to 29119, but it wasn’t the start, just an intensification of the effort. This is also a result of the organized nature of the opposition, and therefore the perceived intensity shift is not a start but a continuation.

KK – So Iain, can you tell us a little more about the petition?

IM – So during the Q&A after James’s presentation at CAST, I asked the question: what do you think we can do about it? And actually, Karen Johnson threw a “red card” at that point and spoke briefly about things we could do. And it was the conversations that stemmed from that, that created the petition.

One of our major objections to the ISO standard is the way in which it’s being produced: by an exclusive club that hasn’t taken into account the views of the wider testing community about how to do testing and how to do it well. There is no consensus behind the standard, so our petition calls upon ISO to withdraw the first three parts of the standard – those that have been issued – and to suspend publication of the remaining two parts, on the basis that there is no consensus and that there is sustained opposition to the publication of these standards.

IA – I wanted to add that the International Society for Software Testing is helping to organize the opposition backing the petition against the standard.

James Denman (JD) – I’m wondering, if there could be a standard for testing, what would it look like?

Griffin Jones (GJ) – It would probably evolve out of guidance.

KK – To draw an analogy as well, at some point these will link, and we are starting to see the beginnings of that, but it’s like the tester certification movement and how this ties into it. And this is the logic I hear all the time, this “well, what is your alternative?” right? You need to have an alternative to foundation level certification, so what is it? And my answer to that is, you absolutely don’t. I don’t think we need to create another shallow, simple test to try to counteract the negative effects of the current certification scheme.

Other than the commercial argument that’s been made, which is really the insidious part of this, because we are talking about a commercial organization that is creating a market for selling their services, and a whole cottage industry of consultancies around it to sell certifications. When you get down to brass tacks, that’s the only real argument I can see that’s viable in this, and I don’t feel compelled to create an alternative to that.

IM – I don’t think there is a need for a standard; what there is a need for is testers who are dedicated to improving their skills. Testers who are dedicated to inventing new ways of testing and trying them out. It is a very, very young profession; it’s far too early to be imposing a standard, and if we do so, what we will do is stifle innovation, and everybody’s going to suffer.

MB – “What’s your alternative to bad” is always a dodge. It’s a non sequitur. What’s your alternative to something unnecessary? “Nothing” is fine. “Nothing” is absolutely a viable answer to that. We don’t need a standard because it’s not something that fits for precisely the reason that hackers and bugs and testing problems don’t come with a standard, and so there cannot be a standard response or a standard way of defending against them.

KK – And Michael, that’s an excellent point, because what you’ll see – and this is the shallowness of the arguments that they make FOR the standard – the burden is NOT on the people they are trying to impose the standard on to say why they don’t need this. The burden should be on the people trying to impose the standard: what are the commercial, industrial, and regulatory reasons, what is the actual purpose, what is the business case for why this is required?

And so far to date, I’ve seen absolutely no reasonable or viable argument as to why this is needed. So the burden is not on the people who are going to be regulated – the burden is on the people who are going to be doing the regulating.

MB – In addition to your list, Keith, we should add the societal benefits. What are the societal benefits, and what are the societal costs?

IM – Where are the arguments, where is the evidence that will stand up to any kind of scrutiny at all? There is none.

JC – They aren’t even pretending that there is any evidence. They are trading on the ISO brand name, the fact that it is a standard, and that means many people’s response will be, “well, it’s a standard, that must be a good thing, what is your justification for rejecting it?” They are able to turn the argument around; they don’t feel they have to justify themselves because they are ISO.

GJ – This may just be a case where, it’s inappropriate to apply standards in this particular case, in this particular set of industries, in this particular field. If the ideas being proposed are so good in the standard, quite frankly, publish them as guidelines. Demonstrate through sustained commercial efforts that they have broad applicability and commercial success and appropriately deal with risk. If they are successful, they will be adopted and everyone will accept them. But they (ISO) don’t choose to go that path; they use coercion or implied coercion.

JC – Agreed.

MB – The mechanism by which it (29119) is created is odious. One of the responses to “why is this (opposition) only happening now” is that it takes an incredible amount of funding and time to participate in this. That favors two things: it favors people who are going to make money off the consulting and certification services for this standard, and it also favors people who are in it for the long haul. This has dragged on for six years; it has taken them that long, and essentially the opponents to it die of attrition. They are out of the decision-making process because, who has time for this? And the convener is willing to keep it that way; they will wait it out because there are bucks in it for them.

GJ – So it’s how this was formed, it’s how it’s being imposed, and it’s the details – other than that, we like it!

ALL – (Laughter)

GJ – I started out when 829 first came out; it took seven years to get people to finally acknowledge, “829 was kinda nice in theory, but when we tried to do it, it wasn’t practical and we got all screwed up”. It took our company two or three years to get unwound from that, and I don’t want to go through that again – I already did that.

And this one’s (29119) even worse, because its scope is so much larger, and its implied hubris – well, it’s not implied, it IS hubris – that it applies to all software development across the world. I thought it was ridiculous on the face of it and that it would die of natural attrition. But I was wrong.

JC – 829 just dealt with documentation, 29119 deals with that and the processes too. It is explicitly more ambitious and damaging.

IM – I’ve worked with a lot of client organizations where they’ve attempted to standardize in one shape or form, whether it’s by adopting something like TMMi, or by establishing some kind of internal process or internal standards that they’ve rolled out across the organization. And in pretty much every case, the process and the documentation took over.

JC – That is exactly the topic of my talk at EuroSTAR in Dublin this year. It’s about two projects I worked on – they were the same project. I was the test manager and I ran the documentation to make sure we got paid by the client. I made sure their documents got through the quality gates, everybody was happy, we got the money, and my deputy was doing the real testing, which bore no relationship to the paperwork at all. They were essentially two different projects, and the testing had gone underground.

IM – And here is something that is cited as making testing more efficient and more effective – it’s laughable.

########################################

Michael Bolton is a consulting software tester and co-author (with senior author James Bach) of Rapid Software Testing, a methodology and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure. Michael has 25 years of experience testing, developing, managing, and writing about software. http://www.developsense.com

Iain McCowatt is one of the founders of the ISST, and the author of the petition to stop ISO 29119. His day job is as a director in a bank, helping large enterprise IT programmes to solve complex testing problems and gain insight into the quality of their software. http://exploringuncertainty.com/blog/

Griffin Jones is an agile tester, trainer, and coach, who provides consulting on context-driven software testing and regulatory compliance to companies in regulated and unregulated industries. Owner of Congruent Compliance, Griffin has been participating in software development for over twenty-five years.  http://www.congruentcompliance.com

James Christie has over 30 years of experience in IT, covering development, IT audit, information security management, project management and testing. He is now a self-employed testing consultant based in Scotland. http://clarotesting.wordpress.com/

Keith Klain is the CEO of Doran Jones Testing and has over 20 years of multinational experience in enterprise-wide testing programs. Keith has built and managed global test teams for financial services and IT consulting firms in the US, UK, and Asia Pacific. www.qualityremarks.com

Ilari Henrik Aegerter is President of the International Society for Software Testing where he wants to bring back common sense into testing and oppose wasteful practices. He has been in software testing for the past 10 years, most of the time as a manager of testers. http://www.ilari.com/

James Denman writes, edits, and manages the production of content for SearchSoftwareQuality.com. His job is one part editor, one part reporter, one part copywriter, and three parts whatever else needs doing. http://searchsoftwarequality.techtarget.com

ISO 29119 Roundtable Discussion – Part II

The debate around ISO Standard 29119 has intensified after James Christie gave his talk at the Association for Software Testing conference this year, which started a movement organized by the International Society for Software Testing and a petition to stop the publication of ISO Standard 29119. The following is a transcription of a roundtable discussion between veterans of the software testing industry to set the context for the opposition.

PART II

James Denman (JD) – So is the problem that ISO 29119 is standardizing wrong, or is it that there can’t be standards that work?

Michael Bolton (MB) – Well, certainly the former. Imagine for example – and this is a metaphor that comes to me from James Bach – imagine that somebody in the restaurant business declared that there was only one kind of food. One kind of cuisine, we’re going to mandate one kind of cuisine. I’m wondering how the other cuisines would react to that, and I also wonder how diners would react to that.

This (ISO 29119) is supposed to cover all software development projects, and the veiled threat that showed up in one of the promotional pieces was that if you’re working on your own in a garage, then this shouldn’t affect you, but otherwise it should. Well, that to me is mischief.

I run a web start up, a little start up company, and I am about to sell services to a large institution that takes regulation and standardization very seriously. Well, first of all they don’t discern very well whether this is a good standard or not, but should the way I work in my small outfit, with my small number of programmers and even smaller number of testers – should I displace the goal of producing high quality software with the goal of producing volumes of documentation and following a process model that doesn’t fit the way we actually build software? It doesn’t make sense to me.

Griffin Jones (GJ) – To elaborate upon your point, Michael, there’s goal displacement also if you look at 29119. I think what’s happening is the test group is attempting to impose its standard across all the other software development disciplines. How can you implement 29119 without impacting product development, the developers, the project management people, and all the other roles creating the product?

And yet, did they involve any one of those other roles in the creation of their standard? Did they address those other stakeholders’ concerns? I don’t see it.

Keith Klain (KK) – That’s really an excellent point, Griffin, and I tried to pick up on this earlier, but the interdisciplinary nature of developing software means you cannot put testing in a box and treat it as a kind of factory. That creates a big problem when you look at testing as something that can be isolated, and it can have a disastrous effect on, one, the quality of the software you are trying to produce, but also on all the other things happening around it – wasting money, creating useless documentation, goal displacement for the testing group, etc.

It has a much wider implication than just testing, and I’m waiting for when the software development or agile community is going to join the fray on this, because you cannot implement this standard (29119) without having a profound effect on development, design, project management – it just can’t be done.

Iain McCowatt (IM) – I think a more realistic outcome Keith, is that testers simply become irrelevant. Imagine a test team saying, we’re now following 29119 and you developers can’t do agile anymore because we won’t be able to test it. You’d be laughed out of the organization.

GJ – The analogy I like to use is, since testing is about finding information, it’s the nervous system of the body. So if 29119 suddenly implements a process that doesn’t allow the nervous system to integrate with the rest of the body, something bad is probably going to happen to the body.

James Christie (JC) – But isn’t there a danger that some big companies or maybe government departments will insist on 29119 compliance, because it’s the only standard they can fix on – there isn’t a development one they can focus on – so they will insist that suppliers be compliant with it, and therefore development would have to comply with it? I noticed a LinkedIn discussion the other day where a test manager was bemoaning the fact that they were having to comply with 29119 because their customer was insisting on it.

Ilari Aegerter (IA) – There are other examples of that in the certification business. In Switzerland, the country I’m from, many large corporations, many of them in the financial and insurance sector, actually won’t allow people into a tester interview unless they are certified. And I anticipate the same thing will happen with 29119, just on a larger scale, because people are willing to refer to something they don’t understand, even though it won’t do the job very well.

MB – I’m not sure that any of you have seen this:

(Image: ISO 9000)

ALL – (Laughter)

KK – I think this is one of the big dangers here, James (Denman), bringing this back full circle to your point about whether 29119 is bad or standards are bad, and I think it’s a bit of both. What you’ll find is that the companies that have the ability and resources to make themselves compliant to the standard will spend all their time doing that. You’ve particularly seen this in the public sector, where people are compliant to the standard and deliver terrible services.

I’ve seen this as well in the financial services industry, where they don’t know anything about the actual work, but can see that we ticked all the boxes. So it looks like, “can I show that I’ve met the standard of what a test case is meant to look like?” Absolutely. Can I show you I ran thousands of them? Absolutely. Was that testing worthwhile? Was it good testing? Did we get any interesting information? Well, that’s a much harder question to answer, and the standard will not in any way help answer it – in fact, it has a strong possibility of masking the answer and clouding your judgment, because a standard, particularly in knowledge work, gives the appearance of being good.

IM – That’s right, Keith, it’s a placebo. Testing can be hard and testing can be complex, but a lot of people who buy testing want it to be straightforward and simple and the kind of thing you can wrap up in a document. That’s the real need that I see fulfilled: it (29119) takes something that’s hard work and involves understanding people and their needs and desires, and it tries to reduce it to something that fits on a page in a process model.

And that sells, because a lot of the people who buy this stuff don’t want to understand it, they want simplicity, they want an easy answer.

GJ – It sells also because it works – up to a point.

JC – A major part of my objection to 29119 is because of my background not just in testing, but when I worked as an auditor and also as an information security manager. When I worked in security, one of the banes of my life was auditors and the ISO 27000 family of security standards. The internal auditors would come in and they would expect a common standard-driven approach to security, and one of my clients was a pharmaceuticals company that was dealing with the FDA and another one was a high street retailer that had to move really fast.

And the auditors would refuse to recognize the importance of the context the client was in. They were focusing on the ISO standard and the internal variants on the standard. It was a pain for us and was extremely damaging for the client too, especially the retailer, who found that key technical staff were being tied up in gold plating security work, particularly against irrelevant risks. They weren’t available to help the business move fast in the marketplace, and so it was creating real commercial risks by addressing irrelevant IT security risks.

When I worked as an auditor – it’s a difficult job, because you have to audit areas with which you’re not familiar. And a standard offers something reassuring to cling on to; auditors are always looking for a benchmark, a basis for their audit, and a standard can provide that and give them easy answers. But it doesn’t tell them the right questions they should be asking in a particular situation.

Auditors do cling on to standards, and it stops them growing. It keeps them as poor quality, inexperienced auditors who are just running through the same script over and over again, and the way I see ISO 29119 being pushed, it’s appealing to that school of poor quality and diminishes the role of the good auditor.

GJ – Hear, hear!

IM – It’s interesting to note, James, that you mentioned security standards, because there is no standard for hacking. Nor is there a standard for writing bugs in applications.

MB – Right! You read my mind, Iain!

ALL – (Laughter)

IM – It amazes me that whilst it’s obvious that you can’t have a standard for putting bugs in, the difficulty of standardising finding them isn’t seen as similarly obvious.

MB – I’m sure there’s a committee working on a hacking and a bug making standard, Iain.

ALL – (Laughter)

KK – Now that’s a standard I could get behind!

JC – One of my colleagues, a fellow security manager, made a very revealing comment. It was a very cynical comment: he said our job was not to protect the clients, it was to protect our company. It was to protect our company’s reputation so that we had our backsides covered if there was a problem. It wasn’t to protect the client. And there is a similar sort of mindset with the 29119 lobby: we’re selling peace of mind. It won’t get you better software, but you’ll be bulletproof when it comes to the investigation of why things went horribly wrong, because you were following an internationally agreed standard.

GJ – It creates an appearance, but I assert that it’s a papier-mâché shield for the organization. When the bad thing eventually happens, it won’t matter.

#####################################


ISO 29119 Roundtable Discussion – Part I

The debate around ISO Standard 29119 has intensified after James Christie gave his talk at the Association for Software Testing conference this year, which started a movement organized by the International Society for Software Testing and a petition to stop the publication of ISO Standard 29119. The following is a transcription of a roundtable discussion between veterans of the software testing industry to set the context for the opposition.

PART I

Keith Klain (KK) – Thanks everyone for joining, I wanted to set some context for the discussion for people who might not be familiar with ISO 29119 or the petition to stop its publication. James, you gave a talk about 29119 at CAST this year which spawned the recent movement, can you give us a brief overview of your thoughts on the standard?

James Christie (JC) – What lots of people object to is that it’s an attempt by one faction to define itself as being the very embodiment of responsible, professional testing. The ISO working group have effectively defined those who disagree with them as irrelevant at best and – by implication, when you see some of the marketing material that’s pushing ISO 29119 – those who don’t comply as irresponsible and unprofessional.

If enough big companies and governments insist their suppliers are compliant, then it’s going to force testing down to a common and low standard; it will just be a commodity that is easily bought and sold. It will be low quality, low status work, the sort of profession that I don’t want to work in.

Iain McCowatt (IM) – And there’s precedent for that, in terms of standards getting this kind of adoption. If you look at ISO 9000 as the example, that was pushed heavily by the UK government. The Department of Trade and Industry made grants to companies in order to help them get registered through ISO 9000, and it soon became something that you needed to register through in order to sell to the government, to the Ministry of Defense, and to many other companies.

It became a cost of doing business for many companies if they wanted to do business in Europe, and it was seized on by Oftel as part of their regulatory regime for telcos in the UK. Ultimately, there were more than 1 million companies registered worldwide by the early 2000s. So standards can be seized on by politicians and regulators and pushed down people’s throats in that way.

Michael Bolton (MB) – NIST in the United States is thoroughly cognizant of that too. In their communication, they refer to the fact that sure, standards are voluntary and not mandatory, but that gets slippery when a regulator refers to them as an example of guidance documentation, a recommendation of guidance documentation, or a mandate. NIST is aware of that, and it’s interesting to see the extent to which they are concerned about it or not. The specific words are:

“Still another classification scheme distinguishes between voluntary standards, which by themselves impose no obligations regarding use, and mandatory standards. A mandatory standard is generally published as part of a code, rule or regulation by a regulatory government body and imposes an obligation on specified parties to conform to it. However, the distinction between these two categories may be lost when voluntary consensus standards are referenced in government regulations, effectively making them ‘mandatory’ standards.”

I’ve certainly seen clients who are basically victims of goal displacement where extra time and extra energy is spent conforming to the standard and that’s a distraction from the work of actually doing testing.

Ilari Aegerter (IA) – For those testers who are wondering why they should care about ISO 29119: imagine you were a ballet dancer, and the “ballet standardization body” had decided that you now have to wear a straitjacket for your performances – and they just happened to be in the straitjacket selling business. I believe the 29119 standard will have much the same effect. So if you were a ballet dancer, would you object to this standard? Of course you would, and in the same way, a software tester needs to object to the impositions of the standards working group.

Griffin Jones (GJ) – The FDA has been really very good in terms of not propagating and imposing standards on our particular industry. And I find it really interesting that at the moment, the FDA’s been reluctant to cite external standards around software; they are really letting the industry develop and observing what’s happening instead of attempting to standardize around a particular set of documents or practices.

KK – Do you think that is directly about the ISO standard, or can you expand on that a little bit?

GJ – I'm in a strange place, because my regulator is saying "whoa, let's not standardize right now, let's observe what's happening in the industry, and write guidance documents that open up the possibility of other practices", as opposed to ISO 29119, which is attempting to impose a single way of doing things across all industries.

MB – Interestingly, the FDA is in the position of supervising stuff that might rapidly kill people.

GJ – They're also personally and organizationally responsible to a political body and the public for their choices and actions, both in what they choose to regulate and what they choose not to regulate. That degree of responsibility, I think, reshapes the way they make their choices, as opposed to ISO, which does not have that degree of responsibility.

IM – Griffin, do you think their (the FDA's) stance would change if there were another large-scale public failure?

GJ – That is the pattern with the FDA: they tend to be driven by large, public failures, and failures of that kind drive new regulation. However, they move relatively slowly and with deliberation, so I wouldn't expect a knee-jerk reaction.

KK – To add to that point, I've managed dozens of regulatory audits in financial services, and just recently, in the last two to three years, I've started to see the IEEE 829 standard, particularly in Asia, used as a template for reviewing our test approach. So when we had failures, particularly high- or medium-profile ones that would get picked up by the regulators, they would come in and audit us using it as a template for review. I've actually had one regulator bring the standard printed off to the review, so seeing its expansion with 29119 gave me real cause for concern.

IA – Well, one thing is that it's expansive but restricting at the same time. If you are using agile, then just goodnight: there is no way such a machine for producing piles of paper would ever fit into the agile mindset, which wants to eliminate exactly that sort of wasteful behavior, the production of procedural stumbling blocks. So it is expansive but also very restrictive, and it doesn't fit many of the contexts that are to be found in the software testing business.

IM – To Keith's point, I've worked with a lot of clients, and I've worked with a lot of project and program managers who take the view: well, there is a standard, why don't you use it? And I think that's exactly the kind of reaction we would expect to see among politicians or regulators if there were a disaster, or would expect to see play out in court if an organization were sued over the quality of its software. If it came to light that they hadn't tested within "internationally agreed standards", I think that could be potentially quite damaging for them.

JC – That's something they have been pushing explicitly in their presentations, this appeal to fear: what would you do if there were big problems and you were asked why you didn't follow the international standard?

IM – It is the classic “fear, uncertainty and doubt” strategy.

KK – That's an interesting point though, because if you look at something like Healthcare.gov, or any other high-profile failure that impacted the public from a non-safety-critical perspective, there hasn't been a backlash or a swing towards this kind of stuff. I was half expecting it with Healthcare.gov, but it hasn't happened yet. So although I think fear is a strong driver for how they are presenting this, so far we haven't seen it play out.

MB – Well, they move so slowly when they are changing stuff; it takes them a couple of years to catch up to this sort of thing. One of the things that's really interesting: if you look at the presentation materials, they've remained remarkably consistent over the last six or seven years, to the degree that one wonders if they've been paying attention to anything that happens in the marketplace at all.

They don't actually cite this stuff, they don't cite disasters in particular; they just have this sort of vague, ominous "well, what IF you run into some kind of disaster". And I think part of the reason for that, perhaps, is that some of the companies who are promoting the standard so aggressively are exactly the ones that failed.

KK – Well, I've brought this up before about Healthcare.gov: of the two vendors that built the system, one actually has a quality and certification practice to help your business meet standards. They are at some level of CMM or TMM maturity, and both advertise full compliance with international standards on their websites; it's one of their marketing angles. And clearly that's proven to be completely useless when it comes to actually managing testing.

GJ – A lot of these big disasters are not actually software failures per se, or software testing failures. If you dig into James Reason's books about gigantic organizational failures, you discover that it's not really the technical people at the pointy end of the stick that are the issue; it's the large organizational, cultural factors that are causing the processes and the people to behave in ways that ultimately lead to these types of disasters.

KK – That’s a really interesting point, Griffin, so what you’re basically saying is that adherence to process and standards can drive dysfunctional behavior in an organization.

GJ – Absolutely, it creates the appearance of compliance and safety. The organization becomes lax, and what happens is, in the Swiss cheese model, one of these opportunities for disaster occurs, goes through all your safety systems, and no one catches it, and you end up with planes falling from the sky or the chemical plant emitting a toxic plume. But these are not technical failures per se; these are really organizational issues, and what we are seeing from the technical side are merely symptoms, not causes.

IM – This is one thing that really scares me about standardization, this implicit assumption that the process will save us, and that if the process is good, the software will be good.

GJ – When I talk to executives, senior executives, especially risk management people, they are in those positions because they understand that those systems are fallible, and they are always asking "how will the plant blow up?" even though everything looks and appears to be good. When you talk to senior executives, they get that – hopefully. Middle management and the organizations at the operational level tend to have less awareness of the fallibility of their processes and systems, and they tend to believe what their gauges and dials and processes say.

JC – This is really interesting, the way this conversation has developed. Have any of you come across the Cynefin framework? It's a bit too complicated to go into now, but it helps us make sense of a problem by deciding which of four spaces in a quadrant it fits into: Obvious, Complicated, Complex and Chaotic. There's a huge temptation to think our problem belongs in the Obvious space, where everything is predictable and ordered, when our problem is really Complicated or Complex. The danger is that you just fall over into the Chaotic zone. Standards are a good fit for the Obvious zone, but not for the others, and the others are where most software is developed.

KK – To shift gears a little bit, what do you all think about some of the arguments made in favor of the standard like common language across industry and methodologies?

MB – This whole common-language trope is really quite silly, because what it ignores is that different people from different disciplines come together on a project: the subject matter experts, the program managers, the development managers, the developers, the testers. They are all bringing different domain languages every time, certainly when it comes to the business they are doing.

So what if there is a common language for testing? The programmers and the project managers will not have had the ISO standard training, and who's to say how they should speak? It's at the stage where it's as if they are trying to declare a world common language for testing. OK, let's make it Hindi! Let's make it Chinese!

IA – The standardization of language has a history of failures, and it also addresses a non-problem: the clarification of what you mean is done in human interaction. That actually happens very efficiently if people are paying attention, and having an external body dictate some sort of terminology that isn't even known to everyone involved is just a completely failed attempt at something for which there is no need.

KK – I hear this same argument from people who are trying to centralize testing or create a "center of excellence", and from vendors who are interested in consolidating everything; this common-language drive comes up quite a bit. And frankly, in my twenty years of doing this, I have never once seen a project have a problem because we didn't all use the same testing terminology. That is a phantom problem, which leads into a secondary issue here: why should people outside of testing care about this standard?

IM – I think, Keith, the bigger problem will arise from people thinking they are speaking the same language when actually they are not. If you look at part one of the standard, the level of language it's defining is really quite trivial. Let's face it, we can all pretty much agree at a very trivial level what system integration testing is, and we'd all happily use the term. But I've run into all kinds of problems when people have different assumptions about what they mean by system integration testing: which systems are they talking about, what integration points are they talking about, and so on and so forth.

MB – But wouldn't a standard help? That's the devil's advocate question: wouldn't a standard language help in that circumstance, and if not, why not?

IM – So, Michael, how does a standard language help define the scope of system integration testing in any particular project I have in my portfolio? It doesn't, because the standards people have absolutely no concept of my context, or what we're trying to achieve, or what systems we are integrating. We can only work that out by actually working together as people, working through it to get some common understanding around a particular project.

MB – Here's a justification I heard from one consultant, and honestly, this is true. She said to me, "When I go into a new company they are using all these different terms, and it's so hard to figure out what people are saying." My response to her was: boy, if that's your biggest problem as a consultant, you're going to face an awfully rough ride.

##############################################

Michael Bolton is a consulting software tester and co-author (with senior author James Bach) of Rapid Software Testing, a methodology and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure.  Michael has 25 years of experience testing, developing, managing, and writing about software. http://www.developsense.com

Iain McCowatt is one of the founders of the ISST, and the author of the petition to stop ISO 29119. His day job is as a director in a bank, helping large enterprise IT programmes to solve complex testing problems and gain insight into the quality of their software. http://exploringuncertainty.com/blog/

Griffin Jones is an agile tester, trainer, and coach, who provides consulting on context-driven software testing and regulatory compliance to companies in regulated and unregulated industries. Owner of Congruent Compliance, Griffin has been participating in software development for over twenty-five years.  http://www.congruentcompliance.com

James Christie has over 30 years of experience in IT, covering development, IT audit, information security management, project management and testing. He is now a self-employed testing consultant based in Scotland. http://clarotesting.wordpress.com/

Keith Klain is the CEO of Doran Jones Testing and has over 20 years of multinational experience in enterprise-wide testing programs. Keith has built and managed global test teams for financial services and IT consulting firms in the US, UK, and Asia Pacific. www.qualityremarks.com

Ilari Henrik Aegerter is President of the International Society for Software Testing where he wants to bring back common sense into testing and oppose wasteful practices. He has been in software testing for the past 10 years, most of the time as a manager of testers. http://www.ilari.com/

James Denman writes, edits, and manages the production of content for SearchSoftwareQuality.com. His job is one part editor, one part reporter, one part copywriter, and three parts whatever else needs doing. http://searchsoftwarequality.techtarget.com

The Petition to Stop ISO 29119

“Dissent is the native activity of the scientist, and it has got him into a good deal of trouble in the last years. But if that is cut off, what is left will not be a scientist. And I doubt whether it will be a man.” – Jacob Bronowski

If you care about craftsmanship in testing, or about software quality in general, stop reading this and go sign the petition to stop ISO Standard 29119 from being published in its entirety. If you need more information about why you should sign the petition, watch James Christie's dismantling of standards in general from his talk at AST's CAST 2014, “Standards – promoting quality or restricting competition?”

Continue reading

Per Scholas ROI Award

I was incredibly honored to receive the “Person of the Year” award at the 2014 Per Scholas ROI Dinner. Here is the video of what we are doing at Doran Jones and the UDC, and my acceptance speech. Thanks to everyone for a great night!

“Thank you very much. I am extremely honored to receive this award for what really has been a team win. From the mentoring program to the field studies, a lot of people have volunteered their time, tools, and advice to make the Software Testing Education Program happen, and I’d like to take a moment to thank a few who were instrumental in putting this program together.

James Bach and Michael Bolton, who created the Rapid Software Testing methodology, my colleague and friend Paul Holland, who is the principal trainer for STEP, my pal Lorinda Brandon at SmartBear, Josh Lieberman and Vu Lam from QA Symphony, James Lindsay from Workroom Productions, Peter Shih at uTest, the Association for Software Testing, Jerome Dazzel and Tiernan Walsh from Per Scholas, and my good friend Gerry Rajesh from Barclays.

And lastly, I’d like to thank two people without whom none of this would have happened: Joe Squeri from Citadel, who as CIO of Barclays encouraged me to get involved with Per Scholas; and my wife Samantha, whose work with the long-term unemployed in the UK was the inspiration for the STEP program, and without whose enduring patience and seemingly endless suffering, I wouldn’t be here today.

I started working with Per Scholas about a year ago, and in that time, in getting to know the students and the wonderful people that work there, I’ve spent a lot of time thinking about second chances. Second chances at an education. Second chances at a career. And as we’ve heard tonight, second chances at starting over.

Reflecting on my own life and career, I’ve come to realize that I have been the recipient of more second chances than I’ve ever deserved. And what I’ve so often taken for granted is that with every one of those second chances has come an opportunity. And not just an opportunity to learn from my mistakes, or do things differently, but an opportunity to prove that I can do the work. And the reason STEP, and I believe Per Scholas, are successful is that with every second chance the program provides, the students get an opportunity to show what they can do.

Working with Per Scholas has been an unbelievable experience, and in many regards, STEP has already been a greater success than I could have imagined and we’re only getting started. I sincerely thank my friend Plinio and the team at Per Scholas for this recognition tonight, but more importantly, for doing what you do best, and seeing the second chance in me. Thank you.”

Why We Love Testing – SmartBear

SmartBear asked software testers what they love about testing – because we love it too.

Special thanks to Peter Varhol, JeanAnn Harrison, Keith Klain, Lorinda Brandon, Nick Olivo, Scott Barber, Mark Tomlinson, Dave Coleman, Alex Bartol, and the Infinio QA team.

Resources:

http://smartbear.com

http://www.ministryoftesting.com

TestBash Speaker Series – Keith Klain


TestBash is just around the corner now, and to celebrate we’ve got a series of interviews with our illustrious speakers. Next up, Keith Klain:

Please tell us who you are and where you’re from. My name is Keith Klain and I’m a Yank who grew up around Chicago, but has lived in London and Singapore for years…currently residing in CT and working in NYC.

What does your day job look like? Well, I recently left my job at Barclays running their Global Test Center to start up a new testing practice focused on context driven testing, so right now, every day is something new. When I was running the GTC, my day was made up of meetings, calls, operational and budget stuff, and trying to spend as much time as possible with my testing teams.

When was the last time you actually tested something? What was it? Can you share your approach/thinking/methodology? It’s been a while since all I did for a living was test software, but I try as much as possible to get into projects and stay as close as I can to the approach. I make a habit of working at least twice a week with teams or individuals at their desks, or attending their meetings, to see how we are testing and whether it’s aligned to the principles and values we talk about. I made a decision about 10 years ago to focus a lot of my time on building and running enterprise-wide testing organizations, as I saw that as a pretty big gap in the industry. I also spend a great deal of my time coaching and training test managers, because I have a very specific view of that role and their responsibility as caretakers of others’ careers.

How are you/do you/have you observed testing changing over your time in the industry? I don’t think testing has (or should) change that much; I believe people are slowly getting educated as to what testing actually is and how to do it properly. Sadly, a lot of companies’ approaches to quality haven’t changed much either since I started in the business 20 years ago, but I like what I am seeing in certain communities, and I consider myself an activist in that capacity.

How are you changing testing? I’m trying to say my piece about how to align testing to your business objectives and get the word out about skilled testing or context driven approaches. Having employed probably thousands of testers in my career, I think I know a little about what it takes to be successful in my context, which I define as enterprise IT departments. I am also very passionate about alternative entries into technology careers through outreach and training, hence the involvement with Per Scholas.

How did you get started speaking? My first speaking gig was delivering the practice overview at the UK branch of a consultancy I was working at in front of the entire company! I was so nervous I was shaking before I got on the stage, and it was horrendous! Needless to say, after that performance I threw myself into learning about public speaking and started volunteering to speak in front of people whenever I got the chance. These days, I am very relaxed before I give a talk, but I still practice regularly and try to have high level points I want to get across while keeping it conversational and a little loose.

When was the last time you did a talk to a non-testing crowd? What was the reaction like? If you include presentations at work, I talk to non-testing crowds almost more frequently than testing crowds. I have also done a lot of client/press interviews for banks who want to hear about our approach, spend, etc. The reaction I get is mixed; some folks couldn’t care less about testing, so I try to keep it relevant and topical so they can see I know what I’m talking about.

Any tricks or lessons in talking/teaching/coaching about testing? Yes. Be authentic, be honest, and talk a LOT about your own personal failures. People are afraid of appearing confused or sharing their own fears, so I usually start right into stories like, “Did I tell you about the time I took down half the stores for a retail operation I was testing for?”, or any other of my fantastic fails. That sets the tone for openness, and then you can get down to the real work of figuring out how to improve.

Who has inspired and influenced your testing career? What sources have informed your testing philosophy? I am inspired by loads of people who I’ve worked with, the people who are in the trenches doing the work, sorting it out every day. That’s why it’s so important to me to spend as much time as I possibly can with the test teams, so I can see firsthand how it’s going, what the issues are, and it makes me a more effective leader. In my opinion, a lot of test managers in our business want to get as far away from testing as soon as possible, and I think that’s a huge mistake.

As far as influences on my testing philosophy, obviously the context-driven school and all the leaders in that community have had an impact on me, but if there were one person I would highlight, it would be Michael Bolton. As much as I rant and rave at that poor man, he has such a calm, methodical approach to sorting out problems, and he is one of the smartest people I have ever met – I always say he’ll forget more about testing than I’ll ever know :)

What do you love about testing? It’s the exploration of the unknown, and it contains the secret to unravelling life’s mysteries and your own problems.

What do you hate about testing? Its domination by lazy thinking and carpetbagging snake oil salesmen.

What advice would you give aspiring testers? A good tester to me is humble, curious, honest, and knows how to construct an argument in the classical sense. My advice to anyone wanting to be a great tester is question everything, read A LOT, and get involved in the CDT community. Even if you don’t subscribe to everything that the CDT community believes in, it is a great place to debate, sharpen your arguments, and learn. It can be a bit intimidating at first, given its reputation for rigorous debate, but I have never seen a group of people more genuinely concerned for the betterment of testers.

How do you relax when you’re not bug-hunting? Ha! Relax? I have a job that keeps me busy and on the road a lot, so with having two boys (ages 7 and 4), most of my time off is split between the back yard, Legos, and sleep!

Testing Circus Interview – December 2013


Had fun with this interview with Testing Circus…enjoy!

1. Tell us about your journey to becoming a software tester. How did it start and how this has been so far? Was it planned or by accident?

I think the start of my testing “career” was when I joined a company called Spherion, which had a Software Quality Management practice that specialized in testing. They had written a methodology, training, and a support network you could tap into for advice and mentoring. Their approach was basically the V-model, and very rigid, with lots of documentation filled with wonderful stuff like “phase containment” and test case counting.

Working my way up through the ranks from test analyst, to automation engineer, to test manager, to practice director, I had to learn all that stuff well enough to go into the business side of running a testing practice. That’s very helpful now, as I know the arguments for factory-style commoditized testing inside and out; I’ve used them all!

2. When did you realize your passion was software testing?

My passion for testing has always been there, but I think the biggest shift I’ve seen in my approach to testing and managing testers came at Barclays, when it really hit me that we are in the knowledge business, not manufacturing. I think that is one of the most common (and harmful) mistakes that testers and people in IT make when it comes to testing.

Managing people who use their brains to creatively solve problems takes a complete paradigm shift in how you communicate with and motivate them. The mistakes I’ve made in the past were not giving people enough autonomy to get their work done, and not removing fear from the organizational structure. Fear is like an odorless, colorless gas that seeps under the door, and before you know it, everyone is asleep. In all honesty, I’ve found that being more transparent with people on strategy, operations, finances, etc. has actually made my attrition rates go down!

That runs directly counter to the prevailing HR policy of telling people what YOU think they need to know to try to manage them better. My policy is tell them everything and let them manage their own expectations.

3. Do you regret being associated with software testing today? Given a chance would you move from testing to any other field in IT?

Regrets? Never! You regret the things you don’t do, so I would never say I regretted getting involved in software testing. If I had to do things differently, I’m not sure what it would be from a career perspective. I have always been an opportunist, whether it’s moving to London to start up a testing practice despite never having been to the UK or moving my whole family to Singapore when I joined Barclays.

I’m also one of those people who really love our business, so I haven’t ever wanted to do any other role, and I’m fortunate enough to have a great job that allows me to dive into things when I want, whether it is technology, tools, people, or operational stuff.

4. You recently received the Software Test Professionals 2013 STP Luminary Award. Can you briefly go over what that award is?

The award, aside from being flattering beyond belief, describes a luminary as “someone who has inspired others by their actions and the results of those actions on the profession”. Every person nominated for the award has contributed a tremendous amount to the software testing industry, and I am grateful to be counted among their ranks. I am also fortunate to be one of those people who love my industry and have a great job where I get to work with talented colleagues who inspire me every day, so this award would mean nothing without their contribution.

5. You are on the board of the Association of Software Testing (AST), can you describe your role and how you plan to help expand the association?

I am an Executive at Large for the AST, which basically means I am part of the Executive Committee, which recommends and approves actions on behalf of the members for initiatives, spend requests, budgets, and general proceedings. I am also the chair of the Grant Committee, which reimburses local volunteers who are doing good things for the software testing community that align with AST’s mission. You can request funds of up to $1,000 USD for local meetings, peer conferences, workshops – basically anything that contributes to the wider testing community. I tell everyone who will listen to apply for a grant, as it is a great way to develop local communities, and it has been a terrific success for the AST in supporting our mission.

http://www.associationforsoftwaretesting.org/programs/ast-grant-program/

6. You have created local AST communities in Singapore & India. Can you briefly tell us about some of the challenges you had creating these communities?

The greatest challenge is getting people to attend anything beyond their daily work life, as everyone is so busy! I generally find that the testing community wants to do more for themselves and the industry, but finding the right balance that doesn’t take them away from their families and still adds to their career is tough. The meetup I started in Singapore benefited greatly from the organizational skills of the team we had at Barclays, and it also used funds from the AST to get off the ground. India has a vibrant CDT community, but it is stretched in terms of locations, so it feels fragmented to me. Getting any meetup going is hard work, but it is infinitely easier with the use of social media, as anyone who watched me develop the Singapore contact database via LinkedIn could tell you!

7. Are you planning on creating any more AST communities or meetups? If so, where?

I would love to start more meetups and always welcome the opportunity to support local communities with what little “internets” stardom I can provide. I really believe that the power of local folks getting together and forming connections with each other is the key to changing things in the testing industry. Right now, I don’t have loads of time on my hands between my day job, the AST, and my current commitment to conference talks in 2014, but I try to be transparent about when I’m in town, so let me know and I’ll try to turn up!

8. You have started a petition against ISTQB. What is that about and why did you start that?

Firstly, you have the right (and I believe a responsibility) to ask questions of any company or person who is trying to sell you something or presenting themselves as experts. I started the petition because I wasn’t getting any answers from the ISTQB board or their vocal emissaries in the industry. My understanding was that there were concerns that a psychometric study of one or more of their tests showed that the “test reliability coefficient” did not reliably prove students were competent in the syllabus. As well, a very senior person in the ISTQB/ASTQB was allegedly signaling issues with over 100,000 certifications that had already been issued!

I still have not gotten any answers from the ISTQB, and if our industry is going to improve, we have got to start raising our expectations of the leadership in the software testing community. The ISTQB and ASTQB (and a whole host of others) have for far too long acted like, to paraphrase Ralph Nader, a “sacred cow feeding the public a steady line of sacred bull.” I don’t know how anyone calling themselves a tester, or who cares anything about the hundreds of thousands of people employed in the software testing industry, could take a position that questions shouldn’t be asked.

9. What is the alternative to ISTQB? If there is no better alternative (to ISTQB) which is accessible to thousands of testers worldwide, ISTQB is going to grow.

Frankly, I don’t feel it’s necessary to have a “better alternative” in the same model as the ISTQB. There is a wealth of information available for people to develop the skills and relationships to become skilled testers. I think the scheme run by the ISTQB and the partners who sell its training is fundamentally flawed and will never become a standard for defining skills in testing. I meet thousands of testers every year, and a great deal of them have taken the ISTQB Foundations course, and by a large majority, I find they don’t feel the exam made them a better tester. Seriously!

So if the goal was creating a shallow certificate for passing multiple-choice tests, used as a keyword search tool for the recruitment of commoditized factory testers? Great job, folks! Success! The core issue is that developing skill and knowledge doesn’t scale. It’s not easily packaged and sold to naïve COOs, so I don’t care about creating an alternative to nonsense – we should just call it what it is.

10. According to you, what is lacking in today’s commercialized software testing industry, especially in test management?

An over-reliance on what I call “operational test management” is a huge problem in our industry. I define “operational test managers” as managers of managers, or people who just look at spreadsheets all day to report on metrics and SLAs (I lovingly refer to them as the “coloring in” team). They give the perception of work without actually doing any, and they contribute very little to the overall conversation about quality and what happened during testing.

There are deeply vested interests in keeping those roles around, primarily due to the massive amount of outsourcing testing has been subjected to over the last decade. Vendors promote them as a way to help scared onshore managers understand and manage what they are doing, but ultimately, they encourage bad behavior and inject a load of dysfunction into teams. If I could get rid of anything in the software testing industry, it would be all those terrible metrics scorecards, and then all the roles that manage them.

11. What has been your biggest challenge in software testing? How did you overcome it?

Education is one of the biggest challenges due to stereotypes and ingrained bias developed from decades of bad metrics programs, flawed maturity models, and low-value testing. Testers have to take responsibility for their own contribution to the problem as well, as we can reinforce a lot of those perceptions by how we conduct ourselves and inherently limit our value. When I created the BTS University, it was a big step in the right direction for realigning our goals as testers to the business, the objectives of testing (information), and defining the skills needed to redefine our value.

I believe that if you want to drive change in an organization and get congruent action from culturally and regionally diverse teams, you have to focus on what you are contributing to the problem first, articulate your values and principles to give people a lens to view their work, then develop strategies that are aligned to the business you support.

12. What qualities will you look for in a candidate when you want to recruit someone for a software testing job?

In my experience, the best testers are honest with themselves and others, can speak in stories that tie things together, approach life with humility, and their passion inspires those around them. That’s all built with a healthy dose of self-reflection: admitting you made mistakes, sharing information, apologizing when you’re wrong. In addition to that, I look for people who study more than the testing industry to broaden their skills and knowledge, especially when it comes to objectivity.

As a Test Manager, you might feel it is your job to “provide an objective view of the quality of the build”, which is a perfectly reasonable position to take, especially if you believe that testers are stakeholders in the company and invested in the success of the project. I would assert that being invested and maintaining objectivity are not mutually exclusive, and in fact, to function properly in your role, it is crucial to mind and manage your own bias. I believe that testing should provide an objective view of the quality of the build – where I differ is my view on WHO should be forming (and communicating) that view.

13. What will you suggest to people who want to join the IT industry as software testers?

A good tester to me is humble, curious, honest, and knows how to construct an argument in the classical sense. My advice to anyone wanting to be a great tester is question everything, read A LOT, and get involved in the CDT community. Even if you don’t subscribe to everything that the CDT community believes in, it is a great place to debate, sharpen your arguments and learn. It can be a bit intimidating at first through its reputation for rigorous debate, but I have never seen a group of people more genuinely concerned for the betterment of testers.

14. Name a few people you would like to thank, people who helped you directly or indirectly in your career as a software testing professional.

Directly would be James Bach, Michael Bolton, Paul Holland, and Pradeep Soundararajan, and indirectly would be Michael Larsen, Griffin Jones, Anne-Marie Charrett, Jerry Weinberg, Harry Collins, Daniel Kahneman, and Robert Austin. I also have to thank the thousands of testers I’ve worked with over the years and learned far more from than I ever taught.

Wait…what? – Tales from the Testing Dark Side

Always do sober what you said you’d do drunk. That will teach you to keep your mouth shut. – Ernest Hemingway

I recently gave a talk at the EuroSTAR conference in Gothenburg, Sweden about how I feel you can re-frame the perception of your testing effort in your organisation. A big part of the philosophy underpinning my approach is to be honest, frank and up front about what is, and is not working with yourself first – then address anything that comes after that.

Part of my talk is about how bias and perceptions are formed, and I take several (hopefully humorous) pokes at the software testing industry to illustrate my point. I feel strongly that we accept far too much nonsense and too many unverified claims about software testing, and in order for there to be fundamental change, those narratives that bounce around the echo chamber of testing conferences, vendors and blogs have to stop.

Well, I am sad to say that if the state of the proposals I have reviewed recently is any sign of the state of our testing union, we are a long way away from that change. What follows are statements, claims, and general nonsense from a series of test practices (I will not name names to protect the guilty) from firms who represent…wait for it…over 140,000 software testers in the world! In no particular order, here are the things they said about my organisation (without any knowledge of how it runs):

  • Reduction of testing cycles by 15% year on year
  • Committed productivity gains of 23-39%
  • 30% to 40% reduction in TCO
  • Increased testing productivity by 22%
  • 15% decrease in production defects
  • 27% increase in time to market
  • Attain Level 4 benchmark in TMMi in 2 years
  • Quality & Productivity improvement of 20-25%
  • 22% increase in cost savings
  • 20% cost efficiency improvement through standardization


But here are the particularly egregious claims that enter into the realm of pure, unadulterated “WTF??”, including an appearance by my mythical “orthogonal defect predict-o-nator”!

  • Improvement in the defect removal efficiency from 93% to over 97%
  • Zero Defects in production (Severity 1 & Severity 2 )
  • 90% detection of valid defects in SIT and UAT
  • Decreased testing costs by ~50%
  • Increases to 50% average automation coverage as an industry benchmark
  • Defect prediction model that would predict defects in different stages of testing based on historical data…
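
For anyone unfamiliar with the jargon in that first claim, here is a minimal sketch of how “defect removal efficiency” is conventionally calculated – my own illustration with made-up numbers, not anything taken from these proposals:

```python
# Illustration only: "defect removal efficiency" (DRE) as it is conventionally
# defined -- defects found before release as a percentage of all defects
# eventually found, before and after release. Hypothetical numbers throughout.

def defect_removal_efficiency(pre_release_defects, post_release_defects):
    """Return DRE as a percentage of all defects eventually found."""
    total = pre_release_defects + post_release_defects
    if total == 0:
        raise ValueError("no defects recorded; DRE is undefined")
    return 100.0 * pre_release_defects / total

# "93% DRE" means, e.g., 93 of 100 total defects were caught before release:
print(defect_removal_efficiency(93, 7))   # -> 93.0
print(defect_removal_efficiency(97, 3))   # -> 97.0
```

Notice the denominator includes defects found after release – which is exactly why committing in advance to moving DRE “from 93% to over 97%” is fortune-telling: you can’t know the denominator until long after you’ve shipped.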

This is all good fun, and as skilled testers it is easy for us to dismiss these claims, which reminds me of the quote by Neil deGrasse Tyson, who said, “The good thing about science is that it’s true whether or not you believe in it.” Unfortunately, these companies represent a large part of the global testing population and collectively have a very loud and prominent voice in how people perceive our industry. And this doesn’t even get to the really nasty stuff in how they underrepresent what we do as testers and cheapen our value.

Now, I know these proposals are a lot of marketing material that anyone with even a rudimentary set of critical thinking skills could dismantle. But consider this, before my talk in Sweden, I took a walk around the vendor expo before anyone was there, and wrote down a few vendor claims that I shared with the conference during my keynote. After the laughs died down, all in all, there was a fair amount of similarity between those claims and the proposals I’ve seen recently.

As testers, we should be validating the claims people make about their products, and as an industry, we can (and should) do better at accepting responsibility for the narrative and perception of our value. Clearly, others are telling that story on our behalf.

Software Test Professionals – 2013 STP Luminary Award

“This is the true joy in life: Being used for a purpose recognized by yourself as a mighty one, being a force of nature instead of a feverish, selfish little clod of ailments and grievances, complaining that the world will not devote itself to making you happy. I am of the opinion that my life belongs to the whole community and as long as I live, it is my privilege to do for it what I can. It is a sort of splendid torch which I have got hold of for the moment and I want to make it burn as brightly as possible before handing it on to future generations.” – George Bernard Shaw

I was recently awarded the Software Test Professionals 2013 STP Luminary Award, which aside from being flattering beyond belief, was a great honor given the incredible group of nominees. Every person nominated for the award has contributed a tremendous amount to the software testing industry, and I am grateful to be counted among their ranks. The award describes a luminary as “someone who has inspired others by their actions and the results of those actions on the profession”. I am fortunate to be one of those people who love their industry and have a great job where I get to work with talented colleagues who inspire me with their quiet “illumination” every day.

I couldn’t possibly accept this award without acknowledging some of those people who have been instrumental to my success over the years, including all the great testers I have met through the Association for Software Testing and Barclays, but especially James Bach and Michael Bolton. We are products of our influences, and James and Michael have advised, challenged, and encouraged me to speak up and push harder for excellence and skill in testing, for which I owe them a debt of gratitude.

I would also like to thank Plinio Ayala and all the team and students at Per Scholas, whom I am fortunate to have met and have the privilege of working with. Any contribution I have made to their cause pales in comparison to their daily heroics to end the cycle of poverty. I also have to thank Joe Squeri for encouraging me to get involved with Per Scholas, and my friends Paul Holland and Lorinda Brandon for their tireless effort and work on the STEP program. The team we have assembled truly believes we are changing the way software testing is taught, supported, and valued by the industry, and I am humbled to be able to spend my time with this incredible group of people.

Finally, I would like to thank everyone involved at STP for putting this award together and to all the people who voted for me. I am sincerely touched by the kindness and spirit of community in the software testing industry, and look forward to seeing you all soon.

Thanks again,

Keith Klain

EuroSTAR Community Spotlight


Had fun answering some questions for the EuroSTAR Conference Community Spotlight…enjoy!

Where are you from? I grew up around Chicago IL, but currently live in Connecticut, just outside of New York after spending around 7 years in London and 2 years in Singapore.

Where do you work? Barclays Bank

Can you tell us how you got involved in testing? I think the start of my testing “career” was when I joined a company called Spherion, which had a Software Quality Management practice that specialized in testing. They had written a methodology, training, and a support network you could tap into for advice and mentoring. Their approach was basically the V-model and very rigid, with lots of documentation filled with wonderful stuff like “phase containment” and test case counting. Working my way up through the ranks from test analyst, to automation engineer, to test manager, to practice director, I had to learn all that stuff well enough to go into the business side of running a testing practice. That’s very helpful now, as I know the arguments for factory-style commoditized testing inside and out – I’ve used them all!

How many times have you been to EuroSTAR? Twice, but a long time ago. I used to be more involved in the “public” testing industry as a consultant, attending and speaking at conferences, etc. But around 2001-02, I became very disillusioned with the whole testing industry. Maturity models and certifications were really coming into their own then, and while I couldn’t articulate it at the time, it all felt shallow and distracting – almost anti-intellectual. As well, the testing conference circuit is unbelievably boring, with the same people saying the same things over and over and over again, so I receded from public life, stopped attending conferences and just focused on building my own teams. I think Michael Bolton has done a terrific job in putting together an incredible program for EuroSTAR this year, and I am really excited to be attending.

What’s your favourite hobby? Right now, most of my down time is spent working on the Software Testing Education Program (STEP) with a great non-profit called Per Scholas. If you want to learn more or get involved, please take a look and let me know as we could always use some help.

Have you any advice to give to a young tester or someone just starting their testing career? A good tester to me is humble, curious, honest, and knows how to construct a good argument. My advice to anyone wanting to be a great tester is question everything, read A LOT, and get involved in the CDT community. Even if you don’t subscribe to everything that the CDT community believes in, it is a great place to debate, sharpen your arguments and learn. It can be a bit intimidating at first through its reputation for rigorous debate, but I have never seen a group of people more genuinely concerned for the betterment of testers.

If you could do a project with one other tester/developer/programmer who would it be? I’m very lucky to work with some brilliant testers like Tony Hall, tester extraordinaire Iain McCowatt, Leah Stockley, and Kshitij Sathe – even Lalit Bhamare from Tea Time with Testers works for me – so I’m not short on talented people in our team, and I train regularly with testing superstars James Bach, Michael Bolton, and Paul Holland. If I had to pick, I would say the people I would love to do some work with are Moolya and the Kung Fu Panda of testing, Pradeep Soundararajan – and we will at some point! Courage!

If you were stranded on a desert island what 3 things would you like to have with you? My wife and two sons, but if I didn’t want to strand them with me, I would have to say a copy of the complete works of Alexandre Dumas, a case of 25 yr old Macallan, and a large box of Partagas Lusitanias…might as well go out in style!

What is your favourite motivational quote? It has to be this one from Theodore Roosevelt, “It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows the great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.”

Know Your Role! – Being Invested and the Art of Objectivity

“We absolutely must leave room for doubt or there is no progress and no learning. There is no learning without having to pose a question. And a question requires doubt. People search for certainty. But there is no certainty. People are terrified–how can you live and not know? It is not odd at all. You can think you know, as a matter of fact. And most of your actions are based on incomplete knowledge and you really don’t know what it is all about, or what the purpose of the world is, or know a great deal of other things. It is possible to live and not know.”  – Richard Feynman

Recently I gave an interview to Duncan Nisbet from Let’s Test, which ranged in topics from my role at Barclays, to the work we are doing with Per Scholas, to my talk on “Testing for Confidence” for EuroSTAR. Thomas Hulvershorn had a great comment and observation on Facebook about the idea of providing information without weighing in on release decisions:

“There is one thing I can’t get my head around. I totally get that if your mindset is geared to establish confidence in a build, you’ll miss the point of testing. You are basically checking not testing and on top of that you are hoping that you don’t find bugs. You put that very nicely in your talks and I take that on board.

I do however struggle with the ‘we provide only information’ bit.

In my field of operation it is expected of me to provide a Sign Off recommendation purely from QA point of view. Basically outlining, what has been tested, do we think we did enough testing, what bugs were found, what bugs were left open and then as a final verdict how confident we are that the build is ready to go live.

I can’t really see anything wrong with that tbh. I feel it is our job as Test Managers to provide an objective view of the quality of the build and to say: Good to go from a functional and non functional stability point of view.

Who else should make the call if not us?”

Excellent questions and very valid points. I will try to discuss these issues in turn and break his question into parts to make my response coherent, but this should really be discussed in an online forum of some sort. Also, I would like to add that Michael Bolton has (as in most topics), covered this specific topic much more extensively than I, and is a great source of information in general.

When dealing with situations where it is “expected” to “provide a Sign Off recommendation purely from QA point of view”, I think it is important to differentiate between attesting to the activities that were conducted during testing and providing “sign off” on a release. The former is part of telling the testing story which is an essential skill to develop and gain credibility with your stakeholders, project teams, and peers. The latter, or providing “sign off”, in my opinion is not something the test team should do, particularly due to the source of the information.

Testing provides only part of the information that should be used to make release decisions, but unfortunately, too often (particularly where testing is called “QA”), there is a perception that it is a holistic view of quality. Even information about “what has been tested, do we think we did enough testing, what bugs were found, what bugs were left open” is not enough information to make “a final verdict how confident we are that the build is ready to go live.” There is also a real danger in adding fuel to the bias “fire”, by putting yourself in a position of commenting on the quality of your own work.

As a Test Manager, you might feel it is your job to “provide an objective view of the quality of the build”, which is a perfectly reasonable position to take, especially if you believe that testers are stakeholders in the company and invested in the success of the project. I would assert that being invested and maintaining objectivity are not mutually exclusive, and in fact, to function properly in your role, it is crucial to mind and manage your own bias. I believe that testing should provide an objective view of the quality of the build – where I differ is my view on WHO should be forming (and communicating) that view.

So when the question arises, “Who else should make the call if not us?”, my answer is simply: your stakeholders. Being invested in a project is not only measured by helping to get a release out the door. It is equally (if not more so) about knowing your role and doing the best you can at delivering value to the organization in that role. Putting the “QA” stamp of approval on a release does not leave any room for doubt, and can set an unrealistic expectation that quality has been “assured”. Is there “harm” in providing “sign off” from a testing perspective? Maybe not, but the likelihood of misinterpretation and a false impression of certainty are too great a risk and probably outside the remit of objectivity.


AST CAST 2013 – The good, the bad, and the cheese curds…


My journey to Madison for the Association for Software Testing’s (AST) annual conference (CAST) can be summed up in two words: Paul Holland. Not only had I been working with Paul the previous weeks at Per Scholas teaching the STEP class, but he was also the lead facilitator at CAST and, unbeknownst to me, my travel buddy. I found out that Paul was traveling on the same flight from NYC to Madison (7am on Saturday), but better than that, he swapped his seat to sit next to me so we could share in our sleep-deprived state.

Now, ordinarily, as someone who travels a great deal for work, I rarely speak to anyone on a plane, as it is often the only time I get to read, catch up on videos, or just grab a moment of silence from my busy life. But if you know Paul, he’s a lot like me: once he gets going, he never stops talking! And we were both highly charged from the previous week together, so I feel really sorry for all the people sitting around us, who probably learned more than they ever thought they would about the software testing industry. As this was my first CAST, I wasn’t sure what to expect, but if the trip to Madison was any measure, I was sure it was going to be a corker. Here are my impressions from my time in Madison: the good, the bad, and the cheese curds…enjoy!

[Photos: The “bald eagles” of Testing – Testing talent at the Hilton]

The good…

Let’s start with what was the real star of the conference: the people. I was honestly not ready for how many fantastic testers would be concentrated in one place, and if you like discussing (or arguing) about things, CAST was the place to be. My day (outside of the couple hours of work beforehand) started at around 9am and didn’t finish until after 1am every day. The time was filled with great conversations with extremely talented testers from all over the world and covered too many topics to list. If CAST is about putting the “confer” back into conferences, then they had this down in spades.

The next part of the conference I really enjoyed was the facilitated discussions. Highly unusual in my experience in software testing conferences, but now something I think is vital to learning at them and getting your money’s worth. Most conferences allow Q&A with the speakers if “time permits”, but in my experience, they are usually taken up with people wishing to make statements or are so off topic they are just a distraction. Some of the facilitators did better jobs than others, but when it worked well (which was most of the time) it added to the experience and guided the “open season” section to wring out all the value of the talk.

Another observation I had about CAST which stood out from other conferences I have attended is the number of women not only in attendance, but also participating as speakers. As someone who hires loads of testers, and feels we should be casting a large and diverse net for candidates and opportunities to enter the field, it was particularly encouraging to see so many talented women software testers in one place. Jean-Ann Harrison, Anne-Marie Charrett, Claire Moss, Dee Ann Pizzica, Anna Royzman, Julie Hurst, Alessandra Moreira, Jay Philips, Lou Perold, and Dawn Haynes are all great examples of excellence in testing for everyone in the field.

Speaking of Dawn Haynes, she absolutely killed her keynote on “Introspective Retrospectives: Lessons Learned and Re-Learned”. Honest. Authentic. Full of self-reflection. I was shocked to hear from her that it was the first talk she had given at CAST. It was so easy to connect with her stories, and her style was so accessible, that I found myself starting to analyze decisions I’ve made and relationships during her talk. You can watch the entire talk here.

But the highlight for me was Erik Davis’s talk on “How to Find Good Testers in the Rust Belt”. Forget that it was probably one of the best presentations I’ve seen in a long time on visual and technical merit alone. You might even gloss over the fact that Erik basically gave a master class in hiring testers ANYWHERE, let alone in the relative isolation of the mid-Cleveland market. But there was no denying that his honest and funny communication of key ideas: candidate background risks and issues, casting a wide recruitment net, and LOADS of experiential advice on how to hire (and not hire) testers, was world class in its execution. Pay attention, conference chairs: Erik Davis is keynote worthy and has the chops to headline a conference.

[Photos: “Mr Friendly” – Madison, WI]

The bad…

So now for some disappointments from my five days in Madison, and top of the list would be that, despite my personal experience of great discussions, there weren’t enough of them! Specifically, I mean during the “open season” portion of the talks, which is supposed to be where we get up and ask questions of the speakers. I could only count a handful of times where I felt the speaker was being challenged or a contrarian view was being expressed. Some of the brightest minds in software testing were gathered together in one of the few forums to generate some light (or heat), which means we should be taking full advantage of the opportunity. As I tweeted then, “Hey Testers, if you are not getting engaged with the thought leaders at CAST2013 – you’re doing it wrong!”

All this leads to my next point, which is the large amount of confirmation bias in the discussions I had with speakers and attendees. I realize that there is a high likelihood of this occurring, as we are all self-identified “context-driven” testers, but I was holding out for a bit more controversy. Ranking on the ISTQB (guilty!), ranting about automation, and schools of testing were variations on a lot of the common themes throughout the days’ and nights’ activities. As we grow and mature as a community, I believe we should feel secure enough in our relationships to scrutinize more of the accepted truths of our world view.

…and the cheese curds…

Finally, as someone who grew up in hostile “Sconi” territory (Illinois), I have to say Madison was a great time with good food, good sights and good beer. My overall impression after my first CAST is pure mental exhaustion, with too many ideas to plow through in too short a time. Being surrounded by a veritable “who’s who” of CDT experts was quite an experience, and I look forward to the next one – only with fewer cheese curds.


Software Testing Training at Per Scholas

It has been an incredible honor and privilege for me to work with Per Scholas as they partner with Barclays to create the Software Testing Education Program (STEP). The outpouring of support from the software testing community has been overwhelming and our partners in this program have been generous beyond my expectations. If you or your company would like to participate in STEP, do not hesitate to contact me and I will be providing updates as the program continues. Thanks – KK

Per Scholas Launches Software Testing Education Program

The global software testing market is estimated to grow at a CAGR of 21.15 percent over the period 2012 – 2016. To help meet this increase in demand, Per Scholas has partnered with Barclays to create the Software Testing Education Program (STEP) to prepare students to compete for entry-level software testing roles. Per Scholas is a national nonprofit organization that breaks the cycle of poverty by providing technology education, access, training and job placement services for people in low-income communities. STEP will teach selected Per Scholas graduates industry leading testing skills and techniques, provide access to real life projects, and include field studies for participants to learn alongside working professionals.
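
To put that growth figure in perspective, CAGR compounds annually, so 21.15 percent sustained over 2012–2016 implies the market more than doubles. A quick sketch of the arithmetic (the starting size is an arbitrary illustrative number, not from the announcement):

```python
# Quick arithmetic check (illustrative numbers only): what a compound annual
# growth rate (CAGR) of 21.15% implies over the four years from 2012 to 2016.

def grow_at_cagr(initial, cagr, years):
    """Compound an initial value at a constant annual growth rate."""
    return initial * (1.0 + cagr) ** years

# A market of size 100 (arbitrary units) in 2012 more than doubles by 2016:
print(round(grow_at_cagr(100.0, 0.2115, 4), 1))   # -> 215.4
```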

The STEP curriculum is two weeks of intensive, lab-based, instructor-led training supported by exercises, field studies, industry experts, and exposure to leading-edge software testing tools. Students will be selected for the program after successful completion of the core Per Scholas 15-week curriculum and achievement of CompTIA A+ certification.

Week One starts with Barclays Introduction to Software Testing, which covers quality concepts, techniques and objectives of testing including exercises on Workroom Productions Black Box Testing Machines. Workroom Productions is a London-based consultancy owned by James Lyndsay, specializing in strategies and adaptive approaches for software testing. The week concludes with a three-day course in Rapid Software Testing which was co-authored by James Bach and Michael Bolton and includes topics of critical thinking, oracles, heuristics and test techniques. Pre-reading and coursework for this week includes current white papers and articles from thought leaders in the software testing industry.

Week Two builds on the lessons from Rapid Software Testing with five days of Rapid Testing Intensive. During the course of the week, students will work through several testing projects and challenges using the tools and techniques they would use in a project environment. Key skills developed include test strategies, coverage, defect analysis and reporting. Students will benefit from the experience of Paul Holland, who will be the principal trainer for STEP and brings a wealth of experience from his testing career at Alcatel-Lucent and the many international conferences and workshops he has presented at and facilitated.


All coursework will be supported by Field Studies: onsite visits to working software testing projects via Barclays and partner companies. Students will also get real-world access to professional testers through half-day field studies followed by debriefing at Per Scholas classrooms. Test tool strategies and techniques will be taught throughout the course, using industry-leading software and training from QA Symphony and SmartBear. To make sure students have a working knowledge of a basic tool set, topics covered will include test management and execution, basic test automation, and defect tracking.

In addition to the Career Support provided by Per Scholas, uTest is inviting STEP graduates to attend its uTest “Sandbox” program – a real-world testing exercise – where they can earn money for submitting valuable bugs and receive professional feedback on ways to improve their skills. uTest provides “in-the-wild” software testing to companies of all shapes and sizes through a global community of 80,000 professional testers in 190 countries, and students will be able to gain valuable experience on live projects as they transition into full-time testing roles.

A one-year membership to the Association for Software Testing will also be provided to students where they can gain access to industry conferences, videos, and professional testing resources.

Sponsorship can be provided by companies or individuals wishing to participate in STEP through multiple opportunities. Companies may wish to provide facilities for Field Studies which will require 4 hours for students to visit a live project and observe software testing being conducted on a real world project. Alternatively, organizations or individuals may participate in the STEP Mentoring program which pairs graduates with senior test professionals for career guidance and development.

Survivorship, Best Practices and The Power of Wish Thinking

“The scientist has a lot of experience with ignorance and doubt and uncertainty, and this experience is of very great importance, I think. When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty damn sure of what the result is going to be, he is in some doubt.” – Richard Feynman

I recently had the distinct privilege of watching an expert tester at work. I wouldn’t call this person a “test manager” or “test lead”, even though what they were doing would probably be categorized as an activity associated with both of those roles. No, I would give them the honor of calling them an expert tester – someone using all of their knowledge and skills developed through years of practicing their craft. And they weren’t even testing software; they were testing ideas. Testing assumptions. Testing people. Testing themselves. It was a thing of beauty.

In my almost 20 years of working in this field, I’ve met only a handful of people who could have been dropped into this scenario and come out the other side alive. I didn’t have a scrap of paper to give this person and had not had a single discussion with the project team yet. No preparation. No context. Nothing. Just a room full of COOs, quants, and extremely senior business sponsors all looking for someone who could help them. The elegance and poise they demonstrated quickly punctured the tension in the room, and they were free to set about their work – asking excellent questions and not stopping until they were satisfied.

What struck me the most after the meeting was the feeling of excitement and exhilaration from everyone in the room. Foolishly worrying that I had set everyone up for failure, I had forgotten one of the hallmarks of an expert tester: they are comfortable with complexity and ambiguity. They thrive on it. And they don’t need “best practices” or a two-inch-thick manual on how to get the job done. Their weapon of choice is their mind.

And there lies the difference between the prevalent “best practice” and insurgent “skilled testing” communities. In my experience, the people who espouse “best practices” are the ones who do the least to learn and practice their craft. A skilled tester is comfortable with ambiguity and does not hide behind it or disguise meaning through the use of nonspecific language. People obsessed with “best practices” look towards quantification as a way of improving numerical efficiency, and use words like “smug” to describe those who feel “Not everything that you can measure matters, and not everything that matters can be measured.”

This approach is similar to what Nassim Taleb describes as a “ludic fallacy” – incorrectly applying statistical models where they don’t belong – and I believe it leads the “best practices” crowd into a state of “survivorship”. In that state, despite a complete lack of evidence and scientific analysis, you want to believe that your strategy and practices were the reason you were successful – simply because you didn’t fail. We are consistently “fooled by randomness” into thinking that we actually know something, and can apply standards and practices without context or skill and get a better result. Wish thinking is a powerful force.

This has been the prevalent approach to software testing for the last decade or more, and it fuels the hordes of consultants and testing vendors foaming at the mouth to commoditize, package up, and volume-discount testing “best practices”. But when you start to peel back the layers and get into the details, ambiguity abounds and the models quickly fall apart. The hard work to actually “know something” hasn’t been done. What everyone who understands anything about good testing will tell you is that it is not the practices themselves that make you successful – it is knowledge of their skillful application.

And that knowledge requires hard work and practice to obtain.

“True ignorance is not the absence of knowledge, but the refusal to acquire it.” – Karl Popper

ISTQB Petition Comments

The following are some of the comments I’ve received from the signers of the ISTQB Foundations Exam Review petition. They are obviously biased, but interesting themes are developing. Chris Carter (another training provider) has just been elected president of the ISTQB, so please take a moment to send him an email (chris.carter@istqb.org) to let him know you’d like some answers. Enjoy!

“I took this exam at the start of my career in software testing about 8 years ago. I learnt lots of inaccurate things about software testing as a result which made me make quite a lot of bad decisions. The people backing qualification are not really testers, just people out to make a fast buck.”

“Let the debate end. Time to be transparent.”

“Great initiative, should be public information based on their claimed values!”

“Sunlight is the best disinfectant.”

“I have always seen that there has been no value addition in terms of testing a software better in this certification. Not against the certification but, I am personally against the way it is being done. Thanks!”

“I did CSTE when I started my career in software testing. Fortunately, for me process of getting CSTE was useful as I formed a study group and discussed / shared experience over the weekend (In 2002) – However, I never liked the CBOK or the format of exam – did not bother with renews / certifications again.”

“It is almost impossible to carry the torch of truth through a crowd without singeing somebody’s beard.” – G.C. Lichtenberg

“Valid questions that should be answered.”

“A lack of transparency results in distrust and a deep sense of insecurity.” -Dalai Lama

“I would like to know: “Is their any validity of the certification?” – Is that ISTQB making companies compulsory to have for testers? Does the syllabus gets refreshed?”

“As an ISTQB Foundation Level Certificate graduate, I would very much like an answer to these questions.”

“It is my understanding that different regions set their own exams. Are reliability coefficients applied across these differing regional versions?”

“Any new information about supporting or negating the validity of these exams is a good idea. I hope you get detailed answers.”

“For clarity. Can see no reason why there should have to be a petition in the first place.”

“I have taken the ISTQB Foundation exam and want these answers for myself as well as others.”

“I didn’t take any ISTQB certification exam and I guess these were the basic questions which were bugging me and stopping from doing that. So yes indeed I would like to know the answers of these questions. Thanks for initiating it.”

“I’ve been challenging the ANZTB (Australia & New Zealand Testing Board) for over 10 years. Something needs to change.”

“I would like to know the answers to these queries too…. am too very curious to know who is building palaces from the money that we spend on certifications.”

“I would love to know these answers. I have never been satisfied interviewing people flashing ISTQB certificates. I find their knowledge and understanding of the subject to be very shallow.”

“I took the old ISEB exam, but the question still stands with ISTQB. How valid are our qualifications? It was just a foot in the door for many roles for companies that would not even interview you unless you had it.”

“Really, guys, what do you measure except the ability to memorize “correct” answers?”

“ISTQB has to become an open body sharing maximum information. Else it will become irrelevant.”

“Solid answer for the questions are needed. as a tester we are not money itself but our time, energy and hard work there…”

“I really dont understand how certifications make good testers.”

“I believe that certifications per se do not help improve testing skills. However, it is sad to note that they form an important criteria in landing a job. This needs to change.”

“Certificate of this sort is totally unnecessary for real tester who can prove the skill on demand. It is really possible for a non tester who has not done any testing in reality to pass the exam.”

“I wont encourage the ISTQB certification…..As I have seen many people who certified ISTBQ, doesnt know basic testing…I am not sure what ISTQB syllabus is all about…….I dont want to spend too much time in wasting time in writing on ISTQB…Since I have lot more to learn others things which context driven community is telling to become expert tester.”

“I agree. ISTQB FL exam is never helped me to be a better tester. Its just a piece of paper every organization wants before we can join them.”

“The Questions are very important and the response would make it more credible .”

“You know there are big problems with this. Try coming clean.”

“Given that the ISTQB Foundation level exam is now a minimum prerequisite for most test positions and contracts, the answers to this petition are essential to transparently demonstrate whether or not the certification is valid in the ‘real’ world.”

Strange Bedfellows

All over the place, from the popular culture to the propaganda system, there is constant pressure to make people feel that they are helpless, that the only role they can have is to ratify decisions and to consume. – Noam Chomsky

Earlier this year, I wrote about the “bizarre public spectacle made up of innuendo, accusations and irony” that the context driven testing website seems to have spiraled into lately. So when I saw the latest missive launched at my questions for the ISTQB, it wasn’t surprising that it was filled with the usual snark (Rabid Software Testing) and name calling (twits). If you can get past the juvenile antics, there are a couple of points that, although mostly irrelevant to the discussion, call for a response.

Firstly, you have the RIGHT (and I believe a responsibility) to ask questions of any company or any person who is trying to sell you something – whether they will answer is their business. I find it repulsive to be asked to withdraw a petition for answers to questions that even the company that conducts the evaluations of the exams said were valid, correct, and should be ASKED and ANSWERED. I don’t know how anyone calling themselves a tester, or anyone who cares about the hundreds of thousands of people employed in the software testing industry, could take the position that those questions shouldn’t be asked.

Secondly, as rightly pointed out, companies do have a right to study the quality of their products in private. No one is disputing that fact. You could easily dismiss this whole affair if the private testing that was done had been for products that were still GOING to be released. But seeing as they were mentioned, an interesting point is raised here about the alleged material that was reviewed in 2010. To quote:

“The gist of the materials was that ISTQB had commissioned a psychometric study of one or more of their tests, that a “test reliability coefficient” was a bit low, and that ISTQB was planning to use this information to improve the quality of their exams.”

My understanding was that there were concerns the exam did not reliably prove students were competent in the syllabus. As well, a very senior person in the ISTQB/ASTQB was allegedly signalling issues with certifications ALREADY ISSUED! How many certifications were issued with potentially sub-standard exams? The figure quoted, I believe, is…wait for it…

Over 100,000 certifications!

Of course, as again was pointed out, “the materials were not authenticated and there might not be any truth in them at all”, and all this could be easily cleared up with some transparency from the ISTQB / ASTQB. But the evasive answers and deflection coming from the ISTQB / ASTQB representative warrants further investigation. I believe that you have a right to know if you, your company, or your trainer spent money on an invalid exam and what the ISTQB / ASTQB has done to correct the situation. Let me be perfectly clear – this issue is not about trade secrets, cheating, or statistical analysis of their claims. With enough education and communication, people should be able to decide the value of those certifications on their own.
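The “reliability coefficient” at the center of this dispute has a precise meaning in psychometrics. For exams scored from pass/fail items, one standard estimate is Kuder-Richardson 20 (KR-20), which compares the variance of candidates’ total scores to the noise contributed by the individual items. Here is a minimal sketch in Python – the function and sample data are purely illustrative assumptions, and have nothing to do with the ISTQB’s actual analysis or whatever method Kryterion uses:

```python
from statistics import pvariance

def kr20(responses):
    """Kuder-Richardson 20 reliability for dichotomous (0/1) exam items.

    `responses` is a list of answer sheets, one per candidate, each a
    list of 0/1 item scores. Values near 1.0 mean the items rank
    candidates consistently; low values mean total scores are mostly
    noise, so retaking the exam could easily flip a pass to a fail.
    """
    k = len(responses[0])          # number of items on the exam
    n = len(responses)             # number of candidates
    totals = [sum(sheet) for sheet in responses]
    total_var = pvariance(totals)  # variance of candidates' total scores
    # Sum of per-item variances: p * (1 - p) for each binary item.
    pq = 0.0
    for i in range(k):
        p = sum(sheet[i] for sheet in responses) / n  # proportion correct
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / total_var)
```

A coefficient near 1.0 means the exam measures something consistently; a coefficient that is “a bit low” means a candidate’s result depends substantially on chance – which is exactly why the question matters for certifications already issued.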

And finally, can we please start raising our expectations of the leadership in the software testing community? The ISTQB and ASTQB (and a whole host of others) have for far too long acted like, to paraphrase Ralph Nader, a “sacred cow feeding the public a steady line of sacred bull.” Here are some good ideas: Stop preying on people trying to get a job and start helping people develop skills for a career. Stop talking about your values and start living them. Yes, I’m pissed off. You should be too. And yes, I’m unreasonable. You should be too.

“Reasonable people adapt themselves to the world. Unreasonable people attempt to adapt the world to themselves. All progress, therefore, depends on unreasonable people.” – George Bernard Shaw

It can be especially disheartening when you see people who were once lions in our industry and leaders in tester education, twisted into caricatures of themselves by their new bedfellows. But as testers, we keep asking questions until we are satisfied, as someone for whom I have a great deal of respect once said:

“Software testers are professional skeptics. To require them to adopt a compliance mentality, in which they set aside issues of ambiguity, oversimplification, unstated assumptions or controversial conclusions in order to provide the answer expected by an examiner is to demand conduct so far removed from what testers should do as to be invalid on its face.”

That person was Cem Kaner.

Certifiable – Fighting the fights worth fighting…

“A body of men holding themselves accountable to nobody ought not to be trusted by anybody.” – Thomas Paine

If you have followed me lately on Twitter, you may have noticed a slight, well, let’s say, fervor in pursuing answers to the questions I posed to the ISTQB. Since publishing that letter a little over a week ago, an important conversation in the software testing community has been reignited over Twitter, LinkedIn, and multiple blogs, and has loaded up my inbox. And that conversation is NOT about testing certifications or the rackets employed to “regulate”, train, and issue them. Let me be clear, the certification debate is very important, but it is a symptom of a disease in our business: the disease of not owning our value proposition.

I agree completely that testers need to improve their ability to articulate their value to the business, as we are still losing this battle. There are plenty of voices outside our industry telling us software testing is too expensive, not aligned to the business, a commodity, or that if we could just automate all these scripts we could fire all the testers! And to make matters worse, one of the loudest voices INSIDE our industry – the self-appointed “qualification boards” selling 40-question, multiple-choice exams with a passing grade of 65% – says:

“Your company’s return on investment (ROI) for ISTQB Software Tester Certification is outstanding.”

“You will have the peace of mind knowing that your team has the knowledge and skills for practical day-to-day testing.”

“Your testers will have clear testing standards and a clear career path.”

“ISTQB certification ensures your testers have what it takes to get the job done.”

Are you kidding me! The reason I am demanding answers to my questions (as should you) is because they go right to the heart of their argument about testing skills and value. The ISTQB/ASTQB, at this point, refuses to let us know if there have ever been issues with the reliability coefficient of their exams and how (and how often) they are evaluated by third parties. If the results are deficient, it means the exam does NOT reliably assess whether someone has mastered the learning objectives and material in the syllabus. Or plainly, it doesn’t do what it says on the tin. When asked about the reviews and whether there have ever been any issues, Rex Black answered, “NDA prevents a detailed response. Is ‘nothing is perfect’ not clear enough?”

Sorry, Rex, that’s not even remotely good enough.

And that’s why getting answers out of the ISTQB/ASTQB is important, especially for:

…the ISTQB/ASTQB: because if a self-appointed board of a non-profit that’s filled with people who offer training in the “certifications” they issue wants to be taken seriously, then they need transparency and accountability. You are not helping the industry or software testers by proclaiming your exam “is a certification of competencies.” It’s intellectually dishonest at best, and even a passing investigation into how your organization is regarded outside of the ISTQB/ASTQB “bubble” should let you know you can do better.

…the business we support: because the information that testing provides is important to making good decisions about technology, and that needs skills rote memorization won’t provide. We’ll make you a deal: we’ll get better about articulating value and risk to your business, and you stop treating the test team as a dumping ground and something that should deliver “ROI”. Developing software is not analogous to manufacturing, so please stop trying to turn people into widgets you can shop around for the lowest price.

…the software testing industry: because we know we can do better and owe it to ourselves to stop accepting mediocrity. Unchallenged ideas abound in an industry that is supposedly filled with critical thinkers. The lack of dialog, and what passes for new ideas in our field, just fuels the perception that we are disconnected from the business we support. It also allows every other discipline, or anyone with letters after their name, to run roughshod over us and tell us what we should be doing and, even worse – how to test.

…and most importantly, software testers: because you are undermining your own credibility by thinking that certification is anything more than a keyword to search for on a job board. Additionally, by accepting such an incredibly low standard for what is considered “foundation” level knowledge of software testing, you have effectively cheapened your craft. I appreciate that many organizations and businesses are using certifications as a screening device, but we can collectively push back and reject shallow attempts at capitalizing on ignorance. It is our responsibility to everyone who has worked, or will work, as a software tester.

I don’t know about you, but I’m sick and tired of everyone outside of the software testing community defining our industry. I’m sick and tired of having our craft boxed up and “commoditized” by people who don’t understand what we do and only look at it as what Jerry Weinberg would call “the appearance of work (long hours, piles of paper, …)”. And I’m absolutely fed up with self-appointed “experts” telling us they care about software testers while putting a ribbon and bow on our jobs for people to devalue our craft.

Had enough? Well, then let’s do something! Sign the petition or write a letter to the ASTQB and ISTQB to get some answers, and let’s take back the value proposition of software testing.

More to come!

“There is no reason to accept the doctrines crafted to sustain power and privilege, or to believe that we are constrained by mysterious and unknown social laws. These are simply decisions made within institutions that are subject to human will and that must face the test of legitimacy. And if they do not meet the test, they can be replaced by other institutions that are more free and more just, as has happened often in the past.”Noam Chomsky

An open letter to the ISTQB

Date: April 25, 2013

To: ISTQB BOD

Cc: ISTQB Governance Working Group

Subject: Open Letter to the ISTQB

To whom it may concern;

Recently a discussion transpired over Twitter regarding the validity and governance of the Foundation level exam you offer through your training partners. Rex Black, a current board member and past president of the ISTQB, was involved in the exchanges and made the following comments in response to my queries about whether there have ever been problems with the certification’s validity, specifically the reliability coefficient:

@RBCS: ASTQB works with professional exam consultants (psychometricians) to ensure statistical validity

@RBCS: They are reviewed continuously by ASTQB. Nothing is perfect, but exams are constantly perfected

@RBCS: NDA prevents a detailed response. Is “nothing is perfect” not clear enough?

@RBCS: Non-disclosure agreements prevent detailed answers; I have answered as directly as I can.

Rex did not answer my questions, purportedly due to a non-disclosure agreement. Per the ISTQB website, you are a “non-profit association” dedicated “to continually improve and advance the software testing profession”. This leads me to believe that your stated values of “openness” and “integrity” would mean answers to those questions are vital to maintaining your charter.

So I appeal to this board for answers to what seem to be straightforward questions:

1) Have there ever been issues with the ISTQB Foundation exam’s reliability coefficient, as reviewed by your exam consultants, Kryterion?

2) Have the reliability coefficients consistently shown, since the inception of the ISTQB’s certification program, that results on the certification exams accurately measure the testers’ knowledge of the syllabi?

3) Have there ever been any other issues with the validity of the exams?

4) How often do those external reviews take place?

5) Are the results of Kryterion’s (or a third party’s) independent evaluations publicly available?

Rex suggested that acknowledging potential obstacles for testers due to issues with the Foundation certification was akin to being “prejudice (sic) against the 300,000 people who have ISTQB certs.” I would assert that not answering basic questions about threats to the certification’s validity does those 300,000 people a greater disservice.

Thank you for your help in getting these questions answered.

Best regards.

Keith Klain

Bursting CDT Bubbles

“The single biggest problem in communication is the illusion that it has taken place.” – George Bernard Shaw

Conference season has started again, and I’ve made some rounds giving talks at QAI QUEST in Chicago and STAR Canada in Toronto. I had a great time at both talking about problems with bias towards software testing (both positive and negative), and what we and the industry do to support them. But despite all the great conversations I had with colleagues and people I met for the first time, it became clear to me that the context driven community needs to do a better job getting the word out.

More often than not, my questioning of ideas that the CDT community takes for granted as open for debate (test case counting, DRE, detailed test scripts, etc.) was causing gasps of horror and stares of disbelief! The big problem was that they are all accepted as settled law – almost beyond the realm of questioning. This reaction seemed to confirm a point in my talk that the majority of testers on the planet are not working on sexy agile projects using cool techniques and tools. No, the majority of testers work on mediocre projects, with unenlightened teams, run by “operational test managers” who don’t use new technology and probably made them get “certified”.

The CDT community (and frankly, all the leaders of the testing world) owe it to those people to burst the bubble we tend towards and get religion! Form some connections! Get out there and join the fray! Now, I’ll give exemptions to the war horses of the CDT movement, especially James Bach, Michael Bolton, pretty much the entire AST BOD (and some select members) and the Let’s Test folks. There are some other notables I will undoubtedly miss off that list (anyone here), but for the rest of us? Really?

I’ve recently taken some shrapnel for participating in a public debate on Twitter about testing metrics/certifications and threats to their validity. The point was made (by multiple people) that maybe we should just stop fighting and agree to disagree. Nope. No way. I want to be “in the arena”, and I may get kicked around in the process, but for too long bad ideas about testing have gone unchallenged and it’s time to reinforce the front. I believe that the context driven community has the best ideas about how to manage and execute software testing, and our community is truly an open forum for debate and the exchange of ideas.

Following people on Twitter is a great place to start. Keeping up with the STC or attending CAST should be on your short list of ways to support the “skilled testing revolution”. But we should be going further. There are loads of conferences where hard questions need to be asked and ideas challenged. As the AST likes to say, put the “confer” back into the conferences! Start a blog – or maybe comment on a colleague’s. Any way you do it, start hammering away at these unchallenged ideas! No one is going to give it to us; we’re going to have to take it. I understand it’s difficult, but it’s so worth it, and as I’ve said multiple times this year already: changing culture is hard – but we’re gonna do it anyway!

Now what are you going to do about it?

“The longer I live, the more I am certain that the great difference between the great and the insignificant, is energy — invincible determination — a purpose once fixed, and then death or victory.” – Sir Thomas Buxton

Commoditization, Transformation, and Testing Skills – Questions and Answers with Matt Heusser

Recently I had the distinct honor of being interviewed for an article on CIO.com by Matt Heusser of Excelon Development. Multiple conversations and parts of the interview were used to inform the article, so, for what it’s worth, the complete Q&A follows. Enjoy!

MH: You’ve said in the past that the ‘old commodity’ model of testing is ‘breaking down.’ What do you mean by that, and what do you think is replacing it?

KK: A large piece of the software testing market is delivered by “test factories” that are premised on an analogy comparing testing to manufacturing, hence the desire to “commoditize” the role. Rapid deployment delivery models, risk based testing, and the increased adoption of agile methodologies strike directly at the concept of testing as a commodity, as you have to be highly skilled to operate in those environments.

As well, over the last fifteen years or so, software testing has frequently been prioritized for extensive outsourcing and offshoring, and the financial models used to justify that decision are leveling out due to rising wages, cost-of-living increases, and currency fluctuations. Most of the improvement models used to rationalize the commoditized testing approach use strictly quantitative metrics to assess quality or measure improvement; an approach which breaks down rather quickly beyond any first-order metrics.

There is an increased focus on business value and testing skills, which means you have to bring more to the table than just the ability to do it cheaper.

MH: You’ve started a test transformation process at Barclays. What does that mean, exactly? 

KK: Barclays is very serious about software testing. The amount of management support we get in the Global Test Center (GTC) is unprecedented in my nearly 20 years of working in the software testing industry. Because of that, there was a wealth of talent here to build on, so the transformation process has been more evolutionary than revolutionary in nature.

Our main concerns are ensuring that our test approach is aligned to the business we support, our tools and process are lightweight and can handle multiple project types, and that we are hiring the best testers in the industry.

Part of that transformation has been developing what we call a “culture of professional testing” which drives how we recruit and develop our testers. Our GTC University focuses our training, coaching, and mentoring programs on testing skills like heuristic test strategies, visual test models, exploratory testing, and qualitative reporting.

MH: What were you doing before Barclays? What got you excited about testing – and changing the test process in specific?

KK: I’ve worked in various quality management roles in financial services firms and software testing consulting organizations, living in the US, Europe and Asia. I’ve always felt testers have one of the most important roles to play in technology as they support the business, because we provide information that can be used directly to manage risk.

A problem with the commoditized approach to software testing is that it inherently devalues people in a creative, intellectual process and fundamentally doesn’t deliver on its responsibility to articulate risk in business terms. I think the testing industry is fundamentally changing for the better and there hasn’t been a better time to be a software tester.

MH: What are the biggest barriers, cultural, social or technical, to this kind of change?

KK: Education is one of the biggest barriers due to stereotypes and ingrained bias developed from decades of bad metrics programs, flawed maturity models, and low value testing. Testers have to take responsibility for their own contribution to the problem as well, as we can reinforce a lot of those perceptions by how we conduct ourselves and inherently limit our value.

I believe that if you want to drive change in an organization and get congruent action from culturally and regionally diverse teams, you have to focus on what you are contributing to the problem first, articulate your values and principles to give people a lens to view their work, then develop strategies that are aligned to the business you support.

MH: Have the skill needs of your testers changed over time? Is recruiting skilled testers hard? If you’ve been building them, what is the change like? What do you do to build them?

KK: I think the focus on specific technologies constantly changes over time as things go in and out of vogue, but I do believe there is an increase in demand for “skilled testers”. Finding good people is always the greatest challenge, and we go through a lot of candidates before we find the right ones for the GTC, especially as we are looking for people with a different skill set.

We built the GTC University to have a great training, coaching, and mentoring program to get people up to speed on the business, technical and testing challenges of our environment. One of the best benefits of the GTC has been an explosion of community support for the testers. There are thriving software testing oriented focus groups, brown bag sessions, social committees, and testing competitions, including the year-long GTC Super Tester Challenge, whose finale last year was judged by James Bach!

5 Questions with Keith Klain

Many thanks to Phil Kirkham at Expected Results for letting me take part in his “5 Questions” series today…enjoy!

1) You seem to have become very active online recently with your Twitter account and blog – why did you decide to do this? What have you got out of being online and what are your impressions of the online tester community ?

About 10 years or so ago, I used to be more involved in the “public” testing industry as a consultant, attending and speaking at conferences, etc. But around 2001-02, I became very disillusioned with the whole testing industry. Maturity models and certifications were really coming into their own then, and though I couldn’t articulate it at the time, it all felt shallow and distracting – almost anti-intellectual.

As well, the testing conference circuit is unbelievably boring with the same people saying the same things over and over and over again, so I receded from public life, stopped attending conferences and just focused on building my own teams.

Around the same time I became a closet disciple of James Bach and the context driven community after reading Lessons Learned in Software Testing. I used my time as the Head of QA for Equities IT at UBS Investment Bank to try new things like visual test strategies, etc. and made LOADS of mistakes there. 

After taking the job at Barclays, I realized very quickly that I was going to have tremendous senior management support and a real shot at building a testing team the way I’ve always wanted to run one. We worked very closely with James Bach and Michael Bolton in defining our training regime to focus on testing skills and applying CDT principles in a big way.

My boss at the time actively encouraged me to get out there and talk about what we were doing as a recruitment tool, as we were struggling to find people with an open mind. All that, plus some mild prodding from Michael and James to talk publicly about the success we were having, got me re-engaged with the testing community.

I think it’s due to the questioning nature of our business and the people it attracts, but I love the online testers and the testing community. I’m a big advocate for testers in general, and think it’s important to have counter examples as so much in the testing industry is harmful to testers. Getting the GTC story out there as an example of how things can change for the better (although it’s not perfect here) has become part of my advocacy.

2) How did you start off your testing career and how has your thinking on testing changed since then? Were you ever a testing zombie?

I think the start of my testing “career” came when I joined a company called Spherion, which had a Software Quality Management practice specializing in testing. They had written a methodology, training materials, and a support network you could tap into for advice and mentoring. Their approach was basically the V-model: very rigid, with lots of documentation filled with wonderful stuff like “phase containment” and test case counting.

Working my way up through the ranks from test analyst, to automation engineer, to test manager, to practice director, I had to learn all that stuff well enough to go into the business side of running a testing practice. That’s very helpful now, as I know the arguments for factory-style commoditized testing inside and out – I’ve used them all!

I would never call myself a zombie, as that implies a mindlessness that I’ve never suffered from, but I definitely had a period of “un-enlightenment” about the mission of testing.

The biggest shift in my approach to testing and managing testers is the recognition that we are in the knowledge business, not manufacturing. I think that is one of the most common (and harmful) mistakes that testers and people in IT make when it comes to testing. Managing people who use their brains to creatively solve problems takes a complete paradigm shift in how you communicate with and motivate them.

The mistakes I’ve made in the past include not giving people enough autonomy to get their work done and not removing fear from the organizational structure. Fear is like an odorless, colorless gas that seeps under the door, and before you know it, everyone is asleep.

In all honesty, I’ve found that being more transparent with people on strategy, operations, finances, etc. has actually made my attrition rates go down! That runs directly counter to the prevailing HR policy of telling people what YOU think they need to know in order to manage them better. My policy is to tell them everything and let them manage their own expectations.

3) What was your biggest challenge in making the changes at Barclays? Did you get any pushback from the testers that were there?

Education is one of the biggest challenges, due to stereotypes and ingrained bias developed from decades of bad metrics programs, flawed maturity models, and low-value testing. Testers have to take responsibility for their own contribution to the problem as well, as we can reinforce a lot of those perceptions by how we conduct ourselves and inherently limit our value.

I believe that if you want to drive change in an organization and get congruent action from culturally and regionally diverse teams, you have to focus on what you are contributing to the problem first, articulate your values and principles to give people a lens to view their work, then develop strategies that are aligned to the business you support.

Funnily enough, when we cancelled all the metrics and maturity programs, some of the loudest protests were from the testers! And that’s because they didn’t know how to measure themselves outside of purely quantitative means; they couldn’t articulate their business value. Most of the folks left fairly early on but some stuck with it and are contributing in a really meaningful way now. 

My team has been absolutely fantastic in trusting me in making these cultural shifts, and it’s been the best job I’ll probably ever have. I am extremely fortunate to have the team around me that I have now and any success or recognition is down to their hard work and dedication.

4) What book(s) are you currently reading – and why are you reading them?

Right now I am reading two books: “The Invisible Gorilla” by C. Chabris and D. Simons, and “Thinking, Fast and Slow” by D. Kahneman. “The Invisible Gorilla” is about inattentional blindness and how it impacts what we believe we know about memory and observation – both great topics for testers.

Michael Bolton is an incredible reference for books that can inform the way you test, and he’s probably given me a dozen great suggestions that I’m working my way through. I’ve also put a bunch of stuff I consider “required reading” on my website.

5) You say in this blog post that the greatest challenge is finding good people – why do you think it’s hard to find good testers? What makes a good tester to you – and what advice would you give to a new tester who wants to become a great one?

It’s hard to find good people for several reasons, but primarily because we are looking for people who don’t come at a task with a prescribed outcome in mind. The CDT community is relatively small and finding its members at all is hard; add the requirement that they be in the right country at the right time, and it’s nearly impossible!

A good tester to me is humble, curious, honest, and knows how to construct an argument in the classical sense. My advice to anyone wanting to be a great tester is to question everything, read A LOT, and get involved in the CDT community. Even if you don’t subscribe to everything the CDT community believes in, it is a great place to debate, sharpen your arguments, and learn. It can be a bit intimidating at first, given its reputation for rigorous debate, but I have never seen a group of people more genuinely concerned for the betterment of testers.

Leadership in Testing – What Really Matters

Special thanks to the awesome folks at The Testing Planet for publishing the following story in their latest issue. Get yourself together and subscribe today!

Leadership in Testing – What Really Matters

I’ve hired lots of testers. I’ve hired some great ones, and some, well, not so great ones. Some exceeded all my expectations, and some I thought were bound for “greatness” fell short of the mark. Consistently, the one quality I see distinguishing the ones who reach their full potential from the ones who don’t is leadership. I prefer to think of leaders using the definitional term “guide” when describing them. They play different roles in different contexts, but they are always guiding the organisation, whether it be a team or an individual, towards the goal.

Now, it is a very common mistake to conflate leadership with management. A leader can be a manager as well, but as we all know, being a manager does not mean you are a leader. We’ve all struggled under managers who didn’t have a leadership bone in their body, so to avoid inflicting that terror on my teams, the following are characteristics I am looking for in either hiring or promoting leaders:

1) Honesty – I speak a lot about honesty because it’s so important to leading with integrity. It resonates into every aspect of how others see you, and how you see yourself. People need to know that their leaders are telling them the truth before they will trust them to act as co-stewards of their careers. And that trust is built with a healthy dose of self-reflection. Admitting you made mistakes, sharing information, apologizing when you’re wrong – good leaders have no fear of the truth. Honesty is the building block on which you’ll build great teams, and it has to start with the leaders.

2) Communication – Not all great communicators are leaders, but all great leaders are communicators. Setting the context for the mission is essential to keeping people motivated and aligned with the business, and that means you have to be able to relate goals to tasks. People who tell stories that find common threads in our shared experiences are typically the ones who get the most from their teams. In order to propagate an idea, it must be relatable to something we value ourselves.

3) Humility – History is full of examples of leaders with tremendous egos. To even want to be in a leadership position, you must have a healthy sense of self-worth. But I think the best leaders drive organisational change not through programmatic coercion, but through what Dwight D. Eisenhower called “the art of getting someone else to do something you want done because he wants to do it.” That kind of leadership demands humility. A great tell on whether someone has a humble spirit is whether they use “I” and “we” interchangeably when they speak about earlier teams, or give a pat answer when you ask them about their last mistake. I want my teams to take ALL the credit because they are the ones doing all the work!

4) Passion – People look to their leaders to keep their foot upon the accelerator, setting the pace for the organisation or team. Passion is what inspires people, and inspired people can do amazing things. I am extremely fortunate that I love my job. But what exactly is my job? My job is helping organisations and people improve themselves through great software testing. I tell my teams that we are not only responsible for improving testing on our projects, but also in the industry. Nothing less! If you’re not passionate about what you are doing, trust me, no one is going to follow you – regardless of your title.

In my experience the best leaders are honest with themselves and others, can speak in stories that tie things together, approach life with humility and their passion inspires those around them. I’ve failed more than I’ve succeeded in finding leaders, but when I have been successful, they’ve met those marks. Best of luck and happy hunting!

The Confidence Game – What is the Mission of Testing?

Doubt is not a pleasant condition, but certainty is absurd. – Voltaire

Maybe it’s because I extend my tendency towards skepticism to myself, but I get really uncomfortable telling anyone that something is certain. That is especially true when it comes to software and interpreting the results of testing. There are just too many variables that impact the control and validity of the output, and that’s just limited to what we can know – let alone the things we don’t know! The great “unknown unknowns” loom in the shadows, waiting to rear their heads, question our approach, and shake our confidence.

By definition, confidence is the quality or state of being certain. It’s knowing that something can be proved true, and it is a by-product of actions taken in the process of acquiring that proof. Christopher Chabris and Daniel Simons conducted a famous experiment studying inattentional blindness. In their book The Invisible Gorilla, they posit that we should be very unsure of what we are certain we know, and that our confidence or intuition can often mislead us. The idea of questioning the origins of our confidence is also echoed in Blink: The Power of Thinking Without Thinking by Malcolm Gladwell, and Thinking, Fast and Slow by Daniel Kahneman.

So what does that have to do with the mission of testing? It is extremely important that testers understand and adhere to their mission, as to replace it (either willfully or unintentionally) would be directly fogging the headlights on your project. So should the mission of testing be to give confidence? I don’t believe it should. I would agree with my friend Michael Bolton, that making “confidence” your mission in testing is akin to goal displacement, or substituting objectives with those that suit your means as opposed to the end.

I believe the mission of testing is gaining information, but here are some better formulations for your reference:

  • Testing is a process of technical investigation, intended to reveal quality-related information about a product (Cem Kaner)
  • Testing is questioning the product in order to evaluate it (James Bach)
  • Gathering information with the intention of informing a decision (Gerald Weinberg)

So what is the problem with making confidence the mission of testing? Shouldn’t we want to have confidence in our products? Isn’t it a good thing to have confidence in our testing? Of course we want confidence in our products and testing, but if you make gaining that confidence your mission, in my opinion, you are intentionally adding confusion to the decision-making process. Aside from trying to hit the bulls-eye on the wrong target, testing for confidence is a slippery slope to ill-informed decisions, misuse of metrics, and a ready candidate for confirmation bias.

Testers should be constantly vigilant against all forms of bias, but especially confirmation bias. Making confidence your mission guarantees you will be seeking information to give your stakeholders certainty – instead of information that should give them pause for thought. Every tester has at times been subject to the “Curse of Cassandra”, or giving a valid warning that is not heeded. But nothing will put you permanently in that place quicker than having things go wrong after you’ve not only provided information to stakeholders – but have made a value judgement on their behalf!

Some may view me as overly skeptical. That’s fine. But I would rather err on the side of caution (and humility) when seeking information for my stakeholders. It’s up to them to decide what to do with what I give them – objectively finding it is hard enough without attempting to gain credibility through inappropriate means. So when someone tells me they’ve been asked to provide confidence through testing, my simple advice is this: stick to the mission.

Improving the State of your Testing Team: Part Four – Attracting and Retaining Talent

The greatest challenge in building a team is finding good people. But as difficult as finding those people can be, keeping them motivated and in the building after you hire them is where the real work begins. Almost the entirety of our improvement program in the Global Test Center (GTC) is based on talent management. Metrics? Nope. Maturity models? Nope. Best practices? Nope.

The only way we are going to improve the state of testing here (or anywhere, IMHO) is by focusing on hiring, training, and motivating the best testers in the industry. The approach we’ve taken has three parts:

  1. Creating an environment of honesty and transparency
  2. Building a development structure focused on training, coaching and mentoring
  3. Transferring control and quality of work to the teams

A Case for Transparent Management

The greatest advice I ever got on hiring managers was from one of the best people I’ve ever had the pleasure to work for, and her motto was: “People don’t quit their company; they quit their manager.” There is a lot of truth in that statement, and it echoes a Forbes article published last year that boiled all the “top 10” reasons why talented people leave companies down into one:

“Top talent leave an organization when they’re badly managed and the organization is confusing and uninspiring.”

Want to run your best people off quickly and efficiently? Make them work for uninspiring leadership that treats people like children by not sharing information. In “The Fifth Discipline: The Art and Practice of the Learning Organization“, Peter Senge talks about building a learning organization through honesty and actively sharing information. I tell my teams everything I possibly (and legally) can so they aren’t confused about my thought process and as a result, buy into my decisions.

Don’t judge me on my access to information – measure me on the decisions I make with that information. That’s where my experience and skill as a manager come into play and differentiate me from my peers. Withholding information from your teams is a very standard practice for frightened or immature managers, and it is incredibly damaging to the culture of your team. Transparency is probably the most important, and completely controllable, aspect of your management style that will impact unwanted attrition.

Failure = Success

I may have a slight bias from watching him play basketball for the Chicago Bulls, but Michael Jordan’s perspective on failure encapsulates my approach to developing people: “I’ve failed over, and over, and over again in my life – and that is why I succeed.” Letting people fail means you are setting them up for success. Unfortunately, most of the training programs I’ve seen run on the premise that you can transfer knowledge to people through strictly explicit means. The problem is that most of what we need to learn (and specifically to try and fail) only comes from tacit knowledge, which by definition isn’t easily learned through reading and writing.*

We have structured our GTC University into three distinct areas: training, coaching, and mentoring. Training is all the stuff we need people to read and understand to do the basics of their jobs. That’s focused on functional knowledge, white papers and books on testing, videos, etc., all stuff they can digest in their own way and time. Coaching takes people through specific techniques and approaches and then lets them practice while we watch with an immediate feedback loop. Test Management Mentoring is our program of pairing “up and coming” test leads and managers with senior test managers they don’t report to and focuses on large, strategic testing problems.

GTC University

Without all three, and especially coaching and mentoring, you run the risk of a shallow development program that only delivers the lowest-value knowledge acquisition. People need to try and fail in a safe environment so they can have the confidence to succeed on real projects. I think this quote my mom left in my notebook when she dropped me off for my first day at college sums it up nicely:

“And if it be said, that continual success is a proof that a man wisely knows his powers, – it is only to be added, that, in that case, he knows them to be small.” – H. Melville

A Players Hire A Players – B Players Hire C Players

Weak managers will actively discourage autonomy to maintain control. Talented people in creative, intellectual activities (like testing software) HAVE to have a large amount of autonomy to be successful. In his book “Drive: The Surprising Truth About What Motivates Us”, Daniel Pink suggests a terrific technique for gauging how much autonomy the people in your team really have. Autonomy audits put your control issues somewhere on a scale between a “North Korean prison and Woodstock”. If you are not giving people control over their own work, you can only expect them to hire teams whose work they can control.

Once you’ve let go of control and really trust your teams, let them take responsibility for the quality of their culture. I encourage my teams to make it difficult to join them as I want them to set their OWN barriers for entry and not let just anyone into our club. And now that they truly own their work and environment, they hold each other to standards I could never enforce as a manager. Great people aren’t easy to find or grow, but I believe if you work in a transparent way, deliver all three elements of development, and give people ownership of their work, your chances of finding and retaining your A players are greatly increased.

*For more on that topic I HIGHLY recommend Michael Bolton’s writing, and the excellent book by Harry Collins that he introduced me to.

What Does it Take to Change the Software Testing Industry? Courage!

My fellow AST board member and resident “software anthropologist & software tester”, Pete Walen, recently posted about what he felt it took to make a difference or change the world. He rather pointedly asked the question, “When was the last time you were proud of the work you did?” Excellent question!

One of the values I talk about with my teams is integrity, and an example of how I see that being demonstrated in your testing should be a refusal to accept mediocrity. After some modest questioning from his readers about challenging processes and the fear of losing your job, Pete came back with this (emphasis mine):

“If that idea does not give you a certain period of pause, you might be independently wealthy, have no responsibilities beyond yourself or, well, you just don’t care about the future. There may be something else at work. There may be some ideas that I have not considered. Or, maybe you just don’t care.”

Even better! Changing the way you, your project, or your business conduct and value testing is hard work! And it takes more than brains – it takes guts. Software testing is loaded with unchallenged ideas from the last 20 or so years, and changing its perception works against ingrained bias and prejudice. Not only that, the industry is lousy with vendors and consultants who earn a tidy income from high-volume, low-margin (and low-value) test factories. And trust me, successfully changing that environment is not about bringing your own lunch – it’s about eating someone else’s!

So here’s why I think Pete’s post is so important. Jump eight thousand miles or so from snowy Grand Rapids, Michigan, to sunny Pune, India. Now, I have travelled and worked in India for over 10 years, but something struck me about the conversations on my last trip – the tone. I had the distinct pleasure of participating in an AST round table discussion on testing skills with Pradeep Soundararajan, CEO of Moolya; Justin Hunter, CEO of Hexawise; Smita Mishra, CEO of QA Zone; and our hosts, Cognizant.


Talking testing with Hexawise, Cognizant, Moolya and QAZone

The discussion ranged from which testing skills are hot in the market to how best to use tools in a rapidly changing environment. But for me, the highlight of the night came when one of the over 200 people attending asked what it takes to improve testing in a company. The “Kung Fu Panda” from Moolya raised his hand and said, “One thing – Courage!”. James Bach did a great write-up in Tea Time with Testers about how Moolya are changing the way a software testing company runs in India (or the world, for that matter), and after spending time with Pradeep and seeing loads of examples of their work, I know they take changing the software testing industry seriously.

And so should you. According to Mark Twain, courage is not the absence of fear – but the mastery of it. There are people working in software testing all over the globe who are questioning long standing ways of working – some for the first time. Get yourself energized and get involved. All it takes is a bit of self-reflection like the brand Pete Walen is selling, followed up by a healthy dose of action Moolya-style: courage!

Context-Driven Controversy? Meh…

“I don’t care to belong to a club that accepts people like me as members.” –  Groucho Marx

As a relative newcomer to the context-driven community, I read with great anticipation the latest post on the Context-Driven Testing website. Unfortunately, a pattern seems to be developing on that site where good ideas and topics for debate end up drowning in some bizarre public spectacle made up of innuendo, accusations, and irony. Sapient testing? Fuzzing? Checking vs testing? HVAT? How about: WTF? My view… who cares. All this leaves me feeling bored and, frankly, a little sad.

I’m drawn to the CDT community because I’m a skeptic. I’m not sure about anything, let alone the best approach to software testing, so the CDT philosophy of rejecting “best practices” hits all the right notes with me. Even after knowing about the Association for Software Testing for years, I’ve only recently been active, as I was not convinced you could effect change in software testing through “community” action.

I’m also interested in the CDT world because it forces testers by principle to do something that in my opinion, other approaches to software testing do not: use their brain.

This one-sided debate about who owns CDT, whether people are being censored, the existence of an anti-automation cabal – all of it smacks of a storm in a teacup. Is the CDT community under assault from a bunch of Luddite “consultants” looking for ways to leverage marketing materials? Huh? And are these same “consultants” censoring people while twisting their mustaches and hatching diabolical “manual testing only” plans? Seriously? When you get past all the smoke and rhetoric, I can tell you my experience has been exactly the opposite.

Want to know what Michael Bolton says about “checking vs testing”? Read this. Want to know if James Bach is “anti-automation”? Ask him here. They’re not firing off missives filled with thinly veiled attacks. Their work is out there for criticism – and trust me, they welcome questions. We all know how I feel about scrutiny, and after two years of extensive training and consulting work filled with long debate and personal discussions, I can assure you – neither James nor Michael is “anti-automation”, a CDT religious nut, or stifling feedback. What a bunch of nonsense.

But don’t take my or anyone else’s word for it – find out for yourself.

There are so many great things happening in the CDT community now – and they are happening on a global scale. Seeds that were sown years ago are starting to grow, and CDT is being adopted by larger organizations on a bigger scale and continues to gain acceptance into mainstream software testing philosophy. So let’s get on with the work at hand: questioning ourselves, rigorous debate, and building a vibrant testing community that improves the lives of software testers and meets the demands of our business.

“The people who get on in this world are the people who get up and look for the circumstances they want, and, if they can’t find them, make them.” – George Bernard Shaw

The Pursuit of Scrutiny

“It is rare for people to be asked the question which puts them squarely in front of themselves” – Arthur Miller, The Crucible

I love scrutiny. I love it so much that I try to constantly surround myself with people who challenge my views. Whether by directly asking for critique or by putting my work up for public review, I seek honest perspectives on my work so I can continually improve. I need scrutiny in the same way crucibles are used in laboratories: they withstand high temperatures to remove impurities. Scrutiny burns off impurities in my ideas and actions. It clarifies them. And I set the pace and tone for that scrutiny through how I give feedback to others: straight and to the point.

And guess what drives my desire for scrutiny: insecurity. That probably sounds funny coming from me, and might even sound like a weakness, but I assure you it can be one of your greatest strengths if used properly. My insecurity drives my quest for excellence in myself and in turn, in my teams. I am not confident we are always doing the right thing. I am not convinced we are always pursuing the right strategy for tools, process, and people. And it is exactly because I am not 100 percent secure in all my decisions, that I don’t just “like” scrutiny, I NEED it! In the never-ending pursuit of excellence, scrutiny is my compass.

Far too often I have seen testers get crushed under the weight of their own insecurities. And coupled with a fear of failure, that can have a paralyzing effect on either a person or team. Face your fears! Put yourself out there! Let the heat of challenging views and rigorous debate clarify and sharpen your ideas. At best you will have strengthened not only your position, but also your character, and if you fail, it would be, in the words of Theodore Roosevelt, “while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.”

I want my work to be excellent and able to withstand scrutiny, and if people aren’t willing to give it freely, I must wring it out of them. It’s my responsibility as a leader. I shout and bang my fist on my desk because I demand excellence from myself and my teams. I don’t know if we’ll ever get there, but through the pursuit, some awesome things are starting to happen. If testing is questioning a product in order to evaluate it, then in my opinion, that questioning must begin with the questioner.

Recently, I have witnessed or been involved in several discussions and Twitter threads that seem to be equating challenging an idea with attacking the person who proposed the idea. I reject that premise as no idea, person or thing defines me – so it is impossible to offend me. But even if they did offend me, or I felt it was a personal attack – so what! Force it through the crucible of your own scrutiny and all the imperfections will burn away. Professional testers honing their skills should never shy away from challenges to their ideas – they should not just welcome, but court them!

I leave you these words to inspire you to seek your “crucible”:

“It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.”

Theodore Roosevelt, Excerpt from the speech “Citizenship In A Republic” delivered at the Sorbonne, in Paris, France on 23 April, 1910

Exposing and Erasing Organizational Bias: An Interview with Keith Klain

In this very informative and revealing interview, Keith Klain discusses where biases among testing teams originated, and who’s to blame for their negative, lingering effects on projects of all shapes and sizes. We learned that testers aren’t exclusively to blame, but some serious self-reflection is definitely in order.

Noel: You’ve mentioned the need to overcome “organizational bias towards software testing.” Where did this bias originate, and do you see trends that lead you to believe it’s decreasing or increasing in size?

Keith: Organizational bias towards testing originates from lots of different sources, but it is primarily driven by the culture of the team. Collective behaviors make up our “corporate culture” and drive what we value as an organization, and through patterns you can identify how those values are articulated. Decades-old attitudes about the value and role of testing and testers (coupled with how we act ourselves) only reinforce those views. I also lay a good amount of blame on the testing industry itself for not taking a stronger position against some of the themes of the last 15 years that haven’t been particularly helpful to a craftsman approach to software testing.

Noel: You’ve also mentioned that testers themselves can be partially to blame for this bias’ existence – what have testing teams done to allow this bias to continue, and what can they do to help eliminate it?

Keith: If people are ignoring the information being produced by the testing team, in my opinion, that’s the test team’s fault. Testing produces some of the most vital information for making business decisions about risk, release dates, and coverage – how can that information be ignored! Speak the language of your project to understand what “value” means to your business. When you align your testing strategy and reporting methods to those, I guarantee you will not be ignored. In our organization, the responsibility for ensuring testing gets the focus it deserves lies with the test team, and no one else.

Noel: Do you feel that there have been some biases that have been around so long that testers and developers alike just assume they’re part of the culture? How do teams crack through that pessimism to begin to repair the damages that biases have caused?

Keith: Repairing the damage to the actual or perceived value of your team begins with a healthy dose of self-reflection. Knowing what you contribute to that bias and taking responsibility for changing your immediate environment is the only way it starts to change. There is a view in psychology that we teach people how to treat us, and not accepting ingrained aspects of culture will, at the very least, make your own life easier and possibly change things for the better. People disregard things they don’t value, and testing is an incredibly valuable part of the operation, so not allowing yourself to be subjected to that behavior begins with being able to articulate that value.

Noel: Once these biases are removed, what kinds of benefits should teams see outside of a healthier working environment? What kind of potentially positive financial impact does the absence of bias create?

Keith: One of the biggest benefits is that the conversation changes. It moves away from the standard (and boring) topics of quantifying your work, counting test cases, metrics, etc., to more meaningful ones like risk, quality, and business strategy. Testing teams often impose artificial limits on themselves and their relationship to the business they support, so when you remove those barriers, their self-confidence improves almost immediately. As well, we’ve seen the amount of extra work around training, coaching, and community support increase tremendously as teams connect with each other and share stories.

Noel: You’ve led the worldwide project, the Barclays Global Test Centre, to recruit and grow “highly motivated” testers. Do you look at this more as a level of motivation to succeed on a personal level, or to maintain, or even evolve the state of software testing today?

Keith: Our first and foremost responsibility is to provide great information through excellent software testing to allow Barclays to make informed decisions about their business. That’s the impetus for the change program in testing and our primary objective. I do believe we are having a positive impact on the state of testing outside of our direct control, and my teams know I have no less a goal for them than changing the software testing industry for the better! People get inspired when they feel they are making an impact, and that’s a big part of improving how your team is valued – and inspired people can do amazing things. As far as personal success, the test teams deserve all the credit for anything we’ve done, as they do all the work!

About the Author

A resident copywriter and editor for TechWell, SQE, and StickyMinds.com, Noel Wurst has written for numerous blogs, websites, newspapers, and magazines. Noel has presented educational conference sessions for those looking to become better writers. In his spare time, he can be found spending time with his wife and two sons—and tending to the food on his Big Green Egg. Noel eagerly looks forward to technology’s future, while refusing to let go of the relics of the past.

Improving the State of your Testing Team: Part Three – Strategic Objectives

Whenever we start a new testing effort, one of the first activities is to define the mission. Why are we testing? Who are our clients? What information are we trying to find? Knowing your mission is an important part of successfully meeting your project’s objectives and the driver for what you produce during the life of the project.

From an organizational perspective, it is my opinion that it is equally important to define the mission for your testing group. Laying out the high-level objectives for your team will give them a lens through which to view their work and prioritize whatever moves them closer to the goal. Driving congruent action in large or small teams, regardless of their location or distribution (or methodology), requires common themes people can personalize and manage themselves.

It is also imperative that your test team’s strategic objectives are aligned with your company goals. I know that sounds obvious, but I don’t run into many test teams that actually define their OWN objectives, let alone know and align with their business. The benefit of aligning your mission is that your team can now articulate how your testing effort contributes to the company’s progress. Want to increase the value of your test team? Give solid evidence of how it’s helping meet business goals.

The following are examples of objectives I give our test teams and are a guide for how I want the testers to judge the effectiveness of their work:

  • Manage risk by continually assessing and reporting on product quality

  • Decrease costs by increasing test efficiency

  • Deliver a best in class global testing service that uses industry leading techniques

  • Improve utilization of technology and tools through reuse and collaboration

Each of those links directly to an IT objective, which is in turn linked directly to the business. They are also worded specifically so that they don’t prescribe what people should do – but how they should continually question their work. Is what I am doing efficient? How does this activity help us get information about quality? Does my work make use of all available resources? How does what I am doing benchmark against what’s excellent work in the industry?

There are significant challenges in keeping people moving in the same direction, and testing objectives can bring an additional set of problems. Once you think you’ve got a good set of business-aligned goals, there is a huge hurdle I’ve seen that you should be prepared to address: numbers.

In my experience, if you want to derail your test improvement effort as quickly as possible, introduce a maturity model or metrics program. It sounds counterintuitive, but assigning number-based targets to your goals almost ensures they will interfere with achieving them. In his book “Measuring and Managing Performance in Organizations”, Robert Austin talks about measurement dysfunction and its consequences. That book was written over FIFTEEN years ago, but the testing industry is still rife with consultants and companies selling this stuff to your COO.

Trying to measure quality and testing through strictly quantitative measures flows directly from the false analogy of software testing to manufacturing. That’s a direct route to low-value testing and commoditization. Unfortunately, your test team will probably be the ones bringing the metrics to you, because they feel it’s a concrete way to demonstrate their value. (And some testers just love to count their test cases!) Don’t take the bait! You’ll be setting up an environment where people are valued based on their perceived productivity, and the next thing you know you’ll be talking about unit pricing!

Set business-aligned objectives that can be used to guide your efforts, and you’ll get long-term, sustainable improvement that can be tied directly to value. Good luck!

Improving the State of your Testing Team: Part Two – Principles

“But every difference of opinion is not a difference of principle.” – Thomas Jefferson

To inspire intelligent, thinking people to work together to solve large, organizational problems is a tremendous challenge. Creative people should not be constrained by process or managerial constructs that don’t add any value to their work.

I truly believe, as Daniel Pink illustrates in his book “Drive: The Surprising Truth About What Motivates Us”, that getting the best performance out of people requires heavy doses of autonomy and truly getting out of the way.

But what if you need to change the way a team operates? What do you do if you need to change the perception of your testing team across an IT organization with thousands of people, spread across multiple time zones and continents? How do you get people to not just understand your goals, but more importantly, realign their behavior to meet those challenges together?

As in my earlier post in this series, the only way I have succeeded in improving the performance and perception of testing is to align the foundations of our work environment. Starting with the shared values that underpin our approach, I then outline the principles of managing testing that everyone – from senior test managers to test analysts – can pattern their behavior on. The following principles are from our orientation, which everyone in my teams attends, and they are expected to demonstrate them in their work. They define how we measure and manage careers and are the anchor points for what we call “What we Expect out of You”.

1. People start ignoring testing when it is no longer relevant

If people are ignoring the information being produced by the testing team, in my opinion – that’s the test team’s fault. Good testing produces some of the most vital information for making business decisions about risk, release dates, and coverage – how can that information be ignored? Speak the language of your project to understand what “value” means to your business. When you align your testing strategy and reporting methods to that, I guarantee you will not be ignored. In our organization, the responsibility of ensuring testing gets the focus it deserves lies with the test team, and no one else.

2. Being responsible sometimes means rocking the boat

Software testing is the primary deconstructive process in a largely constructive activity. People who do analysis and development are going to be naturally biased towards confirming that something works and occasionally, you are going to have to tell them…wait for it…that it doesn’t! So what! I understand that testers want to be seen to be contributing to progress, but being a “service” to a project does not mean you are a “servant”. Critical thinking and challenging ideas to test them means you are going to rock the boat, and in fact, being “responsible” almost ensures it. Not everyone is going to like you…if you want a friend – buy a dog!

3. No one has the market cornered on good ideas

I’m pretty sure you don’t, but if you think your manager, or their manager, or the head of your company has all the good ideas – you’re wrong. You know where you’ll find loads of good ideas – all around you! Get to know the people next to you, on your team, on another product group, in the industry, so that you can learn from them and REUSE their ideas! Being efficient with project resources means discovering and using those resources no matter where they originate. Get to know your peers and you can use the force multiplier of combined experience to tap into all those great ideas lying around you.

4. Never stop asking why – question everything

The question “why” is the hammer in the toolbox of the thinking tester. In “An Introduction to General Systems Thinking”, Jerry Weinberg states, “As we work in less and less familiar situations, our inherited and learned perceptual capacities become less and less effective.” A great measure against the degradation of our perceptions is to continually clarify them through a rigorous course of “why”!

5. Invest 80% of your energy in your top 20%

The Pareto principle states that, for many events, roughly 80% of the effects come from 20% of the causes. I use this to review my schedule to make sure I am spending my time on the right meetings and people. I also use this approach for managing teams. It cuts across the trend to reward everyone for participating, but I believe we should be spending our time on the people who contribute the most to achieving our goals. And those people are not always the management team.

It’s also a great tool to help manage your career. Finding out what is important to your company, what the leadership team values, and then aligning yourself towards those gives you a better chance of being in the right place at the right time. No one is going to advocate for your career better than you, so find out who is in that top 20% and chart your own path.
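The 80/20 arithmetic behind this principle is easy to see in a small sketch. The names and numbers below are entirely hypothetical – the point is just that when contribution is skewed, ranking people (or meetings, or test activities) and looking at the top 20% shows where most of the output comes from:

```python
# Hypothetical contribution figures, purely to illustrate the 80/20 arithmetic.
contributions = [
    ("Alice", 50), ("Bob", 30), ("Carol", 6), ("Dan", 4), ("Eve", 3),
    ("Frank", 2), ("Grace", 2), ("Heidi", 1), ("Ivan", 1), ("Judy", 1),
]

# Rank by contribution, highest first.
ranked = sorted(contributions, key=lambda kv: kv[1], reverse=True)

# Take the top 20% of contributors (at least one person).
top_n = max(1, round(len(ranked) * 0.2))
top = ranked[:top_n]

total = sum(v for _, v in contributions)
top_share = sum(v for _, v in top) / total

print(f"Top {top_n} of {len(ranked)} people account for {top_share:.0%} of the output")
# With these made-up numbers: top 2 of 10 account for 80%
```

The same ranking works on anything you can count – hours in meetings, defects found per test charter – as a quick check on whether your energy is going to the top of the list.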

6. Leadership = Simplification

Due to various factors in the testing industry, the state of training and education for testers, and very often the project environments we find ourselves in, overcomplication can be a crutch employed to prove our value. Stop! The technology and projects are complicated enough (and getting worse), and when you add all the people problems, complexity goes through the roof. Leadership = Simplification.

The ability to distill a complex set of ideas into simple expressions is an advanced skill, and a sign of maturity. Your value as a tester is not measured in degrees of complexity in expression. Decisions are often made on less than complete and perfect information, and in my business, at a rate that requires quick and agile thinking. As a very senior manager once told me, “Keith, you’re giving me the Ph.D. version and I need it in Crayola!”

7. Don’t take it personally

Executing on any of these principles, or in alignment with our values, is almost impossible if you are personally attached to your ideas. As a reminder – you are NOT your ideas. Your ideas are made up of multiple variables with diverse and complicated origins, which are then viewed through the oftentimes foggy lens of your immediate perception. All those factors work together to put a thought bubble above your head. Some days that bubble has a light bulb in it…other days, a scribble. Change any one of those factors and you would get a different idea. If you are too attached to what is floating around in your head, you can’t take on new perspectives or views and, more importantly – change your mind.

8. Think first – then do

Lastly, if we are living all our values and adhering to our principles, we will be self-reflective in our decision-making process and extremely agile in implementing our decisions. We will not be chained to personalities or bias, and will have the ability to change our minds without fear of failure or repercussion. All of this will allow us to get on and get things done for the test team and the organization.

In the next post, I will talk about how I think you should go about linking your testing team to your company’s strategic objectives. Thanks!

Improving the State of your Testing Team: Part One – Values

As I run large testing teams for fairly large organisations and have done so for some time now, the questions I get asked most often are about how to improve testing’s position and how to talk about testing with “senior management”.

Quite frankly, I’ve failed more than I’ve succeeded in meeting all my goals in improving testing for the companies or clients I’ve worked for over the last 18+ years, but my approach has evolved significantly to set myself (and my teams) up for a better chance at success.

Over the next couple of posts, I am going to map out what I think are the essential elements for motivating testers, driving innovation, and improving the perception (and reality) of testers in any organisation…enjoy!

Test Management Values

Typically, the first thing out of my test teams’ mouths when asked “how can we improve the state of testing here?” relates to something that OTHER people should do. Very few people or teams take an introspective approach to improvement, or state their management values, but the ones that do typically have great success. Ray Dalio, who runs the world’s largest hedge fund, Bridgewater Associates, has had tremendous success in attracting talent and has articulated a very self-reflective and personal vision for his team and how he expects them to operate.

More senior management support, better appreciation of testing’s value, higher visibility into the organization – all the roads to those improvements start with YOU. And in my experience, the first step is defining your team’s values. I talk a lot about values with testers, and after the look of confusion about why a testing team would talk about values, it usually starts to sink in.

I talk about values in testing because they drive a lot of your behavior. Your values influence where and how you work, and with whom and where you spend your time. Defining a testing “value system” is a great way to create common goals, align behavior, and lower the management overhead of your operation. In relation to the state of testing in your organisation, I find that defining your team’s values in precise terms, and being able to articulate them with specific examples, is the first and most essential step to improving things.

The great thing about value-aligned improvements is that people can tie them to stories they can propagate throughout the company. So with that, here are the values I outline for my teams during orientation and discuss on a regular basis.

Honesty

You’d think this would be an easy one, and it is for most of us – when it comes to being honest with other people. But I believe it is essential to be honest with yourself – and THEN with other people. If you are not getting the recognition you deserve, the right level of regard for your team, your testing is not respected – why is that? Is it because of everyone else…or is it because of you? When it comes to honesty, too often testers do not point their highly focused lenses of perception at themselves and what they are doing to add to their problems.

One of my favorite quotes is from French physiologist Claude Bernard who said “It is what we think we know already that often prevents us from learning.” I try to live honestly every day, with every interaction with my teams – whether it be about finances, compensation, strategy, or any aspect of my operation.

HR departments have criticized me in the past for being TOO honest with people! Aside from being the right thing to do, I find being radically honest with people actually helps keep attrition down even through bad economic cycles, as teams manage their own expectations better. Additionally, we spend a lot of time and energy recruiting intelligent people, so why would we then treat them like children when it comes to managing our business?

Regardless, a healthy dose of self-reflection and being open about our strengths, and more importantly our weaknesses, is the only way to level set expectations and understand where to start. Nothing will undermine your efforts to improve more than tolerating dishonesty or deceiving yourself – remember, the only common denominator in all your dysfunctional relationships – is YOU!

Integrity

Now that we’ve been honest with ourselves about what is and is not working, and more importantly, what we are doing to contribute to the situation, the next value to address is integrity. Learning about our strengths and weaknesses does not give much benefit if we don’t change our behavior to reflect that knowledge. Integrity is a key aspect of changing the perception – and reality – of how your testing team is treated.

I’ve found that having the integrity to find shortcomings and change your behavior to address them is a sign of maturity that people in “senior management” typically identify with and respect. It sounds counter-intuitive, but learning from your mistakes is what earns you the right to have an opinion, because as the saying goes – all opinions are not created equal.

Accountability

Finally, not making excuses for why things are (or are not) happening and taking full responsibility for getting things done is at the very least a path to happiness. If you are being honest with yourself, have identified what you need to do to change, but then do absolutely nothing about it – well then, you shouldn’t be surprised when your role is processed, packaged, commoditized and shipped somewhere to chase FX rates around the world.

Irish playwright and co-founder of the London School of Economics, George Bernard Shaw, said, “The people who get on in this world are the people who get up and look for the circumstances they want, and, if they can’t find them, make them.” Being accountable for taking ownership of getting things done – upwards and downwards – is what people expect out of their leaders, and it will move the test team higher up the value chain.

So that’s step one. Defining, discussing, and living your values. Not a lot of teams, testing teams or otherwise, actually take the time to do something as simple as write them down, ask big questions, and then figure out how YOU want your team perceived. Taking control of that will bring a congruence of action that people will notice and give you a starting place for other improvements.

Next up will be the test management principles that are underpinned by these values…thanks!

The skilled testing revolution…

Over the last couple of years, the rumors of software testing’s imminent demise have been widely reported – and I am happy to confirm that it’s true. There are testing zombies walking all over the planet right now that are starting to realize that their approach to their profession is either dead or on life support.

Their scripted and mindless approach to their jobs, combined with shrinking profit margins for the giant test factories, has forced a massive “re-think” of how they deliver value to their projects and clients.

And regardless of what the “agilistas” would have you believe, there has never been a better time to be a software tester – albeit the right kind of software tester. I travel a lot and talk to hundreds of testers every year, and there is a palpable energy connecting people from New Zealand to India, Singapore, Sweden, Ukraine, Hong Kong – all over the globe. Testers are waking up from their certified torpor and seeing the value of skilled testing. Clients are starting to realize that the old ways of forcing metrics and measures into contrived “maturity” and quality models haven’t really improved anything. And vendors are starting to wake up to the fact that their business models need to evolve to meet their clients’ demands for real testing value.

The community of context driven testers can be a very small world and get caught up in assumptions and “bubble-based” thinking about the state of software testing. Unfortunately, the majority of software testers still work in highly scripted, manual software checking models, and only dream about the opportunity to use their brain and be rewarded for asking “why”. But those days are coming. During my most recent trip to India, I spent a lot of time with James Bach, and by the end of the week we were buzzing with all the energy being generated by the testers there – fearless, hungry to learn, and challenging the status quo.

The first shots in the skilled testing revolution have been fired. Hopefully this blog will play a small part in bringing the battle to the front…

Welcome to Quality Remarks

Welcome to Quality Remarks…after several requests and much thought, I have jumped into the world of software testing blogging…if you don’t know who I am, my name is Keith Klain and I have almost 20 years of experience managing enterprise-wide quality programs for financial services and global IT consulting firms. I love solving organizational problems through better software testing and am passionate about coaching and empowering testers. This is a collection of my thoughts, experiences and things I think are important to managing and improving software testing. Thanks – KK