AST CAST 2013 – The good, the bad, and the cheese curds…


My journey to Madison for the Association for Software Testing’s (AST) annual conference (CAST) can be summed up in two words: Paul Holland. Not only had I been working with Paul the previous weeks at Per Scholas teaching the STEP class, but he was also the lead facilitator at CAST and, unbeknownst to me, my travel buddy. I found out that Paul was traveling on the same flight from NYC to Madison at the same time (7am on Saturday), and better still, he swapped his seat to sit next to me so we could share in our sleep-deprived state.

Now, ordinarily, as someone who travels a great deal for work, I rarely speak to anyone on a plane, as it is often the only time I get to read, catch up on videos, or just enjoy a moment of silence from my busy life. But if you know Paul, he’s a lot like me: once he gets going, he never stops talking! And we were both highly charged from the previous week together, so I feel really sorry for all the people sitting around us who probably learned more than they ever thought they would about the software testing industry. As this was my first CAST, I wasn’t sure what to expect, but if the trip to Madison was any measure, I was sure it was going to be a corker. Here are my impressions from my time in Madison: the good, the bad, and the cheese curds…enjoy!

[Photos: The “bald eagles” of Testing; Testing talent at the Hilton]

The good…

Let’s start with the real star of the conference: the people. I was honestly not ready for how many fantastic testers would be concentrated in one place, and if you like discussing (or arguing about) things, CAST was the place to be. Each day (outside of a couple hours of work beforehand) started at around 9am and didn’t finish until after 1am. The time was filled with great conversations with extremely talented testers from all over the world and covered too many topics to list. If CAST is about putting the “confer” back into conferences, then they had this down in spades.

The next part of the conference I really enjoyed was the facilitated discussions. They are highly unusual in my experience of software testing conferences, but now something I think is vital to learning and to getting your money’s worth. Most conferences allow Q&A with the speakers if “time permits”, but in my experience those sessions are usually taken up by people wishing to make statements, or by questions so off topic they are just a distraction. Some of the facilitators did a better job than others, but when it worked well (which was most of the time) it added to the experience and guided the “open season” section to wring all the value out of the talk.

Another observation I had about CAST which stood out from other conferences I have attended is the number of women not only in attendance, but also participating as speakers. As someone who hires loads of testers, and feels we should be casting a large and diverse net for candidates and opportunities to enter the field, it was particularly encouraging to see so many talented women software testers in one place. Jean-Ann Harrison, Anne-Marie Charrett, Claire Moss, Dee Ann Pizzica, Anna Royzman, Julie Hurst, Alessandra Moreira, Jay Philips, Lou Perold, and Dawn Haynes are all great examples of excellence in testing for everyone in the field.

Speaking of Dawn Haynes, she absolutely killed her keynote on “Introspective Retrospectives: Lessons Learned and Re-Learned”. Honest. Authentic. Full of self-reflection. I was shocked to hear that it was the first talk she had given at CAST. It was so easy to connect with her stories, and her style was so accessible, that I found myself analyzing my own decisions and relationships during her talk. You can watch the entire talk here.

But the highlight for me was Erik Davis’s talk on “How to Find Good Testers in the Rust Belt”. Forget that it was probably one of the best presentations I’ve seen in a long time on visual and technical merit alone. You might even gloss over the fact that Erik basically gave a master class in hiring testers ANYWHERE, let alone in the relative isolation of the mid-Cleveland market. But there was no denying that his honest and funny communication of key ideas (candidate background risks and issues, casting a wide recruitment net, and LOADS of experiential advice on how to hire and not hire testers) was world class in its execution. Pay attention, conference chairs: Erik Davis is keynote worthy and has the chops to headline a conference.

[Photos: “Mr Friendly”; Madison, WI]

The bad…

So now for some disappointments from my five days in Madison, and top of the list is that, despite my personal experience with great discussions, there weren’t enough of them! Specifically, I mean during the “open season” portion of the talks, which is supposed to be where we get up and ask questions of the speakers. I can count only a handful of times where I felt the speaker was being challenged or a contrarian view was being expressed. Some of the brightest minds in software testing were gathered together in one of the few forums able to generate some light (or heat), and we should have been taking full advantage of the opportunity. As I tweeted then, “Hey Testers, if you are not getting engaged with the thought leaders at CAST2013 – you’re doing it wrong!”

All this leads to my next point, which is the large amount of confirmation bias in the discussions I had with speakers and attendees. I realize that there is a high likelihood of this occurring, as we are all self-identified “context-driven” testers, but I was holding out for a bit more controversy. Ranking on the ISTQB (guilty!), ranting about automation, and the schools of testing were variations on a lot of common themes throughout the days’ and nights’ activities. As we grow and mature as a community, I believe we should feel secure enough in our relationships to scrutinize more of the accepted truths of our world view.

And the cheese curds…

Finally, as someone who grew up in hostile “Sconi” territory (Illinois), I have to say Madison was a great time with good food, good sights, and good beer. My overall impression after my first CAST is pure mental exhaustion, with too many ideas to plow through in too short a time. Being surrounded by a veritable “who’s who” of CDT experts was quite an experience, and I look forward to the next one – only with fewer cheese curds.


Exposing and Erasing Organizational Bias: An Interview with Keith Klain

In this very informative and revealing interview, Keith Klain discusses where organizational bias toward testing teams originated, and who’s to blame for its negative, lingering effects on projects of all shapes and sizes. We learned that testers don’t have themselves to blame exclusively, but some serious self-reflection is definitely in order.

Noel: You’ve mentioned the need to overcome “organizational bias towards software testing.” Where did this bias originate, and do you see trends that lead you to believe it’s decreasing or increasing in size?

Keith: Organizational bias towards testing originates from lots of different sources, but it is primarily driven by the culture of the team. Collective behaviors make up our “corporate culture” and drive what we value as an organization, and through those patterns you can identify how our values are articulated. Decades-old attitudes about the value and role of testing and testers (coupled with how we act ourselves) only reinforce those views. I also lay a good amount of blame on the testing industry itself for not taking a stronger position against some of the themes over the last 15 years that haven’t been particularly helpful to a craftsman approach to software testing.

Noel: You’ve also mentioned that testers themselves can be partially to blame for this bias’ existence – what have testing teams done to allow this bias to continue, and what can they do to help eliminate it?

Keith: If people are ignoring the information being produced by the testing team, in my opinion that’s the test team’s fault. Testing produces some of the most vital information for making business decisions about risk, release dates, and coverage – how can that information be ignored? Speak the language of your project to understand what “value” means to your business. When you align your testing strategy and reporting methods to those, I guarantee you will not be ignored. In our organization, the responsibility of ensuring testing gets the focus it deserves lies with the test team, and no one else.

Noel: Do you feel that there have been some biases that have been around so long that testers and developers alike just assume they’re part of the culture? How do teams crack through that pessimism to begin to repair the damages that biases have caused?

Keith: Repairing the damage to the actual or perceived value of your team begins with a healthy dose of self-reflection. Knowing what you contribute to that bias and taking responsibility for changing your immediate environment is the only way it starts to change. There is a view in psychology that we teach people how to treat us, and not accepting ingrained aspects of culture will, at the very least, make your own life easier and possibly change things for the better. People disregard things they don’t value, and testing is an incredibly valuable part of the operation, so not allowing yourself to be subjected to that behavior begins with being able to articulate that value.

Noel: Once these biases are removed, what kinds of benefits should teams see outside of a healthier working environment? What kind of potentially positive financial impact does the absence of bias create?

Keith: One of the biggest benefits is that the conversation changes. It moves away from the standard (and boring) topics of quantifying your work, counting test cases, metrics, etc., to more meaningful ones like risk, quality, and business strategy. Testing teams often impose artificial limits on themselves and on their relationship to the business they support, so when you remove those barriers their self-confidence improves almost immediately. We’ve also seen the amount of extra work around training, coaching, and community support increase tremendously as teams connect with each other and share stories.

Noel: You’ve led the worldwide project, the Barclays Global Test Centre, to recruit and grow “highly motivated” testers. Do you look at this more as motivation to succeed on a personal level, or to maintain, or even evolve, the state of software testing today?

Keith: Our first and foremost responsibility is to provide great information through excellent software testing to allow Barclays to make informed decisions about their business. That’s the impetus for the change program in testing and our primary objective. I do believe we are having a positive impact on the state of testing outside of our direct control, and my teams know I have no less a goal for them than changing the software testing industry for the better! People get inspired when they feel they are making an impact, and that’s a big part of improving how your team is valued; inspired people can do amazing things. As for personal success, the test teams deserve all the credit for anything we’ve done, as they do all the work!

About the Author

A resident copywriter and editor for TechWell, SQE, and StickyMinds.com, Noel Wurst has written for numerous blogs, websites, newspapers, and magazines. Noel has presented educational conference sessions for those looking to become better writers. In his spare time, he can be found spending time with his wife and two sons—and tending to the food on his Big Green Egg. Noel eagerly looks forward to technology’s future, while refusing to let go of the relics of the past.