My talk at Test Automation Days on the “death of test engineering” was a harsh message, and I know a lot of people didn’t want to hear it but it’s happening whether we like it or not. The following comes from the guys “selling the sticks”, but this just published report by Anthropic “Labor market impacts of AI” seems to support my point that Test Engineers are under direct threat by AI.
As I pointed out in the talk, test engineering is a prime candidate for AI per the reports measures that “qualitatively captures several aspects of AI usage that we think are predictive of job impacts.” A job is more at risk to them if:
- Its tasks are theoretically possible with AI
- Its tasks see significant usage in the Anthropic Economic Index
- Its tasks are performed in work-related contexts
- It has a relatively higher share of automated use patterns or API implementation
- Its AI-impacted tasks make up a larger share of the overall role


I took some questions on what should test engineers do next. and that’s a fair point, so here’s my attempt at a roadmap on how to stay relevant in the age of AI in testing.
What to Read
Start with the core materials from my AI in Software Testing Starter Pack. These will help testers understand AI limitations, bias, governance, and the sociotechnical context of AI systems.
Core recommended reading from that list:
- The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want: Emily Bender & Alex Hanna
- Artifictional Intelligence: Against Humanity’s Surrender to Computers: Harry Collins
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?: Emily Bender, Timnit Gebru, Angelina McMillan-Major, Margaret Mitchell
- The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity: Shojaee et al.
- The Pursuit of Fairness in Artificial Intelligence Models: A Surve: Kheya, Bouadjenek, Aryal
- Against the Commodification of Education: Dagmar Monett & Gilbert Paquet
- Responsible AI: Implement an Ethical Approach in Your Organization: Olivia Gambelin
- Empire of AI: Karen Hao
- NIST AI Risk Management Framework
- Observability Engineering: Majors, Fong-Jones, Miranda
- Regulations: EUAI Act, PRA SS1/23, South Korea AI Basic Act
- Databricks has a wealth of reading in their Resources
What to Learn
To stay relevant in the AI testing era, focus on capabilities machines struggle with:
- Risk modeling and impact analysis
- Observability and production telemetry
- AI evaluation design (benchmarks, prompt testing, adversarial cases)
- Bias, fairness, and model drift detection
- Failure forensics and incident analysis
- Experimentation and hypothesis-driven testing
- Systems thinking across data, models, and infrastructure
And I cannot stress this enough, read Taking Testing Seriously and take a course in Rapid Software Testing! They are only people putting out training that is worthwhile in this current environment.
Who to Follow
Follow thinkers working at the intersection of testing, systems, and AI safety (mostly on LinkedIn). This list will expand your own list of people to follow just by reading what they post.
- Dagmar Monett
- Emily M. Bender
- Timnit Gebru
- Charity Majors
- Karen Hao
- Fiona Charles
- James Bach
- Michael Bolton
- Richard Bradshaw
- Maaike Brinkhof
- Alan Richardson
- Olivia Gambelin
- Alex Hanna, Ph.D.
The bottom line: don’t compete with AI at writing test scripts.
Use your expertise for understanding complex systems, identifying risk, and explaining failure.
That’s where the next generation of testers will create value.
Discover more from Quality Remarks
Subscribe to get the latest posts sent to your email.