The State of AI in 2025: McKinsey Report

McKinsey just published its report, "The state of AI in 2025: Agents, innovation, and transformation," and aside from the usual overly optimistic responses around ROI and resource reduction, it contains some interesting (and troubling) data on AI risk and mitigation.

The online survey of nearly 2,000 respondents was conducted in June and July of this year and draws on what appears to be a broad sample of companies, regions, and industries from all over the world.

As expected, most enterprises are still experimenting with AI and agents and haven't really begun to scale their efforts. Based on most of the large organizations I've worked with, I suspect this is primarily due to the inefficiencies of any global operation and large-scale duplication of effort.

Another unsurprising part of the report is that most of the money spent on AI is being treated as pure innovation investment and hasn't returned anything to the bottom line yet in terms of EBIT. Most organizations have set some objectives for AI, but it looks like we're still in the FOMO phase for the bulk of the money being thrown at enterprise AI pilots.

Interestingly, as companies start to figure out how to implement all this new tech, it appears we're in for a new version of the only modestly successful "digital transformation." Redesigning a business has to start with the underlying workflows, and it's all of us who will take that on the chin from an operational cost standpoint as the excuse that "AI is making us more efficient so we can fire people" loses credibility.

Looks like layoffs are back on the menu!

More worrying is the report's finding that heavy AI users recognize AI-related risks but lack mitigation strategies. Per the report, McKinsey has "consistently found that few risks associated with the use of AI are mitigated," including risks around "personal and individual privacy, explainability, organizational reputation, and regulatory compliance." In fact, they found that those risks have only grown since the previous version of the report in 2022.

Most troubling, almost a third of the participating companies stated they have already experienced negative consequences from problems with AI accuracy. This makes sense, as accuracy is the biggest area to test for risk when you deploy AI, especially generative models, and try to integrate their output into your business processes.

As more AI gets deployed into the enterprise, my instinct is that explainability and regulatory issues are going to jump up the priority charts quickly, since they can run directly into the legal system if you get them wrong. Regardless, it looks like 2026 is going to be another big year for AI adoption, and the quality engineering community had better be ready!
