And here . . . we . . . go!

In another failed chapter in the never-ending book of “encouraging good behaviour”, the G7 have apparently agreed to a “code of conduct” for companies building #artificialintelligence systems. Per Reuters, “the voluntary code of conduct will set a landmark for how major countries govern AI, amid privacy concerns and security risks…”. Colour me cynical, but I think we’ve seen how this movie plays out. You don’t need to expend any energy finding the billions being invested in new generative AI and other AI systems, piled on top of the billions ALREADY spent on systems actively in use.

At least the EU is keeping up the pretence of governing the use of AI on the public, unlike its US and Asian counterparts, who “have taken a more hands-off approach than the bloc to boost economic growth.” Good grief!

You can read the code of conduct for yourself and I plan to take a closer look, but here are some thoughts at first pass.

The code “urges companies to take appropriate measures to identify, evaluate and mitigate risks across the AI lifecycle”, which seems to all happen after the fact: “Organizations should use, as and when appropriate commensurate to the level of risk, AI systems as intended and monitor for vulnerabilities, incidents, emerging risks and misuse after deployment”.

If using the word “risk” is a drinking game, don’t drive after reading this: “The risk management policies should be developed in accordance with a risk based approach and apply a risk management framework across the AI lifecycle as appropriate and relevant, to address the range of risks associated with AI systems, and policies should also be regularly updated.” What. Does. That. Even. Mean…? Honestly…

Finally, the code seems to just trail off at the end on a subject that probably requires its own code: data quality and bias. “Organizations are encouraged to take appropriate measures to manage data quality, including training data and data collection, to mitigate against harmful biases.” IME we haven’t even come close to cleaning up or preventing bias in large data sets, and seeing as some pretty big names decimated their #techethics teams, I think all the encouraging in the world isn’t going to make much of a difference.

Once again, public governance of critical systems and emergent technology is woefully behind an industry that is already over its skis. As we continue to “ride the insane horse towards the burning stable” of using AI, I think it’s well past time the software testing and quality engineering community got its act together and quit fighting over how to make AI a better unskilled tester…

The Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems can be downloaded HERE.
