File another one under “what could possibly go wrong”. From the article:
“Statewatch says data from people not convicted of any criminal offence will be used as part of the project, including personal information about self-harm and details relating to domestic abuse. Officials strongly deny this, insisting only data about people with at least one criminal conviction has been used.”

It will be interesting to watch the gymnastics required to bring this project into compliance with the EU AI Act, which comes into force in August. The act deals specifically with safety and the potential for harm arising from the risks of AI.
“The AI Act introduces a uniform framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach:
Minimal risk: most AI systems such as spam filters and AI-enabled video games face no obligation under the AI Act, but companies can voluntarily adopt additional codes of conduct.
Specific transparency risk: systems like chatbots must clearly inform users that they are interacting with a machine, while certain AI-generated content must be labelled as such.
High risk: high-risk AI systems such as AI-based medical software or AI systems used for recruitment must comply with strict requirements, including risk-mitigation systems, high quality of data sets, clear user information, human oversight, etc.
Unacceptable risk: for example, AI systems that allow “social scoring” by governments or companies are considered a clear threat to people’s fundamental rights and are therefore banned.”
In what world is this compliant, or even remotely okay? We don’t have to accept this dystopian future being forced on us, yet all the warnings are being blown right past. Hope you’re happy, AI fanboys. The technology community had better wake up before it’s too late, but I’m beginning to feel it already is…
Source material from FOI requests by Statewatch