EU Privacy Under Fire from Big AI?

“European Commission accused of ‘massive rollback’ of digital protections”

This could be bad news for consumers and vulnerable communities if it goes ahead. From the article:

“The commission also confirmed the intention to delay the introduction of central parts of the AI Act, which came into force in August 2024 and does not yet fully apply to companies.

Companies making high-risk AI systems, namely those posing risks to health, safety or fundamental rights, such as those used in exam scoring or surgery, would get up to 18 months longer to comply with the rules.”

Industry is already so far out in front of regulation that we need to STRENGTHEN these measures, not delay them further.

The EU AI Act categorises “high-risk” systems into two types:

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

…and more worryingly:

2) AI systems falling into specific areas that will have to be registered in an EU database:

– Management and operation of critical infrastructure
– Education and vocational training
– Employment, worker management and access to self-employment
– Access to and enjoyment of essential private services and public services and benefits
– Law enforcement
– Migration, asylum and border control management
– Assistance in legal interpretation and application of the law.

We don’t need another 18 months to consider whether this is a good idea, and in some cases the horse has already left the barn for these protections.

There is also a widely held reading of this as an attempt to rewrite privacy laws and grant AI companies exceptions to train their GenAI models, all in the name of encouraging innovation.

We have to do a lot better than this to win and maintain public trust in artificial intelligence and big tech.
