Artificial Intelligence & Machine Learning , Governance & Risk Management , Government

Rights Groups Call Out Shortcomings in EU Convention on AI

Critics Fear Exceptions for Private Sector, National Security Could Weaken Privacy

Privacy groups are urging European lawmakers involved in the finalization of the global treaty on artificial intelligence to tighten rules surrounding the use of AI by the private sector and governments.


The Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law is set to be the world's first international treaty on AI governance. Proposed last year, the draft convention primarily seeks to ensure that AI systems uphold human rights and the rule of law.

Among its signatories will be the 27 European Union countries, the U.S., Japan and Canada.

Ahead of final negotiations on the treaty, set for March 11, 90 privacy rights groups, academics and activists have asked the European lawmakers engaged in the talks to push back against efforts to exclude private companies from the treaty's scope.

Under the current draft convention, AI system deployers must comply with several measures such as privacy, data processing and risk assessment requirements, though the treaty exempts AI systems used for research and development purposes from its scope. AI systems used for state "national security" purposes in foreign intelligence and counter-intelligence-related activities are also omitted from the scope of the convention.

In the letter to the lawmakers, the signatories argued that such "blanket exceptionalism" will "weaken" the convention by providing little "meaningful protection to individuals" who are increasingly subjected to AI bias and manipulation.

"This would send a dangerous signal: The first international rulebook on AI could thus give corporations a free pass to develop and use AI according to their own interests," said Angela Müller, executive director of AlgorithmWatch and a signer of the letter. "The negotiating states must ensure that AI serves the interests of humanity and not those of a few big corporations."

The letter calls on EU lawmakers to "reject" exceptions granted to private companies and AI systems categorized under "national security."

Concerns about alleged lobbying by European states to protect the interests of big tech companies also marred the final negotiations on the European AI Act last December. France, Germany and Italy were among the nations that raised last-minute objections to any binding rules affecting general-purpose AI, arguing that such rules would hamper European AI firms' ability to compete with American tech giants such as OpenAI.

French AI startup Mistral AI was reportedly among the companies pushing to introduce a lighter regulatory regime, and rights groups later argued that European lawmakers had taken cues from the lobbying effort to introduce loopholes into the regulation.

In the wake of the Mistral AI and Microsoft partnership announced last week, many critics said European companies invoke "European digital sovereignty" only to coax lawmakers into bending the rules for them (see: EU to Analyze Partnership Between Microsoft and Mistral AI).

About the Author

Akshaya Asokan


Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.
