EU lawmakers to take tiered approach to regulating generative AI
The EU Parliament is working to tackle generative AI as it fixes its negotiating position ahead of legislative talks next month, with a final consensus on the bloc's draft law hoped for by the end of the year.
Member of the European Parliament (MEP) Dragos Tudorache, the co-rapporteur for the EU’s AI Act, told TechCrunch, “This is the last thing still standing in the negotiation. As we speak, we are crossing the last ‘T’s and dotting the last ‘I’s. And sometime next week I’m hoping that we will actually close — which means that sometime in May we will vote.”
In recent months, lobbyists from tech giants such as Google and Microsoft have been pressing lawmakers over how generative AI should be regulated.
Tudorache said MEPs favour a layered approach in drafting the rules: one layer addressing responsibilities across the AI value chain, a second ensuring foundational models get some guardrails, and a third tackling content issues specific to generative models.
According to Tudorache, "In order to comply [with the AI Act] it needs to explain how the model was trained. The accuracy of the data sets from biases [etc].”
He goes on to explain that the second layer is aimed at foundational models, which, given their power, versatility and how they are trained, need to do certain things. “And it has to do with transparency, it has to do, again, with how they train, how they test prior to going on the market. So basically, what is the level of diligence, the responsibility that they have as developers of these models?” he says.
For the third layer, addressing copyright, Tudorache explains: “we’re not inventing a new regime for copyright because there is already copyright law out there. What we are saying… is there has to be documentation and transparency about the material that was used by the developer in the training of the model. So that afterwards the holders of those rights… can say hey, hold on, what you used my data, you use my songs, you used my scientific article — well, thank you very much that was protected by law, therefore, you owe me something — or no. For that, we will use the existing copyright laws. We’re not replacing that or doing that in the AI Act. We’re just bringing that inside.”
Adoption of the planned EU AI rulebook is still some way off, but with the rapid pace at which AI is progressing, time is of the essence. The Commission’s original draft proposed to regulate AI by categorising applications into risk bands.
Low-risk apps, which would make up the bulk, would face no legal requirements, while a handful of unacceptable-risk use cases would be banned outright. In between sits a band of applications with clear potential safety risks that are nonetheless deemed manageable.
“High-risk” categories, where AI is used in areas touching safety or human rights, such as law enforcement, justice, education, employment and healthcare, will face a regime of pre- and post-market compliance and a series of obligations, with potential enforcement and penalties if requirements are breached.
According to the Commission, chatbots and deepfakes do not fall under the high-risk category and would therefore only have to comply with transparency requirements.
The EU AI Act is unlikely to take effect before 2025, with the added risk that the EU’s co-legislators could revise or amend the proposed draft. The MEPs’ main concern is to ensure that underlying generative AI models such as OpenAI’s GPT cannot dodge regulation by claiming they have no specific intended purpose.
“We need to start actively reaching out towards other like-minded democracies [and others] because there needs to be a global conversation and a global, very serious reflection as to the role of this powerful technology in our societies, and how to craft some basic rules for the future,” urges Tudorache.