The European Commission is drafting the AI Act, which is set to regulate the use of artificial intelligence, including generative AI tools. Expected to be the world’s first comprehensive law governing AI, the act will require companies to disclose any copyrighted material used to develop systems such as ChatGPT and Midjourney.
Under the proposals, AI tools will be classified by their perceived risk level, ranging from minimal through limited and high to unacceptable. High-risk tools will not be banned, but those deploying them will need to be highly transparent about their operations. This transparency requirement was a late addition to the proposals, according to sources familiar with the discussions.
While some committee members initially proposed banning the use of copyrighted material to train generative AI models altogether, this idea was abandoned in favor of the transparency requirement: companies deploying generative AI tools such as ChatGPT must disclose any copyrighted material used to build them.
The European Parliament agreed to push the draft through to the next stage, the trilogue, during which EU lawmakers and member states will thrash out the final details of the bill. Svenja Hahn, a European Parliament deputy, said that the proposed regulations found a solid compromise that would regulate AI proportionately, protect citizens’ rights, foster innovation, and boost the economy.
Macquarie analyst Fred Havemeyer said the EU’s proposal was “tactful” rather than a “ban first, and ask questions later” approach. The EU has been at the forefront of regulating AI technology, and Microsoft-backed OpenAI provoked awe and anxiety when it unveiled ChatGPT late last year.
The ensuing race among tech companies to bring generative AI products to market concerned some onlookers, with Twitter owner Elon Musk backing an open letter calling for a six-month halt to the development of such systems.
Shortly after Musk signed the letter, the Financial Times reported that he was planning to launch his own startup to rival OpenAI.
The AI Act is expected to be a groundbreaking piece of legislation that will set the standard for how artificial intelligence is developed and deployed across the EU. The regulation will apply to all AI systems that are deployed or sold in the European Union, regardless of where they were developed.
The aim of the legislation is to protect consumers and ensure that AI systems are developed in a way that is safe, transparent, and accountable. This will be achieved through a series of rules governing the development, deployment, and use of AI systems.
While regulating AI is a complex undertaking, the need for oversight is clear. AI has the potential to transform many areas of our lives, but it also raises ethical and societal questions that need to be addressed.
The EU’s AI Act is just one step towards addressing these issues, and it will be interesting to see how the legislation is received by the tech industry and the wider public.
With the growing interest in AI, it is likely that other countries will follow the EU’s lead and begin to develop their own regulations to govern the development and use of AI systems. This could lead to a more standardized approach to AI regulation across the globe, which would be a positive step toward ensuring that AI is developed and used in a safe and responsible way.