Writing the Rulebook: Successful Legislative Move Paves the Way for AI Control

On the same day that Brussels filed a fresh antitrust action against Google, European lawmakers voted to move ahead with legislation on artificial intelligence, making them the first in the world to do so. The final version of the ‘AI Act’ is expected to be approved before the end of the year.

Here’s what we know about this at the moment:

There will be new transparency requirements for generative AI. One proposal, backed by the publishing industry but opposed by technology developers as technically impossible, would require companies to publish summaries of the copyrighted works used to train their systems. Developers of generative AI will also need to install filters to prevent their systems from producing pirated material.
To regulate artificial intelligence, the bill takes a risk-based approach, prioritizing the software with the greatest potential to harm humans. Examples include systems used in the administration of justice, the provision of public services, and the management of essential utilities such as water and electricity. Before releasing their products to the public, technology companies will need to undergo risk evaluations, similar to the FDA’s drug review process.
Policymakers voted to outlaw live facial recognition, though exemptions for national security and law enforcement purposes are still under debate.
The most influential figures in technology are attempting to steer the conversation in a certain direction. Over the past few months, Sam Altman, the CEO of OpenAI (the maker of ChatGPT), has met with more than one hundred legislators in the United States as well as politicians elsewhere, including the President of the European Commission. Altman has argued that the EU’s proposed approach is extremely unlikely to ever be implemented.

  1. Despite common usage, the terms “regulation” and “control” do not always mean the same thing. Crime, black markets, and the underground economy have repeatedly demonstrated that individuals and enterprises are skilled at evading, and even outright disregarding, legal constraints. Since AI’s core components (its algorithms and large databases) are fundamentally intellectual property, the same is likely to be true for the vast majority of applications of the technology. Who or what will be responsible for enforcing penalties for regulatory infractions remains an open question.
  2. The European Union has its flaws, but it deserves credit as the place where much landmark legislation is enacted. While Europe has many more political parties than the United States, they are often more constructive and productive than Congress. I believe there is a good reason for this: unlike in the United States, EU decision-making is not swayed by political money from technology corporations. Under the proposed rules, every copyright owner and publisher could demand compensation, shattering the AI business model and making the new European law prohibitively expensive for AI enterprises. Piling costs and legal difficulties onto AI is a surefire way to drive the industry into a corner. However, a significant regulatory hurdle remains: learning enough about the technology to foresee the limits that will be needed. The task ahead of those in charge of oversight is daunting, but I’m delighted they’re making an effort.
  3. Striking the right balance between oversight and creativity in AI development will be a major challenge. In other words, the question is how to give European developers enough flexibility, and enough safety, to design meaningful AI solutions.