The US Mission to the EU has formally opposed the European Commission’s General-Purpose AI Code, a set of guidelines intended to ensure compliance with the EU AI Act. The objection comes as Big Tech companies, including OpenAI, Google, Meta, Microsoft, and Amazon, have lobbied the Commission to weaken the Code. A report by Corporate Europe Observatory and LobbyControl, released on Wednesday, details how these firms influenced the drafting of the Code to their advantage, raising concerns over the fairness and transparency of a process that critics argue has been shaped by powerful tech giants.
The Influence of Big Tech on the AI Code
A recent investigation found that Big Tech firms had a significant influence on the drafting of the European Commission’s General-Purpose AI Code, which aims to guide compliance with the EU AI Act. According to a joint report by Corporate Europe Observatory and LobbyControl, tech companies enjoyed structural advantages throughout the Code’s creation, directly shaping decisions that diluted provisions governing powerful AI systems.
In September 2024, the European Commission appointed 13 experts to lead the drafting of the Code. These experts conducted workshops and plenary sessions, gathering input from around 1,000 participants. However, tech companies were granted privileged access to key decision-makers, while other stakeholders had limited opportunities to contribute.
Exclusive Workshops with Industry Leaders
One of the most notable concerns raised by the report is the preferential treatment given to major tech firms. The Commission held exclusive workshops with 15 leading AI model providers, including OpenAI, Google, Meta, Microsoft, and Amazon. These meetings provided the tech giants with early and direct access to the experts who were shaping the AI Code. This gave them a unique advantage in influencing the guidelines, ensuring that their interests were well-represented.
In contrast, civil society groups, publishers, and smaller businesses were largely sidelined. Their feedback was limited to submitting emoji reactions via Slido, an online engagement platform. This stark disparity in access and influence has raised concerns about the fairness, transparency, and overall integrity of the consultation process.
Intellectual Property Concerns and US Opposition
The draft version of the AI Code has also faced criticism from copyright holders and publishers. They have argued that certain provisions could conflict with existing intellectual property laws, potentially undermining protections for their content. This concern was echoed by the US Mission to the EU, which sent a formal letter to the Commission earlier this week, officially objecting to the proposed AI Code.
In the letter, US officials argued that the EU’s approach could stifle technological growth and innovation. The Trump administration had previously voiced similar concerns, urging the Commission to reconsider the scope and direction of its AI governance framework. The US Mission’s objection underscores the ongoing tension between the EU’s regulatory approach and the interests of global tech companies.
Big Tech’s Growing Influence in AI Regulation
Bram Vranken, a researcher at Corporate Europe Observatory (CEO), criticized the Commission for prioritizing “simplification” and “competitiveness” in the drafting of the Code. He argued that these goals created an environment in which Big Tech companies could exert significant influence, effectively shaping the rules to their advantage. Vranken warned that the AI Code was just the first step in a broader push by the tech industry to deregulate AI technologies and minimize oversight.
The concerns raised by Vranken and other critics suggest that the European Commission may face increasing pressure to balance the interests of powerful tech firms with the need for robust regulation that protects public interest. As AI technology continues to evolve, finding this balance will be crucial for ensuring that the benefits of innovation are shared widely, while mitigating potential risks to privacy, security, and fairness.
Delay and Uncertainty Over AI Code Publication
The European Commission had initially planned to release the final version of the General-Purpose AI Code in early May 2025, but amid intensifying criticism and lobbying, that deadline has slipped. A spokesperson confirmed that both the guidelines and the final Code are now expected to be published by May or June 2025.
As the consultation process draws to a close, stakeholders await the Commission’s final decision, which will have far-reaching implications for how powerful AI systems are governed and held accountable in Europe.
The US Mission’s formal objection and Big Tech’s influence over the drafting process highlight the difficulty of creating fair and effective AI regulation. The Commission faces mounting pressure to ensure that the final Code balances the protection of public interests with the need to foster innovation. With publication expected in the coming months, all eyes will be on how the Commission responds to these concerns and to the broader debate over AI governance.