A report by Corporate Europe Observatory (CEO) highlights the heavy involvement of big tech companies in drafting EU AI standards. According to the report, over half (55%) of the 143 members of the AI joint technical committee (JTC21) represent corporations or consultancies. The committee was established by the European standardisation bodies CEN and CENELEC.
Among the corporate representatives, nearly 25% come from US-based firms. These include four members each from Microsoft and IBM, at least three from Google, and two from Amazon. Meanwhile, civil society groups make up just 9% of the committee, raising concerns about inclusivity and balanced representation in shaping AI standards.
The AI Act, the EU's pioneering risk-based regulation of artificial intelligence, was approved in August and will come into effect gradually. To support the act, the European Commission tasked CEN-CENELEC and ETSI in May 2023 with developing harmonised standards. These standards will apply across industries, ensuring compliance with safety requirements for products ranging from toys to medical devices.
Concerns Over Private Influence on Public Policy
The CEO report criticises the European Commission for delegating crucial AI policy decisions to private organisations. “For the first time, standard setting is being used to address fundamental rights, fairness, trustworthiness, and bias,” said Bram Vranken, a researcher and campaigner at CEO.
Standard-setting bodies often prioritise processes over specific outcomes, which can complicate enforcement, said JTC21 Chair Sebastian Hallensleben. As a result, a CE mark earned by following harmonised standards may not guarantee that an AI system avoids bias or discrimination.
CEO also examined national standard-setting bodies in France, the UK, and the Netherlands, finding that corporate interests dominate there as well, accounting for 56%, 50%, and 58% of members in those countries, respectively.
Urgency to Accelerate AI Standards
The European Commission defended its approach, stating that CEN-CENELEC standards would undergo an assessment to ensure alignment with the AI Act. These standards will only be cited in the Official Journal if they adequately address the requirements for high-risk AI systems. The Commission also pointed to safeguards, such as the ability of Member States and the European Parliament to object to proposed standards.
However, concerns remain over the slow pace of standardisation. A senior official from the Dutch privacy watchdog Autoriteit Persoonsgegevens (AP), which is set to become the country's AI regulator, warned that the process must accelerate. “Standardisation processes typically take years. We need to move faster,” the official stated.
Jan Ellsberger, chair of ETSI, echoed this sentiment, noting that adopting standards could take months or years. “Standardisation is voluntary for the industry. Greater commitment from companies can speed up the process,” he told Euronews in August.
The report underscores the tension between corporate influence and the need for inclusive, balanced, and timely AI standardisation to ensure fairness and accountability in the rapidly evolving field of artificial intelligence.