The General Purpose AI Code of Practice: what is it, and what does it mean for ASEAN firms?

The General Purpose AI Code of Practice, a long-anticipated voluntary framework, was formally published in July 2025. It is organized into three distinct chapters: Transparency, Copyright, and Safety and Security. Providers have the flexibility to adhere voluntarily to one, several, or all of these chapters.

As defined by the European Union (EU), a General Purpose AI (GPAI) model is a model trained at scale on large datasets using self-supervision. These models are characterized by their capability to perform a wide variety of tasks, regardless of how they are placed on the market, and by their ability to be integrated into a variety of downstream applications and systems. The only specified exclusions are models used exclusively for research, development, and prototyping activities before they are placed on the market.

What is the General Purpose AI Code of Practice? 

The General Purpose AI Code of Practice (COP) is an instrument designed to help industry comply with the provisions of the EU AI Act pertaining to General Purpose AI (GPAI). Its primary value proposition is reduced administrative burden and greater legal certainty, simplifying the compliance process for firms. At the time of writing, 26 companies have adopted the GPAI Code of Practice in full.

Companies that formally adhere to the Code are monitored against compliance with the Code's established measures, a process that is less demanding than the rigorous reporting and demonstration otherwise required under Chapter V of the AI Act. Providers that do not adopt the COP may instead face onerous requirements to explain their methodologies in detail, requests for additional information, and independent model evaluations.

The technical criteria for GPAI classification center on the model's training compute: under the AI Act, a model whose cumulative training compute exceeds 10^25 floating-point operations (FLOP) is presumed to pose systemic risk. The accompanying Guidelines document elaborates on this in its annexes, which set out the estimation parameters (Annex A.1), the methodology for computing training compute (Annex A.2), and illustrative examples of the criteria (Annex A.3).
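
To make the threshold concrete: the 6 × N × D approximation (FLOP ≈ 6 × parameters × training tokens) is a widely used heuristic for dense transformer training compute, not a formula taken from the Code or the Guidelines. A minimal sketch under that assumption, with hypothetical figures:

```python
# Illustrative only: estimate training compute with the common 6 * N * D
# heuristic for dense transformers (FLOP ~= 6 x parameters x training tokens),
# then compare against the AI Act's 10^25 FLOP systemic-risk presumption.
# The model figures below are hypothetical, not taken from any real model.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimate_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flop = estimate_training_flop(70e9, 15e12)
print(f"Estimated training compute: {flop:.2e} FLOP")
print("Presumed systemic risk" if flop > SYSTEMIC_RISK_THRESHOLD_FLOP
      else "Below the 10^25 FLOP presumption threshold")
```

In practice, providers would follow the estimation approaches set out in Annexes A.1 to A.3 rather than this heuristic.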

In the event of non-compliance, the AI Act gives the Commission a range of enforcement powers, including the authority to request information, conduct model evaluations, mandate remedial measures from providers (such as product recalls), and impose administrative fines of up to 3% of global annual turnover or EUR 15 million, whichever is greater.
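
As a quick illustration of how the "whichever is greater" cap works, with hypothetical turnover figures:

```python
# Illustrative fine cap: the greater of 3% of global annual turnover or EUR 15 million.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(0.03 * global_annual_turnover_eur, 15_000_000)

# Hypothetical provider with EUR 2 billion turnover -> cap of EUR 60 million.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 60,000,000
# Hypothetical provider with EUR 100 million turnover -> the EUR 15 million floor applies.
print(f"{max_fine_eur(100_000_000):,.0f}")    # 15,000,000
```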

The three chapters of the General Purpose AI Code of Practice are as follows.

Chapter 1: Transparency

Signatories are required to document and keep up to date all the information referred to in the Model Documentation Form. They are encouraged to consider whether the documented information can be disclosed, in whole or in part, to the public to promote transparency.

Chapter 2: Copyright

This chapter pertains to data mining and the training of models, with an emphasis on ensuring that web-crawling does not infringe the rights of the creators of online content. Chapter 2 commits signatories to developing appropriate and proportionate technical safeguards to protect copyright. The measure applies whether the signatory integrates the model into its own AI systems or provides it to another entity.
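
One common machine-readable signal that crawlers are expected to honour is robots.txt. As a minimal, illustrative sketch, assuming the safeguard takes the form of checking robots.txt before fetching a page (this is one possible measure, not the Code's prescribed mechanism; the crawler name and URL are hypothetical):

```python
# Illustrative only: check robots.txt before crawling a URL for training data,
# one simple form of an "appropriate and proportionate technical safeguard".
from urllib import robotparser
from urllib.parse import urlparse

def may_crawl(url: str, user_agent: str = "example-training-crawler") -> bool:
    """Return True only if the site's robots.txt permits fetching this URL."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        # If robots.txt cannot be retrieved, err on the side of not crawling.
        return False
    return rp.can_fetch(user_agent, url)

if may_crawl("https://example.com/articles/some-page"):
    print("Permitted by robots.txt; proceed to fetch.")
else:
    print("Disallowed or unreachable; skip this URL.")
```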

Chapter 3: Safety and Security

This chapter outlines concrete practices for managing systemic risk and serves as a blueprint for compliance with the AI Act obligations that apply to providers of GPAI models with systemic risk. It covers systemic risk identification, risk assessment, mitigation, and post-market monitoring.

How this differs from ASEAN’s AI Guidelines

AI services offered in the European Union are subject to the EU's AI Act. Currently, the COP is not expected to affect, or even be relevant to, ASEAN firms developing AI models: regional development is focused on regional languages and geographies, and the resulting models are not expected to meet the "systemic risk" criteria of the EU's GPAI regime. However, the large Southeast Asian diaspora within the EU represents a potentially lucrative market, making it prudent for these firms to be fully aware of the EU's regulatory environment should they consider future expansion.

In contrast, ASEAN does not have a formal GPAI classification. The ASEAN Guide on AI Governance and Ethics provides general definitions of AI, deep learning, and machine learning, along with recommendations on areas including, but not limited to, Accountability, Copyright, and Security. While there are overlaps in the recommended transparency measures, the ASEAN framework lacks the robust, structured reporting requirements mandated by its EU counterpart.

The fundamental disparity between the two blocs lies in their regulatory philosophies. The European AI Act is a binding regulation that establishes a uniform framework for enforcement. The ASEAN Guide, by comparison, is a non-binding, broader, and more principles-based guideline. Its standards are more flexible and adaptable, as demonstrated by the additional guideline on Generative AI. As a result, the ASEAN guides suggest a diverse range of criteria and avenues for companies and governments to pursue, rather than imposing a single, standardized framework as in the EU.