Commission proposes to delay implementation of AI Act, seeks to streamline the EU’s digital legislation with “Digital Omnibus”. 

Written by Alyssa Almonte, Joses Wong

Following pushback from big tech companies, the European Commission has proposed delaying the implementation of key provisions of the Artificial Intelligence (AI) Act until December 2027 (instead of August 2026). The delay concerns provisions relating to “high-risk” AI systems, including AI used to review job applications, evaluate creditworthiness, and assess school exams, as well as AI systems used for biometric categorisation and emotion recognition.

The AI Act is one of the first legal frameworks on AI to be implemented anywhere in the world, and came into force in August 2025 – read more here.

The Commission says the delay is intended to give EU Member States and companies sufficient time to adjust to the rules and to put in place the support structures necessary for compliance with the AI Act. The delay is part of the Commission’s “Digital Omnibus” package to streamline the EU’s digital legislation.

Under the Digital Omnibus, certain aspects of the GDPR would also be amended, potentially narrowing its scope. The Commission intends to clarify the definition of personal data in Art. 4(1) GDPR by establishing a subjective or relative interpretation of the term, i.e., whether data qualifies as personal data or as anonymous/non-personal data would be assessed from the perspective of the relevant data controller. In addition, the draft introduces a new exemption under Art. 9(2) GDPR to allow the incidental processing of special category data in the development and use of AI systems and models, subject to certain conditions.

These proposed changes recognise that controllers may process personal data for the purposes of AI training and operation on the basis of legitimate interests, meaning that tech companies such as Google, Meta, and OpenAI would be allowed to use Europeans’ personal data to train their AI models. However, the proposal also requires controllers to take protective measures, giving non-binding examples such as data minimisation, data security measures, and granting data subjects the right to object.

Whistleblower Tool

Separately, the EU AI Office has also launched the AI Act Whistleblower Tool to allow individuals to anonymously report potential breaches. The tool will provide a secure and confidential channel for individuals to report suspected breaches of the AI Act directly to the EU AI Office, the centre of AI expertise within the Commission.

Stakeholder Consultations

The Commission has also launched two stakeholder consultations under the EU AI Act. The first, which closes on 9 January 2026, relates to the copyright-related obligations of General Purpose AI (“GPAI”) providers under the AI Act and the GPAI Code of Practice. The second, closing on 6 January 2026, seeks feedback on a draft implementing act to establish AI regulatory sandboxes under the AI Act. The links to the first and second consultations are here and here.

While most ASEAN companies may not be directly impacted by the AI Act, companies that build or deploy AI models, particularly those active in the EU, should take note of these changes.