Statement · 22 May 2024 · Directorate-General for Competition, Directorate-General for Communications Networks, Content and Technology

High-Level Group for the Digital Markets Act Public Statement on Artificial Intelligence

Members of the DMA High-level Group

The pace of development and adoption of Artificial Intelligence (“AI”) systems, in particular since the market releases of 2022, has demonstrated the potential and impact of AI technology, both as a catalyst for innovation and growth and as a challenge to a safe, fair and contestable digital environment.

In view of the potential risks, as well as rapid evolution and deployment of AI, the question of AI regulation has changed significantly both in terms of objectives and priority, becoming a central policy issue in the EU and globally.

The AI Act approved by the European Parliament and the Council of the European Union is the first-ever comprehensive risk-based legislative framework on AI worldwide that will be applicable to the development, deployment and use of AI systems, including generative AI. It aims to address the risks to health, safety and fundamental rights posed by AI systems, while promoting innovation and the uptake of trustworthy AI. The AI Act will complement existing legislation, as the use of AI technologies is already, today, subject to supervision from various perspectives under the competence of a number of supervisory authorities, including those which compose the High-Level Group for the Digital Markets Act (“HLG”). These supervisory authorities will remain competent to apply existing legislation that is not affected by the AI Act, which provides for full harmonisation across the internal market.

The High-Level Group is composed of the Body of European Regulators for Electronic Communications (BEREC), the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB), the European Competition Network (ECN), the Consumer Protection Cooperation Network (CPC Network), and the European Regulators Group for Audiovisual Media Services (ERGA). The HLG is chaired by the Commission, which participates in its meetings and provides the secretariat in order to facilitate its work.

Consumers face an ambivalent mix of benefits and risks that may result from the use of AI tools, including bias, discrimination, fraud, manipulation and loss of control over personal data. This ambivalence also affects service providers. At the same time, the AI sector offers remarkable opportunities for innovation which are yet to be realised. This creates a window of contestability that allows new players to emerge. It is therefore important to keep up the pace of innovation by making sure that AI markets are not distorted by harmful behaviour from incumbents, and that related markets, such as cloud services and graphics processing unit markets, remain contestable.

Data is fundamental to developing performant AI systems. Personal data must be collected, aggregated, processed and used in ways that are lawful, transparent and fair, including when used for training AI systems, in full compliance with legislation protecting the right to the protection of personal data and other fundamental rights.

The deployment of AI technologies has the potential to intensify societal risks, such as the viral dissemination of deepfakes, where AI is used to create or manipulate synthetic content, and hallucinations, where AI provides false information; these risks are addressed by legislation such as the Digital Services Act and the forthcoming AI Act. The increasing adoption of AI technologies also has the potential to affect the contestability and fairness of digital markets in the context of the Digital Markets Act (“DMA”). For instance, data-driven advantages and network effects leveraged by gatekeepers may be intensified, further entrenching existing gatekeepers or leading to the emergence of new ones.

Gatekeepers of core platform services must comply with the obligations set out in the DMA, which aim to ensure fair and contestable digital markets, including when gatekeepers deploy AI in the context of their designated core platform services. Some gatekeepers have already been designated under the DMA in relation to several of their core platform services, many of which are being integrated with AI systems. To the extent that such AI systems are embedded into designated core platform services, the DMA obligations apply, and compliance has to be assessed taking into account how AI systems determine the behaviours covered by the DMA provisions. Furthermore, the DMA regulates the type of personal and business data available to train or operate gatekeepers’ AI systems by requiring the consent of end users for certain types of data processing and by banning the use of such data in competition with business users.

As members of the HLG, we are aware of the enormous societal benefits that AI has the potential to bring, providing solutions to economic, political and societal challenges. At the same time, the members of the HLG are aware of the risks posed by AI systems, which need to be adequately monitored and mitigated through the appropriate regulatory frameworks, under the supervision of the competent authorities. We recognise the need for coordinated enforcement. Therefore, we consider that a continued cross-regulatory exchange of views and experiences, as well as the exploration of the numerous challenges present across various regulatory frameworks, provides a firm foundation for developing a common understanding and coherent approaches to address them within the remits of the relevant legal frameworks.

The Commission and the bodies and networks composing the High-Level Group will continue working together to ensure that AI development in the EU happens in line with the law and the objectives of the DMA, so as to preserve contestability and the incentives to innovate in AI. In this work, the HLG will also take into account other applicable regulatory frameworks pertaining to digital services, AI, consumer protection, competition, data protection, and the digital, media or telecoms acquis.

For instance, cloud infrastructure and services, access to data (including training, testing and validation data), standardisation and interoperability are key for AI advancement and merit particular focus. Similarly, the degree of influence exerted by incumbent undertakings through cooperative agreements with innovative start-ups will continue to be assessed. At the same time, risks related to fundamental rights, including data protection and privacy, consumer protection, intellectual property rights, cybersecurity, disinformation on social media, and practices impacting mental health or contravening the safeguarding of minors, will also be among the subjects of relevance for cross-regulatory discussion.

As we take this work forward in the framework of the High-Level Group and in a dedicated sub-group, we will:
• follow developments in this critical area of policy, exploring the interactions between the DMA and other regulatory instruments;
• continue to exchange enforcement experience and regulatory expertise relevant for the implementation and enforcement of the DMA, including with regard to AI;
• develop means to ensure effective cooperation, leading to a consistent regulatory approach across the DMA and other legal instruments.