Understanding the New EU Guidelines on General-Purpose AI Models: What Businesses Need to Know

As artificial intelligence rapidly advances, regulation inevitably follows. On July 18, 2025, the European Commission released substantial new Guidelines clarifying the obligations of providers of general-purpose AI models under the EU AI Act (Regulation 2024/1689). This comprehensive document affects not only tech firms but also the many downstream companies integrating AI systems into their products and services.

Here’s a detailed breakdown of the key points from the Commission’s recent guidelines and how they impact businesses and providers across the AI value chain.

What Exactly is a General-Purpose AI Model?

Under the EU AI Act, a general-purpose AI model is defined as one capable of performing a wide range of distinct tasks, displaying significant generality, and trained with large datasets, often using self-supervised methods at scale.

The Commission’s new guidelines provide a practical test to identify such models, specifying that an AI model typically qualifies as a general-purpose model if:

  • Its training compute exceeds 10²³ FLOP (floating-point operations), approximately equivalent to a model with one billion parameters trained on extensive datasets.

  • It can competently generate outputs in the form of:

    • Text (including code)

    • Audio (including speech)

    • Images from text (text-to-image)

    • Video from text (text-to-video)

If these conditions are met, the model is presumed to be a general-purpose AI model, and its provider must understand and comply with the corresponding obligations.

Recognizing AI Models with “Systemic Risk”

A subset of these general-purpose models carries what the Commission terms "systemic risk", meaning they potentially have widespread impacts on markets, public safety, fundamental rights, or democratic processes.

An AI model automatically qualifies as posing systemic risk if the cumulative compute used during training exceeds 10²⁵ FLOP. Such models are presumed to possess "high-impact capabilities", meaning their powers match or surpass today's most advanced systems.
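As a rough illustration, the two compute thresholds above can be expressed as a simple first-pass check. This is a sketch only: the function and labels below are ours, not the Commission's, and actual classification depends on all the criteria in the Guidelines, not on training compute alone.

```python
# Illustrative sketch of the Guidelines' indicative compute thresholds.
# Names are hypothetical; legal classification also depends on the
# model's generality, output modalities, and the other criteria.

GPAI_THRESHOLD_FLOP = 1e23           # indicative general-purpose threshold
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption of systemic risk

def classify(training_compute_flop: float) -> str:
    """First-pass classification by cumulative training compute."""
    if training_compute_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "general-purpose, presumed systemic risk"
    if training_compute_flop > GPAI_THRESHOLD_FLOP:
        return "general-purpose (if output conditions are also met)"
    return "below indicative general-purpose threshold"

print(classify(5e25))  # general-purpose, presumed systemic risk
print(classify(3e23))  # general-purpose (if output conditions are also met)
print(classify(1e22))  # below indicative general-purpose threshold
```

Note that the 10²⁵ FLOP presumption is rebuttable: as described below, providers can contest the systemic-risk designation, but they bear the burden of proof.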

The guidelines provide clarity by explaining obligations for providers in detail, which include:

  • Proactive notification: Providers must notify the Commission within two weeks of meeting (or expecting to meet) the systemic-risk compute threshold.

  • Continuous risk assessment and mitigation: Providers must adopt governance structures, rigorous cybersecurity safeguards, and incident-reporting mechanisms. Specifically, providers must ensure robust oversight through the entire lifecycle of their model, from initial large-scale pre-training runs through every subsequent modification or update.

Providers can challenge their classification as “systemic-risk,” but they carry the burden of proof. The guidelines outline specific evidentiary standards for successfully contesting this designation.

Who Exactly Counts as the AI Model’s Provider?

Determining who is legally accountable as the "provider" of an AI model is critical. According to the new guidelines:

  • Providers include those who develop or commission AI models for market release under their brand.

  • Downstream modifiers who significantly alter a model—using compute resources that exceed a third of the original training compute—also inherit provider responsibilities.

  • Providers established outside the EU must appoint an authorized representative within the EU before placing models onto the market.
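The one-third rule for downstream modifiers can likewise be sketched as simple arithmetic (a hypothetical helper of our own, assuming compute figures are known; the Guidelines govern the actual legal test):

```python
def modifier_becomes_provider(modification_compute_flop: float,
                              original_training_compute_flop: float) -> bool:
    # Indicative rule from the Guidelines: a downstream modifier takes on
    # provider obligations when the compute used for the modification
    # exceeds one third of the original model's training compute.
    return modification_compute_flop > original_training_compute_flop / 3

# A fine-tune using 4e23 FLOP on a model originally trained with 9e23 FLOP
# crosses the one-third mark (3e23 FLOP); a 1e23 FLOP fine-tune does not.
print(modifier_becomes_provider(4e23, 9e23))  # True
print(modifier_becomes_provider(1e23, 9e23))  # False
```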

Obligations Under the EU AI Act

All providers of general-purpose AI models (with or without systemic risk) must comply with strict transparency and accountability obligations:

  1. Detailed Technical Documentation: Providers must produce, maintain, and share documentation outlining the AI model’s technical characteristics, training/testing methodologies, and evaluation results.

  2. Copyright Compliance: Providers must have clear policies ensuring their training data respects copyright laws, particularly when using data scraped from online or proprietary sources.

  3. Public Summary of Training Data: Providers must publish a summary detailing the types of content used in training, allowing greater scrutiny and transparency.

  4. Continuous Updates: Documentation, policies, and summaries must be continuously updated across the model’s lifecycle, especially after significant modifications.

Special Provisions for Open-Source AI

Recognizing the societal and innovative benefits of open-source AI models, the EU provides certain exemptions. Models released under genuine free and open-source licenses are exempt from certain transparency obligations, such as the detailed technical documentation requirement, if they:

  • Allow unrestricted, cost-free use, modification, and redistribution without monetization.

  • Publicly disclose model parameters, weights, architecture, and clear usage instructions.

  • Avoid indirect monetization (e.g., commercial support packages or ad-funded distribution platforms).

However, exemptions never apply to systemic-risk models, regardless of their open-source status.

Enforcement Timeline: How Soon Will This Affect Your Business?

Compliance obligations officially commence on August 2, 2025, with the European Artificial Intelligence Office (AI Office) overseeing enforcement. However, significant fines (up to 3% of global annual turnover or EUR 15 million, whichever is higher) will only apply from August 2, 2026, giving companies a transition period.

For existing models placed onto the market before August 2, 2025, companies have until August 2, 2027 to achieve full compliance, especially concerning documentation and data transparency.

The guidelines notably emphasize a collaborative enforcement approach by the AI Office, including early-stage engagement with providers and explicit encouragement of participation in approved Codes of Practice to ease compliance demonstration.

Strategic Recommendations for Businesses

  • Assess immediately if your AI models fall under the general-purpose or systemic-risk categories, based on the provided criteria.

  • Review and update your documentation and copyright compliance.

  • Consider proactively engaging the AI Office to clarify compliance strategies.

  • Evaluate benefits of joining an approved Code of Practice early to streamline compliance and limit liability.

The EU AI Act and its clarifying guidelines represent a landmark step toward comprehensive AI governance. Far from mere bureaucratic hurdles, these rules establish clear expectations around accountability, transparency, and public trust in AI.

Businesses that quickly adapt to this environment can secure a significant competitive advantage, demonstrating leadership in compliance and responsible AI deployment.

How Arifoglu & Partners Can Support Your Compliance Journey

Navigating this extensive new regulatory landscape requires sophisticated legal expertise. At Arifoglu & Partners Law Firm, our technology and regulatory specialists offer strategic guidance to ensure that your AI models, policies, and documentation stand up to regulatory scrutiny and position your business effectively for sustained compliance.

Contact us today to schedule your compliance consultation.
