
[Definitive Guide – August Edition] The EU AI Act and “GPAI Obligations”: What Has Begun and What Must Be Done to Comply (As of August 2025)

Key Points First (Inverted Pyramid Style)

  • Effective from August 2: The EU AI Act’s obligations for General-Purpose AI (GPAI) model providers are now in effect. New models must comply immediately, while existing models have until August 2, 2027. Enforcement by the AI Office begins August 2, 2026.
  • Basic Obligations (All GPAI): Include transparency (technical documentation), copyright respect (e.g., honoring TDM opt-outs), and publication of a “sufficiently detailed summary” of training data. A GPAI Code of Practice was released in July to support implementation.
  • Additional Obligations (For “Systemic Risk” GPAI): Include model evaluations, adversarial testing, risk mitigation planning, serious incident reporting, cybersecurity strengthening, and notification upon training reaching 10^25 FLOPs.
  • Supervision: EU Member States had until August 2 to designate national supervisory authorities. Enforcement will be handled by both national authorities and the AI Office in coordination.
  • Penalties: Up to 7% of global turnover / €35M for prohibited AI practices, and 3% / €15M for many GPAI-related obligations. Specific provisions for GPAI providers (Art. 101) are now defined.

1|What Started in August: A Snapshot of the GPAI Obligations

As of August 2, 2025, the EU AI Act’s obligations for GPAI model providers have entered the application phase. Any new GPAI models entering the EU market must immediately comply with requirements for technical documentation, copyright practices, and training data summaries. Existing models have a grace period until August 2, 2027. Full enforcement powers (e.g., corrective orders, fines) will be granted to the AI Office starting August 2, 2026.

To support this, the European Commission released the “GPAI Code of Practice”, which outlines best practices on transparency, copyright, and safety/security. Providers who voluntarily sign the Code benefit from reduced administrative burden and legal clarity. Non-signers can still comply by demonstrating alternative appropriate measures.

Editorial Note: Prohibited AI practices (e.g., social scoring) have already been banned since February 2025. The GPAI-specific rules now enter the execution phase.


2|Who Is a “GPAI Provider”? Clarifying the Definition

GPAI (General-Purpose AI) refers to foundation models designed for broad use cases. A provider is any entity that places or makes such a model available in the EU market. Downstream companies that fine-tune such models may also be considered providers if certain conditions are met.

Whether a model is deemed “systemic risk” is determined by training compute thresholds (FLOPs) or by designation by the Commission. The baseline threshold is 10^25 FLOPs, and providers must notify authorities within two weeks of reaching it, or of it becoming foreseeable that they will.
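As a rough illustration, training compute is often estimated with the common "6 × parameters × tokens" heuristic. This is a back-of-envelope assumption, not official guidance, and the example model sizes below are hypothetical; a minimal sketch of a threshold check:

```python
# Sketch: estimate training compute and compare against the Act's
# 10^25 FLOPs systemic-risk baseline. The 6*N*D approximation and the
# example figures are illustrative assumptions, not official guidance.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Common back-of-envelope estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens

def must_notify(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the (projected) training run meets or exceeds the threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 700B-parameter model trained on 15T tokens.
flops = estimated_training_flops(7e11, 1.5e13)
print(f"{flops:.2e} FLOPs -> notify: {must_notify(7e11, 1.5e13)}")
```

Because the notification duty also covers *projected* compute, running such a check at training-plan time, not only after the fact, is the safer workflow.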

Practical Tip:

  • Whether you qualify as a provider depends on contractual terms, distribution methods, and degree of modification. Downstream actors who exceed aggregated compute thresholds may also be classified as providers.

3|Basic Obligations (All GPAI): The “Big 3” You Must Start Now

3-1. Transparency (Technical Documentation)

Prepare Model Documentation detailing architecture, training methods, evaluations, limitations, etc. A template form is included in the Code of Practice.

3-2. Copyright Respect

Ensure compliance with EU TDM reservation rules and clarify your lawful data sourcing policy. Implement and document procedures for recognizing and excluding opted-out content.

3-3. Training Data Summary

Publish a “sufficiently detailed” summary of the training content. Finalized templates (July 2025) explain how to balance transparency with trade secret protection.

Implementation Hint:
Signing the Code of Practice provides a reference for acceptable disclosure levels, making it easier to align with regulators and reduce audit risk.


4|Additional Obligations for “Systemic Risk” GPAI Models

If a model is considered “systemic risk”, the following are also required:

  • Model Evaluation: Measure capabilities and limitations using emerging standardized protocols.
  • Adversarial Testing: Assess model resilience against misuse scenarios.
  • Risk Mitigation Plan: Document and update strategies for addressing systemic risks.
  • Serious Incident Reporting: Report issues promptly to the AI Office and national authority.
  • Cybersecurity: Strengthen model and inference infrastructure defenses.
  • Notification Obligation: Notify within 2 weeks if training reaches or is projected to reach 10^25 FLOPs.

Detailed examples are included in the July implementation guidelines and the Security chapter of the Code of Practice. Enforcement begins August 2026, including data requests, withdrawal orders, and fines.
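For tracking readiness against the six obligations above, a simple internal checklist can help; a minimal sketch (the status values are illustrative assumptions, not a compliance determination):

```python
# Sketch: track the six systemic-risk obligations listed above as a
# simple checklist and surface what is still open.
OBLIGATIONS = [
    "Model evaluation",
    "Adversarial testing",
    "Risk mitigation plan",
    "Serious incident reporting process",
    "Cybersecurity hardening",
    "FLOPs threshold notification workflow",
]

def open_items(status: dict) -> list:
    """Return the obligations not yet marked complete."""
    return [o for o in OBLIGATIONS if not status.get(o, False)]

# Hypothetical status snapshot.
status = {"Model evaluation": True, "Adversarial testing": True}
print("Open items:", open_items(status))
```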


5|“Day 30 Plan” for Providers: What to Do This Month

  1. Internal Structure: Define roles and responsibilities (legal, dev, security, comms). Appoint a compliance lead and deputy.
  2. Draft Technical Docs: Use the Code’s Model Documentation Form and identify missing data.
  3. Copyright Policy: Operationalize a source ledger, TDM opt-out detection flow, and exclusion rules.
  4. Training Data Summary: Prepare a public version using the official template and define masking policies for sensitive info.
  5. Risk Readiness (If Likely Applicable): Establish FLOPs audit trail, early warning system, and notification workflow.
  6. External Disclosure: Launch a Transparency Page with update dates, model names, and applicable articles.
  7. (Optional) Sign the Code: Consider a three-step rollout: training → signing → implementation for audit relief.

Template for Audit Metadata (Attach to AI Outputs):
Model / Version / Generation Time (UTC) / Applicable Article (Art. 53/55) / Code of Practice: Signed/Unsigned / Contact
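The template above can be generated with a small helper so every AI output carries the same fields; a sketch, where the example model name and contact address are placeholders:

```python
# Sketch: build the audit-metadata line from the template above.
# Field names follow the article's template; values are placeholders.
from datetime import datetime, timezone

def audit_metadata(model: str, version: str, article: str,
                   cop_signed: bool, contact: str) -> str:
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    signed = "Signed" if cop_signed else "Unsigned"
    return (f"Model: {model} / Version: {version} / Generated (UTC): {ts} / "
            f"Applicable Article: {article} / Code of Practice: {signed} / "
            f"Contact: {contact}")

print(audit_metadata("GPAI-X", "v5.2", "Art. 53", True, "compliance@example.com"))
```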


6|What Should Deployers Do? Minimum Internal Policies

While GPAI rules mostly target providers, deployers (user companies) can benefit in EU business and procurement by doing the following:

  • Standardize footnotes for AI outputs: source, confidence level, model name.
  • Implement usage restriction policies: filters and escalation paths for prohibited AI use.
  • Ensure traceable logs: Record Model → Path (on-device/in-house cloud/external) → Time → Person Responsible.
  • Update contracts: Require providers to share training data summary URLs, copyright clauses, and incident reporting duties.
  • Promote AI literacy: Educate internal users based on the regulation’s timeline.
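The traceable-log bullet above can be implemented as a structured record covering the Model → Path → Time → Person chain; a minimal sketch, where the field names and example values are assumptions:

```python
# Sketch: a structured usage-log record covering the
# Model -> Path -> Time -> Person chain described above.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    model: str
    path: str          # "on-device" | "in-house-cloud" | "external"
    timestamp_utc: str
    responsible: str   # person accountable for this use

def log_usage(model: str, path: str, responsible: str) -> str:
    record = AIUsageRecord(
        model=model,
        path=path,
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        responsible=responsible,
    )
    return json.dumps(asdict(record))  # append this line to the audit log

print(log_usage("GPAI-X v5.2", "external", "j.doe"))
```

Emitting one JSON line per call keeps the log greppable and easy to hand over during an audit.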

7|Supervision Framework & What Happens This Month

August 2 was also the deadline for Member States to designate national authorities (market surveillance and notifying authorities). Going forward, national authorities will handle first-line supervision, with the AI Office leading cross-border coordination from August 2026. A central directory of designated agencies is now being published.

Small Caution:
Even though enforcement starts in 2026, the obligations began in August 2025. Failing to comply now may result in significant retroactive correction costs later.


8|Penalties: It’s Not Just the Fines—Which Article Matters Too

  • Prohibited AI (Art. 5): Up to €35M or 7% of turnover
  • General obligations (multiple articles): Up to €15M or 3%
  • False/missing information: Up to €7.5M or 1%
  • GPAI-specific provision (Art. 101): Up to €15M or 3% for violations or non-compliance

Practical Insight:
The first step to avoiding fines is a clear compliance registry and assigning departmental responsibility.


9|Samples: How to Write Your Public “Training Data Summary” and “Model Docs”

9-1. Training Data Summary (Sample)

  • Purpose: General-purpose language understanding/generation
  • Key Domains: News articles, encyclopedias, patent abstracts, open-source code, web forums
  • Sources: Public datasets (with versioning), web crawls (excluding TDM-reserved data), licensed sources
  • Exclusions: Opted-out content, minor-related data, sensitive categories
  • Quality Controls: De-duplication, toxicity/bias filters, language balance (with ratios)

Align with the official template; choose a level of granularity that delivers transparency without revealing trade secrets.
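One way to keep the public summary consistent with internal records is to maintain it as structured data and render the published text from it. A sketch, where the schema mirrors the sample above but is an assumption, not the official template:

```python
# Sketch: hold the training-data summary as structured data and render
# the public text from it, so internal records and the published summary
# cannot drift apart. The schema is illustrative, not the official template.
summary = {
    "Purpose": "General-purpose language understanding/generation",
    "Key Domains": ["News articles", "Encyclopedias", "Patent abstracts",
                    "Open-source code", "Web forums"],
    "Sources": ["Public datasets (versioned)",
                "Web crawls (TDM-reserved data excluded)",
                "Licensed sources"],
    "Exclusions": ["Opted-out content", "Minor-related data",
                   "Sensitive categories"],
    "Quality Controls": ["De-duplication", "Toxicity/bias filters",
                         "Language balance (with ratios)"],
}

def render_summary(data: dict) -> str:
    """Render the structured summary as a public bullet list."""
    lines = []
    for field, value in data.items():
        if isinstance(value, list):
            value = ", ".join(value)
        lines.append(f"- {field}: {value}")
    return "\n".join(lines)

print(render_summary(summary))
```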

9-2. Model Documentation (Excerpt)

  • Model / Version: GPAI-X v5.2
  • Training Compute: x.xx × 10^24 FLOPs (auditable)
  • Safety Evaluations: Red-teaming results (by category), detection/mitigation methods
  • Known Limitations: Weakness in long-form math reasoning; unsuitable for medical advice
  • Usage Warning: Only general info in high-risk domains; final decisions must involve humans

10|FAQ: Common Pitfalls and How to Avoid Them

Q1. Can we wait until the details (standards) are finalized?
A. No. The Code of Practice and Guidelines reflect the current expectations. Signing and gradual implementation is the low-risk path.

Q2. If we don’t reach 10^25 FLOPs, can we ignore the rules?
A. No. The basic obligations (transparency, copyright, summaries) apply to all GPAI. Only the additional obligations are threshold-based.

Q3. Can existing models wait until 2027?
A. Not recommended. Early publication of summaries and policies builds trust and improves procurement competitiveness.

Q4. Is signing the Code mandatory?
A. Optional, but it serves as a practical benchmark and reduces regulatory burden.


11|Who Benefits? Audience Breakdown and Key Takeaways

  • Executives / Business Leaders: Signing the Code and publishing a transparency page boosts scores in public, finance, and healthcare bids. Prepare KPI tracking (submission rates, audit findings) before 2026.
  • Legal / Compliance: Maintain a registry aligned with Art. 53/55/101, document TDM opt-out responses, and implement SOPs for reporting timelines.
  • Engineering / MLOps: Automate FLOPs auditing, red-team testing, and Model Doc generation in CI/CD to reduce overhead.
  • Comms / Sales: Clear explanations of training data summaries and usage guidelines help gain trust in client acquisition.
  • Public Sector / Education: Readily available, explainable public summaries make accountability to citizens and learners easier.

12|Editorial Summary: Minimum August Compliance = “Docs, Summary, Copyright”

  • Act Now: Ensure the three key items are in place this month: model documentation, data summary, and copyright policy. Signing the Code helps align with regulatory expectations.
  • Prepare for 2026: The AI Office’s enforcement begins August 2026. Focus on submission reproducibility and FLOPs auditing now.
  • 3-Year View: August 2027 is the hard deadline for existing models. Early transparency brings credibility and procurement advantages.

Sources (Primary / High-Reliability)

  • Effective Dates: GPAI obligations from Aug 2, 2025, AI Office enforcement Aug 2, 2026, legacy models must comply by Aug 2, 2027.
  • Code of Practice: Released in July, covering transparency, copyright, security. Signing reduces regulatory load.
  • Additional Obligations: Evaluation, adversarial testing, notifications, triggered by 10^25 FLOPs.
  • National Authorities: Deadline to designate was Aug 2, with oversight shared between national agencies and the AI Office.
  • Fines: 7%/€35M (prohibited AI), 3%/€15M (general GPAI), 1%/€7.5M (false reporting), Art.101-specific GPAI clauses.

By greeden
