Current Status: March 2026 — including the Digital Omnibus and Germany’s KI-MIG
Parts of the EU AI Act have been legally binding since February 2, 2025 — and most businesses still don’t know it. On August 2, 2026, a much broader set of obligations kicks in. And the planned deadline extension via the so-called Digital Omnibus? It hasn’t been finalized yet. Companies that do nothing now risk fines in the millions, serious liability exposure, and a competitive disadvantage they could have avoided.
This guide breaks down exactly what the EU AI Act means for your business — with a step-by-step compliance checklist, a tool classification table, and everything SMEs need to know right now.
Does the EU AI Act Apply to Your Business?
Short answer: Yes — if you use or develop AI, Regulation (EU) 2024/1689 applies to you.
The longer answer depends on your role. The EU AI Act draws a key distinction between two types of actors:
Providers vs. Deployers — The Critical Difference
Providers are companies that develop and place AI systems on the market. They face the strictest obligations: technical documentation, conformity assessments, CE marking.
Deployers are companies that use AI systems within their own organization or for customers — the typical mid-sized business using ChatGPT, an HR tool with AI features, an AI bookkeeping solution, or a chatbot. Deployers have fewer obligations than providers, but they’re far from off the hook.
Most SMEs are deployers — and that’s exactly the perspective this guide focuses on.
Does It Apply to Companies with Fewer Than 50 Employees?
Yes. There’s no blanket exemption for small businesses. The EU AI Act contains only proportional relief on certain documentation requirements for micro-enterprises — but the core obligations like Article 4 (AI literacy), transparency requirements, and the prohibition of specific AI practices apply to everyone.
The Four Risk Categories — Where Does Your Business Stand?
The EU AI Act classifies AI systems into four risk tiers. Which tier applies determines the scope of your obligations.
Tier 1: Prohibited AI Practices (in effect since February 2, 2025)
These systems are completely banned in the EU:
- Social scoring by public authorities: evaluating people based on their social behavior
- Real-time biometric surveillance in public spaces (with narrow exceptions)
- Subliminal manipulation: systems that influence behavior without the person’s awareness
- Emotion recognition in the workplace or educational settings (with safety-related exceptions)
- Predictive policing: AI-based prediction of crimes based on personal characteristics
For most SMEs, these prohibitions are largely irrelevant — except emotion recognition in employee-facing contexts, which has started appearing in some HR tools.
Tier 2: High-Risk AI — Extensive Obligations Starting August 2026
High-risk AI systems require the most comprehensive measures: conformity assessments, technical documentation, registration in the EU database, a risk management system, and more.
The full list is in Annex III of the regulation. The most relevant categories for mid-sized businesses:
- HR decisions: AI used for applicant screening, performance evaluation, promotions, terminations → High-risk
- Credit decisions: AI-driven credit scoring or lending decisions → High-risk
- Access to education: AI selecting candidates for training and educational programs → High-risk
- Critical infrastructure: AI in electricity, water, or transport systems → High-risk
- Biometric identification: facial recognition for access control → High-risk
Tier 3: Limited Risk — Transparency Obligations Starting August 2026
Limited-risk systems must actively inform users that they’re interacting with an AI system (Article 50). This covers:
- Chatbots and virtual assistants in customer service → users must know: this is AI
- AI-generated content (images, videos, text) → must be labeled as AI-generated
- Deepfakes and synthetic media → labeling required
For every company running chatbots or publishing AI-generated content, this becomes mandatory in August 2026.
Tier 4: Minimal Risk — No Specific Action Required
The vast majority of AI applications fall here: AI filters in email tools, spell-check, recommendation engines in e-commerce tools, production optimization with AI, predictive maintenance.
But: Article 4 (AI literacy) still applies to minimal-risk AI — and you should still run an AI inventory regardless.
Tool Classification: Where Do Typical Business Tools Land?
| Tool / Application | Risk Tier | Your Obligations |
|---|---|---|
| ChatGPT / MS Copilot for internal drafts | Minimal | Article 4 training, AI inventory |
| Customer service chatbot | Limited | Transparency requirement (Art. 50) from Aug. 2026 |
| AI-generated marketing copy / images | Limited | Labeling required when published |
| Personio / HiBob with AI applicant ranking | High-risk | Full documentation, conformity assessment |
| AI credit scoring / risk assessment | High-risk | Full documentation, EU registration |
| Inventory optimization with AI | Minimal | Article 4 training, AI inventory |
| Predictive maintenance in manufacturing | Usually minimal | AI inventory, case-by-case review |
| AI-assisted accounting (e.g., DATEV with AI) | Minimal | Article 4 training, AI inventory |
| Emotion recognition in HR contexts | Prohibited | Discontinue immediately |
| AI phone bot for appointment scheduling | Limited | Transparency requirement (Art. 50) from Aug. 2026 |
| Spam filters, spell-check | Minimal | No specific action required |
Key Deadlines 2025–2028 — What Applies When?
What’s Been in Force Since February 2, 2025
- Prohibited AI practices (Article 5) are illegal — violations can be sanctioned with up to €35M or 7% of global annual revenue, whichever is higher
- Article 4 (AI literacy) is active: all companies must ensure employees working with AI have sufficient competence — and document it
What Kicks In on August 2, 2026
- High-risk systems (Annex III): extensive deployer obligations (risk classification, documentation, transparency toward affected individuals, logging)
- Transparency requirements (Article 50): chatbots, AI phone assistants, and AI-generated content must be labeled as such
- Registration obligation for high-risk deployers in the EU database
The Digital Omnibus — Status as of March 2026
On March 13, 2026, the EU Council agreed on its negotiating position for the so-called Digital Omnibus package. Among other things, it proposes pushing back certain high-risk deadlines:
- High-risk under Annex III (HR, credit, education…): likely shifted to December 2027
- High-risk under Annex I (regulated products): likely shifted to August 2028
Critical caveat: This extension only takes effect if the Digital Omnibus package is formally adopted by the EU Parliament and Council before August 2, 2026. That’s not guaranteed. Until it is, the original deadlines remain in force.
Germany’s KI-MIG — The National Implementation Law
On February 11, 2026, Germany’s federal cabinet passed the KI-Marktüberwachungs- und Implementierungsgesetz (KI-MIG) — the national law implementing the EU AI Act in Germany. Key provisions:
- Designates the Bundesnetzagentur (Federal Network Agency) as the primary AI market surveillance authority
- Establishes national enforcement mechanisms and fine structures
- Creates the legal basis for AI regulatory sandboxes (Reallabore) for innovation projects
For businesses operating in Germany: the Bundesnetzagentur is your point of contact — and the authority that will conduct audits.
EU AI Act and GDPR — Your Prior Work Already Counts
If your company is already GDPR-compliant, you have a real head start. The overlap between the two frameworks is substantial:
| GDPR Foundation | EU AI Act Equivalent |
|---|---|
| Record of Processing Activities (RoPA) | Starting point for your AI inventory |
| Data Protection Impact Assessment (DPIA) | Precursor to the Fundamental Rights Impact Assessment (FRIA) for high-risk AI |
| Data Protection Officer (DPO) | Natural fit for AI compliance tasks or AI officer role |
| Data Processing Agreements (DPA) | Contractual basis for AI vendor relationships |
| Transparency obligations toward data subjects | Complemented by AI transparency requirements (Article 50) |
What’s genuinely new: The EU AI Act goes beyond data protection. It safeguards fundamental rights broadly — freedom from discrimination, fair decision-making, human safety. That requires new processes beyond the GDPR framework.
The SME Compliance Checklist — 7 Steps
Step 1: Run an AI Inventory (right now)
Create a complete list of all AI systems used in your company — including embedded AI features inside existing tools.
What your AI inventory should include:
| Field | Example |
|---|---|
| System name / tool | Personio, ChatGPT, website chatbot |
| Vendor | OpenAI, Personio GmbH, etc. |
| Use area | HR, marketing, customer service |
| Affected individuals | Applicants, customers, employees |
| Personal data processing | Yes / No, which data |
| AI feature active? | Yes / No / Unknown |
| Preliminary risk tier | Minimal / Limited / High-risk |
List every tool that uses AI in any way — even if you’re not sure. When in doubt: add it and clarify later.
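The inventory fields above fit in any spreadsheet, but a structured format keeps the register auditable and easy to update. A minimal sketch in Python — the field names mirror the table above, and the example entries and file name are purely illustrative, not anything the regulation prescribes:

```python
import csv
from dataclasses import asdict, dataclass, fields


@dataclass
class AIInventoryEntry:
    """One row of the AI inventory (fields mirror the table above)."""
    system_name: str           # e.g. "Website chatbot"
    vendor: str                # e.g. "OpenAI"
    use_area: str              # e.g. "Customer service"
    affected_individuals: str  # e.g. "Customers"
    personal_data: str         # "Yes (which data)" / "No"
    ai_feature_active: str     # "Yes" / "No" / "Unknown"
    risk_tier: str             # "Minimal" / "Limited" / "High-risk" / "Prohibited"


# Illustrative entries only — your own inventory will differ.
inventory = [
    AIInventoryEntry("ChatGPT", "OpenAI", "Internal drafts",
                     "Employees", "No", "Yes", "Minimal"),
    AIInventoryEntry("Website chatbot", "Example vendor", "Customer service",
                     "Customers", "Yes (chat transcripts)", "Yes", "Limited"),
]


def write_inventory(path: str, entries: list[AIInventoryEntry]) -> None:
    """Export the inventory to CSV for internal review or an audit request."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fl.name for fl in fields(AIInventoryEntry)]
        )
        writer.writeheader()
        writer.writerows(asdict(e) for e in entries)


write_inventory("ai_inventory.csv", inventory)
```

A CSV export like this is exactly the kind of document an auditor asks for first — and it doubles as the starting point for the risk classification in Step 2.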
Step 2: Determine the Risk Tier
Use the tool classification table above as your first reference. For unclear cases, apply this decision tree:
- Is the system on the prohibited list (Article 5)? → Discontinue immediately
- Does the system make decisions about people in sensitive domains (employment, credit, education, critical infrastructure)? → Check for high-risk (Annex III)
- Does the system interact directly with people or generate content? → Check for limited risk (transparency requirement)
- Everything else → Minimal risk
When in doubt: bring in legal counsel or a specialist consultant.
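The decision tree above can be written down as a small triage helper, which makes the order of the checks explicit. This is a sketch only — the function name and the three boolean flags are my own shorthand, and a "check" result is a prompt for proper legal review, not a classification:

```python
def classify_risk_tier(prohibited_practice: bool,
                       sensitive_domain_decisions: bool,
                       interacts_or_generates: bool) -> str:
    """First-pass triage following the decision tree above.

    A screening aid, not a legal determination: 'check' results
    still need a real Annex III / Art. 50 review by counsel.
    """
    # Order matters: prohibition trumps everything else.
    if prohibited_practice:
        return "prohibited (Art. 5): discontinue immediately"
    # Decisions about people in employment, credit, education,
    # or critical infrastructure point toward Annex III.
    if sensitive_domain_decisions:
        return "check high-risk (Annex III)"
    # Direct interaction with people or content generation
    # points toward the Art. 50 transparency duties.
    if interacts_or_generates:
        return "check limited risk (Art. 50 transparency)"
    return "minimal risk"


# AI applicant ranking: decides about people in an employment context
print(classify_risk_tier(False, True, False))
# Customer service chatbot: interacts directly with people
print(classify_risk_tier(False, False, True))
```

Note that a single tool can trigger more than one branch — a chatbot that also screens applicants hits the high-risk check first, which is why the prohibition and high-risk questions come before the transparency question.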
Step 3: Document Article 4 Training (right now)
Article 4 has been in force since February 2025. You must ensure that all employees using AI systems have the necessary competence — and document it.
What “sufficient AI literacy” means by role:
| Role | Minimum Competence |
|---|---|
| Executive / C-Suite | EU AI Act fundamentals, risk awareness, liability implications |
| IT / System Administrators | Technical risk classification, security and privacy considerations |
| HR | Prohibitions on emotion recognition, high-risk classification of HR tools |
| Marketing | Labeling obligations for AI content (Art. 50), deepfake rules |
| All AI tool users | AI basics: what is AI, how it works, what it can’t do |
For each training session, create a brief record: date, participants, topics covered. That’s enough to demonstrate compliance.
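Those three items — date, participants, topics — are easy to capture in an append-only log. A minimal sketch (the file name, names, and topics are placeholders; any format that preserves the three fields works just as well):

```python
import csv
from datetime import date


def record_training(path: str, session_date: date,
                    participants: list[str], topics: list[str]) -> None:
    """Append one Article 4 training session to a CSV log.

    Captures the three items named above: date, participants,
    and topics covered.
    """
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([
            session_date.isoformat(),
            "; ".join(participants),
            "; ".join(topics),
        ])


# Placeholder session data for illustration
record_training(
    "article4_training_log.csv",
    date(2026, 3, 20),
    ["A. Example", "B. Example"],
    ["EU AI Act basics", "Approved tools", "Labeling AI content"],
)
```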
Step 4: Create an Internal AI Policy (by August 2026)
An internal AI policy defines which AI systems are approved, how they may be used, and who’s accountable. It protects you legally and creates clarity across your team.
Minimum content for an SME AI policy:
- Which AI tools are approved (whitelist)?
- What may not be done with AI tools (prohibited uses)?
- Handling of personal data in AI tools
- Labeling requirement for AI-generated content
- Responsibilities and points of contact
- Process for approving new AI tools
Step 5: Document and Assess High-Risk Systems (by August 2026)
If you’re deploying high-risk AI (e.g., AI-driven applicant ranking), as a deployer you must:
- Conduct a Fundamental Rights Impact Assessment (FRIA)
- Ensure human oversight over AI-driven decisions is in place
- Keep logs of AI-assisted decisions (when did which system make which recommendation?)
- Inform affected individuals that a high-risk AI system is being used
- Register the system in the EU AI database (as a deployer)
Step 6: Implement Transparency Requirements (by August 2026)
For chatbots, AI phone assistants, and AI-generated content, Article 50 applies from August 2026:
- Chatbots: Users must be clearly informed at the start of each interaction: “You are communicating with an AI system.”
- AI-generated images, videos, text: When published, these must be labeled as AI-generated
- AI voice systems on phone calls: Callers must be informed they’re speaking with an AI
Implement this now — don’t wait until the deadline is weeks away.
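For a chatbot, the technical change is small: prepend the disclosure to the first reply of each conversation. A minimal sketch, assuming a per-session identifier — the in-memory session store and function name are illustrative, and a real deployment would persist this state per conversation:

```python
AI_DISCLOSURE = "You are communicating with an AI system."

# Illustrative in-memory store; a real chatbot would persist
# this per conversation (database, session cookie, etc.).
_seen_sessions: set[str] = set()


def with_transparency_notice(session_id: str, reply: str) -> str:
    """Prefix the first reply of each session with the Art. 50 notice."""
    if session_id not in _seen_sessions:
        _seen_sessions.add(session_id)
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply


print(with_transparency_notice("abc", "Hello! How can I help?"))
print(with_transparency_notice("abc", "Our opening hours are 9-17."))
```

The same pattern applies to AI phone assistants (play the notice at call start) and published content (attach the label before publishing) — the only design question is where "start of the interaction" sits in your stack.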
Step 7: Set Up Ongoing Monitoring and a Review Process
AI compliance isn’t a one-time project — it’s an ongoing process. Recommended cadence: annual review of your AI inventory and AI policy, quarterly check for new tools entering the organization.
EU AI Act Compliance Checklist for SMEs
7 Steps to Compliance — March 2026
Step 1 · AI Inventory (Do This Now)
- List every AI tool and system in use — including embedded AI features in existing software
- Document vendor, use case, and affected groups of people for each tool
- Verify whether the AI feature is actually active for each tool
- Assign a preliminary risk class: Minimal / Limited / High-Risk / Prohibited
Step 2 · Determine the Risk Class
- Check for prohibited practices (Art. 5): emotion recognition at work, social scoring — stop immediately
- Check for high-risk: does the system make decisions about people in HR, credit, education, or critical infrastructure?
- Limited risk: chatbots, AI voice systems, AI-generated content → transparency obligation (Art. 50)
- When in doubt: consult legal counsel or a specialized AI compliance advisor
Step 3 · Article 4 Training (Required Since Feb. 2025)
- Choose a training format: internal (workshop/presentation) or external (online course)
- Train all employees who use AI systems — role-specific (leadership, IT, HR, marketing, all users)
- Create brief training records: date, participants, content covered
- Schedule regular refreshers (annually recommended)
Step 4 · Create an Internal AI Policy (by Aug. 2026)
- Define an approved tools whitelist
- Clearly specify prohibited AI applications
- Set rules for handling personal data in AI tools
- Include transparency and labeling rules for AI-generated content
- Assign ownership and approval process for new AI tools
Step 5 · Document High-Risk Systems (by Aug. 2026)
- Conduct a Fundamental Rights Impact Assessment (FRIA)
- Ensure and document human oversight of AI decisions
- Set up decision logs (what did the system recommend and when?)
- Inform affected individuals that a high-risk AI system is in use
- Register the high-risk system in the EU AI database as a deployer
Step 6 · Implement Transparency Obligations (by Aug. 2026)
- Chatbots: display 'You are interacting with an AI system' at the start of every conversation
- AI-generated images, videos, and text: label as 'AI-generated' before publishing
- AI phone assistants: inform callers that they are speaking with an AI system
Step 7 · Ongoing Monitoring
- Annual review and update of your AI inventory
- Quarterly check for new AI tools entering the organization
- Keep your AI policy and training cycles up to date
- Monitor EU AI Act, Digital Omnibus, and national implementation law updates
What Happens If You Don’t Comply? (Realistic Assessment)
Fine Structure
The EU AI Act defines three penalty tiers:
| Violation | Maximum Fine (whichever is higher) |
|---|---|
| Prohibited AI practices (Article 5) | €35M or 7% of global annual revenue |
| Other violations (high-risk, transparency) | €15M or 3% of annual revenue |
| False or misleading statements to authorities | €7.5M or 1% of annual revenue |
For SMEs: Article 99(6) provides relief — for small and medium-sized enterprises, each fine is capped at the lower of the two amounts. The Bundesnetzagentur has also indicated it will initially focus on guidance and corrective action — not immediate fines.
What an Audit by the Bundesnetzagentur Looks Like
The Bundesnetzagentur (Germany’s designated AI market surveillance authority under the KI-MIG) can conduct audits both proactively and reactively. In a typical audit, they’ll initially request:
- AI inventory / list of deployed AI systems
- Evidence of Article 4 training
- Internal AI policy
- For high-risk systems: technical documentation and FRIA
Companies that can produce these documents are in a significantly stronger position — regardless of whether everything is implemented perfectly.
Liability Beyond Fines
In addition to regulatory fines, deploying AI creates civil liability exposure. The proposed EU AI Liability Directive was withdrawn by the Commission in early 2025; claims for AI-related harm now run through national tort law and the revised EU Product Liability Directive (2024/2853), which explicitly covers software and AI systems. Particularly relevant: HR decisions (rejecting applicants via AI), credit denials driven by AI, discriminatory AI outputs.
What Does Compliance Actually Cost? (Realistic Estimates)
A common concern for SMEs is that compliance costs will outweigh the benefits. In most cases that fear is unfounded — though the actual effort depends heavily on which AI systems you’re running.
Scenario A: Minimal-risk only (e.g., ChatGPT and an AI accounting tool)
- AI inventory: 4–8 hours internally
- Article 4 training + documentation: 2–4 hours + optional external training (€200–500)
- AI policy setup: 4–8 hours (or template: €50–150)
- Total effort: one to two workdays + optionally €200–700
Scenario B: Customer service chatbot (limited risk)
- Same as Scenario A, plus: implementing the transparency notice in the chatbot (typically 1–2 hours of development)
- Total effort: two to three workdays
Scenario C: AI-driven applicant ranking (high-risk)
- AI inventory, training, AI policy: as above
- Fundamental Rights Impact Assessment: 20–40 hours internally or €3,000–8,000 externally
- Technical documentation (usually coordinated with vendor): 10–20 hours
- EU AI database registration: 2–4 hours
- Total effort: €5,000–15,000 — one-time, amortized over the system’s lifetime
Where Can Your SME Get Support?
You don’t have to figure this out alone. The following resources offer free or subsidized support:
- Bundesnetzagentur – AI Service Desk: Germany’s official first point of contact for EU AI Act questions (bundesnetzagentur.de)
- Mittelstand-Digital Centers: Nationwide network of consulting centers for digitalization and AI in mid-sized companies — free consulting and training (mittelstand-digital.de)
- IHK (Chambers of Commerce): Many local chambers offer free webinars and FAQs on the EU AI Act (ihk.de)
- Regulatory Sandboxes: The KI-MIG creates a legal basis for AI innovation sandboxes where companies can test new AI applications under regulatory supervision — particularly relevant for innovation-driven SMEs
EU AI Act Compliance for Your Business
We guide you from your first AI inventory through to full compliance — pragmatically, without unnecessary overhead, and with a clear focus on what matters for your company.
FAQ — The Most Important EU AI Act Questions for SMEs
What is the EU AI Act, and does it apply to my company?
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI law. It applies to all companies that develop or deploy AI systems in the EU — including businesses using tools like ChatGPT, chatbots, HR software with AI features, or similar applications. As a deployer, you have concrete obligations: from running an AI inventory and documenting employee training to implementing transparency requirements.
What has been in force since February 2, 2025?
Two things went live on February 2, 2025: First, prohibited AI practices (Article 5) are now illegal — including emotion recognition in the workplace and social scoring. Second, Article 4 is active: every company using AI must ensure that relevant employees have sufficient AI literacy and document it. This is being widely ignored.
Will the August 2026 deadlines be postponed by the Digital Omnibus?
On March 13, 2026, the EU Council agreed on its negotiating position for the Digital Omnibus, which proposes pushing back high-risk deadlines to December 2027 and August 2028 respectively. However, this extension only takes legal effect once the full package is adopted by the EU Parliament and Council — which hasn't happened yet. Until then: August 2, 2026 is the binding deadline. Plan accordingly.
Is ChatGPT a high-risk AI system?
No — ChatGPT as a general-purpose language model is minimal risk. High-risk status is determined by the specific use case: if you use ChatGPT to screen job applications and inform hiring decisions, that application context may be high-risk — because it's the use case (HR decision-making) that triggers high-risk classification, not the model itself.
The EU AI Act doesn't mandate an AI officer the way the GDPR mandates a Data Protection Officer. That said, it's strongly advisable to designate an internal point of contact for AI compliance. If you already have a DPO, that person can often take on AI compliance tasks as well — there's significant content overlap between the two roles.
How likely are fines, realistically?
The Bundesnetzagentur has signaled that its initial approach will focus on guidance and corrective action — not immediate fines. But that doesn't mean zero risk. Realistically: companies may receive a corrective notice with a compliance deadline, and non-compliance after that can trigger sanctions. Beyond regulatory risk, civil liability exposure grows — especially for companies running high-risk systems. Companies with documented action plans are in a far better position.
Do we have to label AI-generated content?
Starting August 2026, yes — for published AI-generated images, videos, audio, and text, Article 50 requires clear labeling. This applies to marketing materials, social media posts featuring AI-generated images, and synthetic media. Internal AI-generated documents (e.g., internal summaries) are not subject to the labeling requirement.
What is the KI-MIG?
The KI-MIG (KI-Marktüberwachungs- und Implementierungsgesetz) is Germany's national implementing law for the EU AI Act, passed by the federal cabinet on February 11, 2026. It designates the Bundesnetzagentur as Germany's AI market surveillance authority, establishes domestic enforcement rules, and creates the legal foundation for AI regulatory sandboxes. For businesses in Germany: the Bundesnetzagentur is the authority you'll interact with if there's ever an audit.
What is the difference between the EU AI Act and the GDPR?
The GDPR protects personal data. The EU AI Act protects fundamental rights more broadly — including freedom from discrimination, fair decision-making, and human oversight of consequential AI decisions. There's significant overlap (especially where AI processes personal data in HR or finance contexts), but the two frameworks have different protective goals. Being GDPR-compliant gives you a strong foundation, but you'll still need additional AI Act-specific measures.
What is an AI inventory, and how do we build one?
An AI inventory is a structured register of all AI systems used in your organization — similar to a GDPR Record of Processing Activities, but scoped to AI. It captures: system name, vendor, use area, affected individuals, data processing, active AI features, and a preliminary risk classification. The fastest starting point: use your existing GDPR Record of Processing Activities and flag every tool that has AI features. You'll have your first draft in a few hours.
Bottom Line: Three Actions to Take Right Now
The EU AI Act is no longer a future concern. It’s partially in force already — and the next major wave hits on August 2, 2026. Here’s where to start:
- Run your AI inventory — What AI systems are you using? List them all, including embedded AI features in existing tools.
- Document Article 4 training — This has been required since February 2025. A brief training session plus a written record is enough to get started.
- Determine your risk tier — Use the classification table in this guide. For any high-risk systems, bring in expert guidance early.
Companies that start today have enough runway to be fully compliant by August 2026 — without last-minute stress and without paying premium rates for emergency consulting.
Sources
The legal references, regulatory statements, and data cited in this article are based on the following documents:
- European Parliament & Council – Regulation (EU) 2024/1689 (EU AI Act), in force since August 1, 2024. eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L_202401689
- Council of the EU – Council agrees position to streamline rules on artificial intelligence (Digital Omnibus), press release, March 13, 2026. consilium.europa.eu/en/press/press-releases/2026/03/13/…
- German Federal Ministry for Digital and Transport – KI-Marktüberwachungs- und Implementierungsgesetz (KI-MIG), cabinet decision February 11, 2026.
- Bundesnetzagentur – Information on AI system market surveillance. bundesnetzagentur.de
- IHK Stuttgart – FAQ: AI Act / KI-Verordnung for businesses (2025/2026). ihk.de/stuttgart/…/faq-ai-act-ki-verordnung-6086082
- European Commission – AI Act Service Desk – Guidance on Articles 4 and 50. ai-act-service-desk.ec.europa.eu
- caralegal – AI Literacy under Art. 4 AI Act: What companies need to know now (2025). caralegal.eu/blog/ki-kompetenz-nach-art-4-ki-vo
- activeMind.legal – GDPR and AI Act: Similarities and differences (2025). activemind.legal/de/guides/dsgvo-ai-act