The AI Act Compliance Deadline is Approaching: Is Your Business Prepared?
The European Union’s AI Act, the world’s first comprehensive regulatory framework for artificial intelligence, entered into force on August 1, 2024. The legislation sets out a series of compliance obligations, with key deadlines rolling out through 2030. The first critical deadline falls on February 2, 2025, making it essential for businesses that use AI systems to start preparing now to avoid non-compliance and the risks that come with it.
Immediate Action Required: February 2025 Deadline
The February 2, 2025 deadline is the first milestone under the AI Act. By this date, businesses must identify and stop using, developing, or distributing any AI systems classified as “unacceptable risk”. These prohibited practices, listed in Article 5 of the Act, include social scoring and real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions), among others. Failure to comply with these prohibitions could lead to severe legal and financial consequences.
Assess and Act Now
For providers and deployers of AI systems alike, compliance starts with a thorough assessment of their AI solutions. Identifying which systems fall under the “unacceptable risk” and “high-risk” categories is the first step toward compliance, and it is crucial to avoiding fines and reputational damage. Early assessment allows businesses to adapt to upcoming obligations effectively and build a roadmap for compliance.
AI Literacy: Building Your Governance Framework
The AI Act also addresses AI literacy directly: Article 4 requires providers and deployers to take measures to ensure a sufficient level of AI literacy among their staff, and this obligation likewise applies from February 2, 2025. AI literacy is essential for understanding, managing, and mitigating the risks associated with AI deployment, and it forms a core part of a comprehensive AI governance framework. Without it, AI systems may be deployed without adequate oversight, leading to unintended consequences and potential compliance violations.
As the more extensive requirements for high-risk AI systems come into effect in August 2026, building AI literacy among your staff becomes increasingly critical. For example, an HR team using AI for resume screening may not realise that such recruitment tools are expressly classified as high-risk under the Act, leaving the business exposed to obligations it has not prepared for. Ensuring AI literacy within your team now can significantly reduce the risk of non-compliance and help prepare for future obligations.
Are You a “Deployer”? You Might Be
The AI Act broadly defines “deployers” as any organisation using AI systems in a professional context within the EU market, which can include businesses in sectors ranging from healthcare to retail. It is therefore crucial for organisations to review their AI usage, determine whether they act as providers or deployers, and assess their compliance requirements under the Act. Overlooking this analysis could mean unknowingly operating high-risk AI systems, with potentially severe legal consequences.
What Should Your Business Do Now?
- Assess Your AI Systems: Conduct an immediate review of your AI solutions to identify any that fall under the “unacceptable risk” category, and discontinue their use.
- Ensure AI Literacy: Begin building AI literacy within your team as part of a robust governance framework. This will be crucial for meeting compliance requirements, especially those related to high-risk AI systems in the future.
- Prepare for Future Obligations: While the February 2025 deadline is the first, further obligations for high-risk systems roll out in August 2026 and beyond. Early preparation will position your business for long-term compliance.
- Consult with Legal Experts: Navigating the AI Act’s complexities requires professional guidance. Consulting AI law experts can help align your strategies with current and future regulatory requirements.
Need Assistance? Contact Us
360 Business Law specialises in helping businesses navigate the complexities of the AI Act. You can email us at legal@360businesslaw.com or call us on 0333 772 7736.