The EU AI Act: Financial Implications and Compliance Challenges
- Cytopus
- Feb 13
- 4 min read

With the rapid adoption of Artificial Intelligence in business, the European Union has adopted the Artificial Intelligence Act (AI Act), a regulation that sets out specific requirements for the safe use of AI. Although it entered into force on 1 August 2024, most of its provisions become applicable two years later, from 2 August 2026 (except for the specific provisions staged under Article 113). This means that your business still has time to take action and prepare for compliance, avoiding financial penalties and reputational damage on the market.
Timeline of the AI Regulation

EU AI Act
July 12, 2024: The EU AI Act was published in the Official Journal of the EU, making it the first comprehensive horizontal legal framework for the regulation of AI across the EU.
August 1, 2024: The AI Act entered into force.
August 2, 2026: The AI Act becomes generally applicable, except for specific provisions staged under Article 113 (for example, the prohibitions on unacceptable-risk AI apply from February 2, 2025, and the rules for general-purpose AI models from August 2, 2025).
AI Liability Directive
Still in draft form and yet to be considered by the European Parliament and the Council of the EU; there is no clear timeline for its adoption or enforcement.
Council of Europe's Framework Convention on AI
September 5, 2024: The Framework Convention on AI was signed by several countries. It will enter into force on the first day of the month following a period of three months after at least five signatories (including at least three Council of Europe member states) have ratified it.
Who Is Affected?
The EU AI Act applies to:
Any provider offering or deploying an AI system or general-purpose AI model within the EU market, regardless of whether the provider is based in the EU or in a non-EU country.
Any deployer of an AI system established within the EU.
Any provider or deployer of an AI system based outside the EU, if the system's output is intended for use within the EU.
It is also worth noting that the proposed AI Liability Directive would govern non-contractual, fault-based civil law claims within the EU and would apply to all sectors.
What are the Penalties?
Organizations found in violation could face restrictive measures, product withdrawals, and financial penalties, following a tiered approach similar to the GDPR. In each tier, the applicable cap is whichever amount is higher:
Up to €35 million or 7% of global annual turnover for prohibited AI practices.
Up to €15 million or 3% of global annual turnover for breaches of most other obligations under the Act.
Up to €7.5 million or 1% of global annual turnover for providing false or incomplete information to authorities.
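To make the "whichever is higher" rule concrete, here is a minimal Python sketch; the company figures are hypothetical and serve only to show which cap dominates at a given turnover:

```python
def fine_cap(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Return the maximum possible fine: the higher of the fixed cap
    and the given percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover.
turnover = 2_000_000_000

# Prohibited AI practices: up to EUR 35M or 7% of turnover.
print(fine_cap(35_000_000, 0.07, turnover))  # 140000000.0 -> the 7% cap dominates

# False or incomplete information: up to EUR 7.5M or 1% of turnover.
print(fine_cap(7_500_000, 0.01, turnover))   # 20000000.0 -> the 1% cap dominates
```

For large organizations, the percentage cap quickly outgrows the fixed amount, which is why exposure scales with turnover rather than being bounded by the headline figures.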
How to Implement It in Your Business?
Take Stock of Your AI Systems
Start by understanding where and how AI is being used within your organization. Then assess all systems currently in use, under development, or planned for procurement.
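One practical way to keep this inventory is a structured record per system. The sketch below is illustrative only; the field names and example entries are our own assumptions, not a schema mandated by the AI Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One row of an AI system inventory (illustrative fields, not an official schema)."""
    name: str
    purpose: str                   # what the system does, in business terms
    status: str                    # "in use", "under development", or "planned"
    vendor: str | None = None      # None for systems built in-house
    uses_personal_data: bool = False
    last_reviewed: date | None = None

inventory = [
    AISystemRecord("CV screener", "ranks job applications", "in use",
                   vendor="ExampleVendor", uses_personal_data=True),
    AISystemRecord("Support chatbot", "answers customer FAQs", "under development",
                   last_reviewed=date(2025, 1, 15)),
]
```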
Categorize AI Systems by Risk
Once you have identified your AI systems, classify them according to the risk categories outlined in the EU AI Act (a minimal triage sketch follows this list):
Unacceptable Risk: These are banned outright. Examples include real-time biometric identification in public spaces, social scoring systems, and techniques that exploit vulnerable groups.
High Risk: These systems are permitted but come with strict requirements. They must pass a conformity assessment before deployment and be registered in the EU database for high-risk systems, and they require robust human oversight, risk management, logging, and cybersecurity measures. Examples include systems used in critical infrastructure, hiring processes, credit scoring, and insurance claims processing.
Limited or Minimal Risk: Limited-risk systems carry transparency obligations, such as informing users that they are interacting with AI (for example, chatbots) or that content is AI-generated (for example, deepfakes); minimal-risk systems face no additional obligations.
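As a starting point for this exercise, the sketch below shows a provisional keyword triage over the categories above. It is illustrative only: the keywords and the default-to-minimal fallback are our own simplifications, and any real classification must be grounded in the Act's actual provisions (Article 5 and Annex III) and reviewed by legal counsel:

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, registration, strict oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# First-pass triage rules only: a real classification must follow the
# use cases defined in the Act itself, not keyword matching.
TRIAGE_RULES = {
    "social scoring": RiskCategory.UNACCEPTABLE,
    "credit scoring": RiskCategory.HIGH,
    "hiring": RiskCategory.HIGH,
    "chatbot": RiskCategory.LIMITED,
}

def triage(purpose: str) -> RiskCategory:
    """Return a provisional risk category for a system description."""
    for keyword, category in TRIAGE_RULES.items():
        if keyword in purpose.lower():
            return category
    return RiskCategory.MINIMAL  # default pending manual legal review

print(triage("chatbot answering customer FAQs"))       # RiskCategory.LIMITED
print(triage("credit scoring for loan applications"))  # RiskCategory.HIGH
```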
Get Ready for Compliance
To prepare, organizations affected by the AI Act should take the following steps:
Assess Risks: Understand the risks your AI systems might pose.
Raise Awareness: Educate teams and conduct employee training to raise awareness regarding the new regulations and their implications.
Design Ethically: Build systems with compliance in mind from the outset.
Assign Responsibility: Clearly define who oversees AI compliance within your organization.
Stay Informed: Keep up with regulatory updates and emerging best practices.
Establish Governance: Develop a formal framework for the responsible management of AI systems.
How Can Cytopus Help?
Compliance Gap Analysis: Our experts will identify areas where AI systems or processes may not align with the AI Act's requirements, such as transparency, accountability, or data governance, and recommend solutions.
AI System Inventory Assessment: The Cytopus team will conduct a thorough review of your organization's AI systems to determine which fall under the scope of the AI Act and identify their classification (low-risk, high-risk, or prohibited).
Employee Training and Awareness: At Cytopus, we provide tailored training programs to educate your teams about the implications of the AI Act and other compliance frameworks (e.g., DORA, GDPR), fostering a culture of compliance and responsible AI use.
Incident Response and Reporting: Our team of experts will develop sophisticated incident response plans to address potential AI-related risks and ensure timely reporting of non-compliance or adverse events as mandated by the AI Act.
Risk Management Framework Implementation: Cytopus' experts will assist in establishing a robust risk management system tailored to AI technologies, ensuring proper monitoring, documentation, and mitigation strategies.