What is the EU AI Act 2025 and how to comply

What is the EU AI Act?
The AI Act takes a risk-based approach, classifying AI systems into four risk categories (a simple classification sketch follows this list):
Unacceptable risk: AI systems posing unacceptable risks to fundamental rights and Union values are prohibited (under Article 5 AI Act).
High risk: AI systems that pose high risks to health, safety, and fundamental rights. (These systems are classified as ‘high-risk’ by Article 6 AI Act in conjunction with Annexes I and III AI Act.)
Transparency risk: AI systems that pose limited risk but raise transparency concerns. (These systems are subject to transparency obligations under Article 50 AI Act.)
Minimal to no risk: AI systems posing minimal to no risk are not regulated, but providers and deployers may voluntarily adhere to codes of conduct.
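For teams that track their AI systems in code, the four categories can be mirrored as a simple internal label set. Below is a minimal, hypothetical Python sketch; the `RiskTier` enum and `governing_rule` helper are illustrative names, and the mapping only restates the articles listed above.

```python
from enum import Enum


class RiskTier(Enum):
    """Internal labels mirroring the AI Act's four risk categories."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, Article 5
    HIGH = "high"                  # Article 6 with Annexes I and III
    TRANSPARENCY = "transparency"  # transparency duties, Article 50
    MINIMAL = "minimal"            # no specific obligations


def governing_rule(tier: RiskTier) -> str:
    """Return the part of the Act most relevant to a tier (simplified)."""
    return {
        RiskTier.UNACCEPTABLE: "Article 5 (prohibited)",
        RiskTier.HIGH: "Article 6 + Annexes I and III",
        RiskTier.TRANSPARENCY: "Article 50",
        RiskTier.MINIMAL: "voluntary codes of conduct",
    }[tier]


print(governing_rule(RiskTier.HIGH))  # -> "Article 6 + Annexes I and III"
```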
What is Article 5 of the AI Act?
Article 5 prohibits the following AI practices:
- Manipulative Techniques:
- AI systems that use subliminal techniques beyond a person’s consciousness to manipulate behaviour in a way that significantly impairs their ability to make informed decisions, causing potential harm.
- Exploitation of Vulnerabilities:
- AI systems that exploit vulnerabilities of specific groups (e.g., based on age, disability, or social/economic situation) to distort behaviour, leading to significant harm.
- Social Scoring:
- AI systems that evaluate or classify individuals based on their social behaviour or personal traits, resulting in detrimental or unfair treatment in unrelated social contexts.
- Predictive Policing:
- AI systems that predict the likelihood of individuals committing crimes based solely on profiling or personality traits, except when supporting human assessments based on objective facts.
- Facial Recognition:
- AI systems that create facial recognition databases through untargeted scraping of images from the internet or CCTV footage.
- Emotion Recognition:
- AI systems that infer emotions in workplaces or educational institutions, except for medical or safety reasons.

What is Article 6 of the AI Act?
- High-Risk AI Systems:
- AI systems intended to be used as safety components of products, or as products themselves, covered by the EU legislation listed in Annex I.
- AI systems listed in Annex III, which include specific use cases such as biometric identification, critical infrastructure management, education, employment, and law enforcement.
- Conditions for High-Risk Classification:
- The AI system must be intended to perform a task that is critical for the safety or fundamental rights of individuals.
- The AI system must pose a significant risk to health, safety, or fundamental rights if it fails to function correctly.
What is Article 50 of the AI Act?
- User Awareness:
- Providers must ensure that users are informed when they are interacting with an AI system, unless this is obvious from the context or the system is authorised by law for purposes such as crime detection.
- Synthetic Content:
- AI systems generating synthetic content (e.g., deepfakes) must mark their outputs as artificially generated.
- Emotion Recognition and Biometric Categorization:
- Deployers must inform the individuals exposed when AI systems are used for emotion recognition or biometric categorization, except where these are permitted by law for law-enforcement purposes. (A short illustrative snippet covering these transparency duties follows this list.)
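In practice, these duties often translate into labelling outputs at the point where they are shown or stored. The snippet below is only an illustration: the `with_disclosure` and `mark_as_ai_generated` helpers are hypothetical names, and real deployments would follow whatever marking formats the applicable technical standards require.

```python
# Hypothetical helpers illustrating two Article 50 duties: telling users they
# are interacting with AI, and marking generated content as AI-generated.

AI_DISCLOSURE = "You are interacting with an automated AI assistant."


def with_disclosure(first_reply: str) -> str:
    """Prepend an AI-interaction notice to the first reply of a chat session."""
    return f"{AI_DISCLOSURE}\n\n{first_reply}"


def mark_as_ai_generated(metadata: dict) -> dict:
    """Add a machine-readable 'artificially generated' marker to content
    metadata. The field names are illustrative; real deployments should
    follow whatever marking standards apply."""
    return {**metadata, "ai_generated": True}


print(with_disclosure("Hi! How can I help you today?"))
print(mark_as_ai_generated({"type": "image", "format": "png"}))
```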
Prohibited AI Practices:
AI systems that manipulate, deceive, or exploit vulnerabilities.
AI systems used for social scoring.
AI systems that predict criminal behavior based solely on profiling.
AI systems that create facial recognition databases through untargeted scraping.
AI systems that infer emotions in workplaces and educational institutions, except for medical or safety reasons.
AI systems that categorize individuals based on sensitive characteristics like race or political opinions.
Real-time remote biometric identification systems in public spaces for law enforcement, with some exceptions.
Business considerations:
Ethical use of AI:
Avoid practices that manipulate, deceive, or exploit vulnerabilities, and avoid using AI to predict criminal behavior. Any business using high-risk AI must follow the Act's strict compliance requirements.
Transparency:
Be transparent about how AI is used in your workplace, including AI systems that influence processes affecting employees. Inform employees about the presence of AI and its purpose. Companies using limited-risk AI, such as a chatbot, must disclose that users are interacting with AI.
Data privacy:
Protect the privacy of employee data.
Ensure compliance with data protection and GDPR.
Bias and training:
Ensure AI systems do not discriminate based on the characteristics of an individual; a simple illustrative check follows below.
Provide training and support to help employees adapt to AI, where applicable.
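One simple, widely used sanity check for this, not something mandated by the AI Act itself, is to compare outcome rates across groups for AI-assisted decisions. The sketch below is a hedged illustration in Python; the group labels, data shape, and the 0.8 threshold are assumptions for the example rather than legal requirements.

```python
# A minimal, illustrative bias check (not a method prescribed by the AI Act):
# compare selection rates across groups for an AI-assisted decision.
from collections import defaultdict


def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Return the share of positive outcomes per group.

    `decisions` holds (group_label, selected) pairs, e.g. from an
    AI-assisted screening tool. Group labels here are hypothetical.
    """
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}


# Example: flag when one group's selection rate is far below another's.
rates = selection_rates([("A", True), ("A", True), ("B", False), ("B", True)])
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Review for potential disparate impact:", rates)
```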
Compliance with regulations:
Adhere to AI regulations and carry out the required conformity assessments for high-risk AI systems.
Which industries will this affect the most?
Tech Companies (AI developers, software firms, SaaS providers)
Finance & Insurance (AI-driven risk assessments, fraud detection)
Healthcare & HR (AI used for recruitment, medical diagnostics)
Retail & Marketing (AI-driven decision-making affecting customers)
Legal & Compliance Teams (need to ensure AI tools used internally comply)

Tips for staying compliant with the AI Act:
Create a list of all uses of AI in your workplace. This could include:
Chatbots, virtual assistants, HR software using AI, Automated decision-making software, AI security tools, or AI data analytics.
Determine the level of risk of each type of AI used:
1. Unacceptable-risk AI (banned)
Examples: Social scoring, real-time biometric surveillance, or AI manipulation.
2. High-risk AI (strict compliance)
Examples: AI used in HR decisions, AI used in finance, AI used in medical diagnostics.
3. Limited-risk AI (transparency obligations)
Examples: Chatbots and customer support AI.
4. Minimal-risk AI (no regulation)
Examples: Spam filters, spelling checkers, and standard AI automation.
After identifying the risk level of each AI system used within your organisation, make sure that you comply with the corresponding requirements (a simplified sketch of this last step follows below).
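To tie the checklist together, here is a rough sketch of that last step: mapping each inventoried AI use to the broad action its risk level implies. The inventory entries, level names, and action strings are simplified assumptions for illustration, not a full statement of the Act's obligations.

```python
# Simplified, illustrative mapping from the four risk levels above to the
# broad action each one implies; not a complete statement of legal duties.
OBLIGATIONS = {
    "unacceptable": "stop using the system (prohibited practice)",
    "high": "apply strict compliance measures (risk management, documentation, human oversight)",
    "limited": "meet transparency duties (disclose AI use, label synthetic content)",
    "minimal": "no specific obligations; voluntary codes of conduct",
}


def next_steps(inventory: dict[str, str]) -> None:
    """Print the broad compliance action for each inventoried AI use.

    `inventory` maps an AI use (e.g. 'support chatbot') to its risk level.
    """
    for use, level in inventory.items():
        action = OBLIGATIONS.get(level, "classify this system before relying on it")
        print(f"{use}: {action}")


# Hypothetical workplace inventory built from the checklist above.
next_steps({"support chatbot": "limited", "CV screening tool": "high"})
```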
