
Do You Have an AI Policy?

Every Business Needs AI Policies.

AI audits are coming. Documented decisions in your AI policies demonstrate responsible AI selection and use, which is crucial for compliance and for building trustworthy, transparent, ethical AI solutions.


EU AI Act Compliance

The European Union (EU) Artificial Intelligence (AI) Act is a legal framework that aims to regulate the use of AI systems within the EU, taking a proactive approach to AI governance.

Prepare Your Business for AI.

AI Safety Through Robust Policies

COMPLIANCE

    • In the event of an audit, documented decisions in your policy demonstrate responsible, transparent and trustworthy AI selection and use, which are crucial for compliance and building trust.

    • A policy isn't just words on paper: it is the foundation for AI literacy in your business. This makes good business sense. The policy supports your implementation and maximises your AI investment, saving time and money.

RISK

    • Without a policy, assuming you know your AI risk category is itself a risk. A policy forces you to classify and document your AI systems, eliminating costly surprises and compliance headaches.

OPERATIONS

    • Your AI Policy maps your business strategy to your AI objectives. It also sets the benchmark for documenting the AI use, safety, transparency and compliance measures the EU AI Act requires.

EU AI Act Classification Based On Potential To Cause Harm.

Prohibited

Description

AI systems considered to pose an unacceptable risk are prohibited. These are systems that are deemed to be fundamentally incompatible with EU values and fundamental rights.

Examples

  • Real-time biometric identification in public spaces (with limited exceptions for law enforcement).  

  • Social scoring systems.

  • AI systems that manipulate human behavior or exploit the vulnerabilities of individuals.

  • AI systems used for indiscriminate surveillance.

High Risk

Description

AI systems classified as high-risk are subject to stringent requirements under the AI Act. These systems are used in sectors where the health, safety, or fundamental rights of EU citizens are significantly at stake.

Examples

  • Critical infrastructure (energy, transport, etc).  

  • Education and vocational training.

  • Employment, worker management, and access to self-employment.

  • Essential public services (healthcare, justice, banking, etc.).

  • Law enforcement, migration, and border control management.

Limited/Minimal Risk

Description

AI systems with limited or minimal risk are subject to fewer requirements than high-risk systems. These two categories primarily carry transparency obligations, which vary depending on the company's role as a provider, deployer, distributor, or importer.

Examples

  • AI systems like chatbots or AI-generated content where the main risk is a lack of transparency (i.e., not knowing you're interacting with an AI).

Classification Compliance Requirements.

Unacceptable Risk

These AI systems are banned outright. Anyone placing such a system on the market, putting it into service, or using it within the EU faces legal penalties.

High Risk

These AI systems will require implementing quality management systems, conformity assessment (often involving third-party assessment), data governance, technical documentation, and human oversight, to name a few requirements.

Limited/Minimal Risk

While the EU AI Act does not explicitly mandate specific checklists for limited- and minimal-risk AI systems, we draw guidance from ISO/IEC 42001 to ensure your business adopts AI safely, with the transparency and effective AI literacy training needed to achieve compliance.

AI Literacy

The EU AI Act requires organisations to establish governance frameworks, with clear roles and responsibilities, that support AI literacy. This enables personnel to understand the opportunities and risks associated with AI systems, particularly high-risk ones, ensuring responsible deployment and compliance.

EU AI Act Requires Product Safety and Fundamental Rights

The EU AI Act integrates fundamental rights and product safety to ensure that AI systems deployed within the EU are trustworthy, ethical, and safe for users, thereby mitigating potential harms and upholding European values.


Fundamental Rights

Specific testing, such as fundamental rights impact assessments, must be conducted before deployment to ensure AI systems align with fundamental rights. Furthermore, governance should be in place to actively promote fairness, transparency and safety for all individuals impacted by AI technologies.


Product Safety

All AI systems must be developed and deployed according to product safety standards to ensure they are technically robust and safe for their intended use. AI systems identified as high-risk products must adhere to mandatory conformity assessment procedures to demonstrate their safety before entering the market.


BOOK YOUR FREE CONSULTATION

Don't start on a blank page.

Get a head start and accelerate your AI adoption with a documented plan. When you schedule a call with us, we'll provide you with a complimentary AI policy and essential resources to streamline your path to AI compliance.
