From EU AI Act readiness to ISO 42001 certification, NIST AI RMF alignment and ethical impact assessments — Dutient helps organisations deploy AI responsibly, at speed.
The regulations that govern AI are no longer hypothetical. They carry teeth, deadlines, and extraterritorial reach. Here's what your organisation needs to be across.
ISO/IEC 42001 (Certifiable): The first internationally recognised management system standard for AI. It provides a certifiable framework covering governance, risk, and responsible AI development and use for any organisation deploying AI systems, and is structured to sit alongside ISO 27001 and ISO 9001 in an integrated management system.
NIST AI RMF (Risk-based): A voluntary but widely adopted framework from the US National Institute of Standards and Technology. Structured around four core functions — Govern, Map, Measure, Manage — it provides actionable guidance on AI risk at every lifecycle stage and is increasingly referenced in US federal procurement requirements.
OECD AI Principles (Policy-aligned): Endorsed by 46 governments, the OECD AI Principles define five pillars of trustworthy AI: inclusive growth, human-centred values, transparency, security, and accountability. A foundational reference for enterprise governance design and a baseline test of regulatory alignment across jurisdictions.
COBIT and ITIL (Enterprise-ready): Organisations with mature COBIT or ITIL environments can extend their existing governance structures to cover AI systems. These frameworks map AI-specific controls into established IT governance and service management disciplines, minimising the overhead of building AI governance from scratch.
EU AI Act (Mandatory): The world's first comprehensive AI regulation. The EU AI Act classifies AI systems into risk tiers and imposes rigorous requirements on high-risk applications — including conformity assessments, technical documentation, human oversight, and mandatory EU database registration. Penalties reach €35M or 7% of global annual turnover.
General-purpose AI under the EU AI Act (GPAI compliance): GPAI models — including large language models used internally or embedded in products — face transparency obligations, copyright compliance requirements, and systemic risk evaluation when capability thresholds are exceeded. Any organisation using foundation models, including via third-party APIs, must understand these provisions.
GDPR Article 22 (Privacy-linked): Article 22 grants individuals the right not to be subject to solely automated decisions that produce legal or similarly significant effects. Organisations using AI in hiring, credit, insurance, or customer services must build explainability and human review mechanisms — or demonstrate a documented lawful basis for exemption.
South Korea AI Act (Enacted 2024): Enacted in 2024, South Korea's AI Act establishes risk-based obligations for AI systems, introduces requirements for high-impact AI in critical sectors, and mandates notification rights when AI interacts with individuals. Organisations with South Korean operations or users must map their AI use cases against these provisions as enforcement ramps up.
India (In development): India is developing a principles-based AI regulatory framework aligned with its Digital India agenda. Early signals point to sector-specific AI rules in healthcare, finance, and critical infrastructure — and to requirements for AI systems handling personal data under the Digital Personal Data Protection Act 2023.
Singapore (FEAT-aligned): Singapore's Monetary Authority has published the FEAT (Fairness, Ethics, Accountability, Transparency) principles for financial-sector AI, alongside a national AI Governance Framework applicable across industries. Increasingly referenced internationally as a model for proportionate, outcome-based AI governance.
Saudi Arabia (Vision 2030): Saudi Arabia's National Data Management Office has established an AI ethics and governance framework as part of Vision 2030's digital transformation agenda. Organisations operating in KSA — particularly in government, healthcare, and financial services — must align AI systems with NDMO principles on fairness, accountability, and transparency.
UAE (Strategy 2031): The UAE's National AI Strategy 2031 positions the country as a global leader in AI adoption. The government has issued sector-specific AI guidance and ethics principles, with regulatory development ongoing across financial services, healthcare, and public services. Early alignment positions organisations favourably for UAE market access.
Our team is fluent across the full landscape of international AI standards, ethics frameworks, and regulatory requirements — so your programme is built on authoritative foundations.
Every engagement is scoped to your organisation's maturity, sector, and regulatory obligations — and built to be sustained, not presented and filed.
Systematic assessment of AI systems for fairness, bias, societal harm, and fundamental rights impact. We surface risks before deployment and build remediation plans proportionate to each system's risk tier and operational context.
A structured evaluation of AI system impacts on fundamental rights, as required under the EU AI Act for high-risk applications. We conduct fundamental rights impact assessments (FRIAs) that are legally defensible, thorough, and integrated into your broader AI risk documentation.
End-to-end risk scoring across the full AI lifecycle — from training data and model selection through deployment, monitoring, and decommissioning. Outputs include a risk register, risk heatmap, and prioritised mitigation roadmap.
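In its simplest form, a lifecycle risk register of this kind is structured data that can be scored and prioritised programmatically. The sketch below is illustrative only — the lifecycle stages, the 5-point likelihood/impact scale, and the example risks are our assumptions, not a Dutient deliverable — and uses the classic likelihood × impact heatmap score to produce a prioritised mitigation list.

```python
from dataclasses import dataclass

# Illustrative lifecycle stages; a real register follows the
# organisation's own taxonomy and risk methodology.
STAGES = ["training data", "model selection", "deployment",
          "monitoring", "decommissioning"]

@dataclass
class Risk:
    system: str
    stage: str          # one of STAGES
    description: str
    likelihood: int     # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int         # 1 (negligible) .. 5 (severe)   -- assumed scale

    @property
    def score(self) -> int:
        # Heatmap cell value: likelihood x impact, range 1..25.
        return self.likelihood * self.impact

def prioritised(register: list[Risk]) -> list[Risk]:
    """Highest-scoring risks first -- the basis of a mitigation roadmap."""
    return sorted(register, key=lambda r: r.score, reverse=True)

# Hypothetical example entries.
register = [
    Risk("cv-screening", "training data", "Historical hiring bias in labels", 4, 5),
    Risk("cv-screening", "monitoring", "No drift alerting in production", 3, 3),
    Risk("chatbot", "deployment", "Prompt injection via user input", 4, 4),
]

for r in prioritised(register):
    print(f"{r.score:>2}  {r.system:<13} {r.stage:<15} {r.description}")
```

The same records feed both outputs the text mentions: grouping by (likelihood, impact) yields the heatmap, while the sort order yields the mitigation roadmap.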
A rigorous gap analysis measured against ISO 42001, the EU AI Act, NIST AI RMF, and any sector-specific standards applicable to your organisation. Delivered with a structured remediation plan, effort estimates, and a compliance roadmap.
We design the governance architecture your AI programme needs to operate responsibly at scale — including AI oversight committees, escalation paths, role-based accountability frameworks, and the policy and procedure library required by ISO 42001 and the EU AI Act.
A comprehensive mapping of every AI system, model, tool, and third-party AI component across your organisation. Each system is risk-classified against EU AI Act categories and applicable standards, producing the inventory baseline that all governance work depends on.
Hands-on implementation of technical safeguards: explainability tooling, monitoring pipelines, human-in-the-loop review mechanisms, bias detection integrations, and audit logging — built into your AI systems rather than layered on top.
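Two of those safeguards — human-in-the-loop review and audit logging — can be combined in a single decision gate. The sketch below is a minimal illustration under assumed names: the review band, field names, and in-memory log are ours, and a production audit log would be append-only, tamper-evident storage rather than a Python list.

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class Decision:
    subject_id: str
    model_score: float   # e.g. a probability from a credit or hiring model
    outcome: str         # "automated" | "needs-human-review"

# Assumed policy: scores in this uncertain band are routed to a reviewer.
REVIEW_BAND = (0.4, 0.7)

# Stand-in for durable, append-only audit storage.
audit_log: list[dict] = []

def decide(subject_id: str, model_score: float) -> Decision:
    low, high = REVIEW_BAND
    outcome = ("needs-human-review" if low <= model_score <= high
               else "automated")
    decision = Decision(subject_id, model_score, outcome)
    # Every decision is logged -- inputs, outcome, timestamp -- whether or
    # not a human was involved, so the trail supports later audit.
    audit_log.append({**asdict(decision), "ts": time.time()})
    return decision

print(decide("applicant-001", 0.55).outcome)  # routed to a human
print(decide("applicant-002", 0.92).outcome)  # automated path, still logged
```

Building the gate into the decision path, rather than bolting a log onto the model afterwards, is what "built in rather than layered on top" means in practice.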
DAMA DMBOK-aligned frameworks for data quality, data lineage, and data lifecycle management specifically tailored to AI use cases. We ensure the data underpinning your AI systems meets the traceability, quality, and documentation requirements of the EU AI Act and ISO 42001.
Start with a complimentary AI Risk Assessment — we'll review your current AI portfolio and identify your highest-priority governance gaps.
Start with a Free AI Risk Assessment