ISO/IEC 42001:2023 ushers in a new era of trusted agentic automation 


Scaling agentic automation with trust, not risk

Agentic automation has ignited excitement across enterprises. Leaders see the potential: faster decisions, adaptive workflows, and productivity at scale. But with that excitement comes hesitation. CISOs and CIOs tell me the same thing:

We can’t move forward without complete confidence that our data will never be exposed, misused, or pulled into model training without our explicit consent. How do you build and govern AI responsibly, securely, and transparently—so that this never happens?

That’s exactly what ISO/IEC 42001:2023 delivers.

As the world’s first international AI management system standard, it adds an important piece to the broader trust puzzle, providing third-party assurance to executives that AI is built and governed responsibly, securely, and transparently.

A new benchmark for responsible AI

Two decades ago, ISO 27001 became the gold standard for information security. Today, ISO/IEC 42001:2023 sets the same benchmark for AI, demanding strict lifecycle governance across security, fairness, transparency, and accountability.

For enterprises, this means trust no longer rests on a vendor’s promise. It comes with independent validation that AI partners meet the world’s most rigorous standards. In practice, responsible AI management translates into what every enterprise cares about:

  • Protecting sensitive data: critical information won’t be exposed or misused, which is one of the biggest concerns executives raise today

  • Reducing risk: InfoSec, legal, and compliance teams gain third-party assurance

  • Faster adoption: assessments and audits move more quickly, allowing enterprises to confidently deploy agentic automation where it matters most: the critical, high-volume processes that deliver the greatest impact when automated

The platform-level certification that UiPath obtained is a clear example of validated trust: we sought external confirmation for our customers that our approach to responsible AI already meets the highest global benchmark.

To truly put our approach to the test, we partnered with Schellman, the first ISO/IEC 42001:2023 certification body accredited by ANAB in 2024 and well known for its rigor and quality.

Total validation = fully trusted AI

For enterprises, certifications only matter if they cover how AI is used across the entire artificial intelligence management system (AIMS): not just isolated services, but the full context in which they operate. Narrow certification scopes leave gaps that erode trust and make executives hesitate.

ISO/IEC 42001:2023 was created to close that gap. It validates how organizations govern the AI lifecycle: from design, development, and deployment to ongoing monitoring and accountability. That includes safeguards for data protection, fairness, transparency, and human oversight.

For enterprises, the signal is clear: AI isn’t governed on a vendor’s promise, but proven through an independent, rigorous framework. Paired with established standards such as ISO 27001, this signal removes doubt and hesitation.

ISO/IEC 42001:2023 certified by Schellman

“In April 2025, the UiPath Platform™ for agentic automation was released publicly. By May, confident in the governance built in from day one, we had already kicked off ISO/IEC 42001:2023 certification with our audit partner, Schellman, covering not a fragment but the entire UiPath AIMS scope. By September, we became one of the first enterprise automation vendors in the world to achieve this certification at platform-level scale.

Our customers have demanding use cases; they need assurance across the entire UiPath Platform. Our broad ISO/IEC 42001:2023 certification scope, alongside our other compliance certifications, gives them just that: assurance that the governance we’ve built into the entire automation fabric from day one meets the most rigorous international standards,” explained Sheron Chakalakal, Head of GRC at UiPath.

AI governance for real-world scenarios

I can’t speak to how other AI and enterprise automation vendors look at this certification, but from our perspective, it’s the perfect external validation for our customers to see that responsible AI has been a guiding principle from the start of our product development.

Everything we’ve built since we started on the agentic automation path rests on our historical strengths: safeguards for data protection, human oversight, and technical robustness. ISO/IEC 42001:2023 is the best external validation that the practices we’ve implemented from day one live up to (and even exceed) the world’s most demanding standards.

You can read more about our detailed approach in our responsible AI principles, which outline how we’ve embedded ethics, privacy, and security into every AI-powered capability we deliver—from day one.

UiPath Governance Framework

Our built-in multi-layer governance framework is designed to go deeper, ensuring that every layer—agentic, IT, and infrastructure—is held to the highest standards of trust, control, and accountability.

  • Agentic governance: guardrails that keep agents, robots, and workflows safe, with jailbreak defenses, personally identifiable information (PII) protection, and policy-first controls

  • IT governance: policies that protect data regardless of actor, with data loss prevention, unified audit trails, and natural language policy creation via UiPath Autopilot™

  • Infrastructure governance: encryption, customer-managed keys, and strict data residency to meet sovereignty and compliance needs

At the macro level, the AI Trust Layer provides a centralized console to manage enterprise-wide controls such as choice of LLM models and PII safeguards. Its purpose is to give enterprises confidence that AI adoption happens in a safe, compliant, and transparent way. Features like policy-aware execution, model transparency, and configurable data retention ensure AI assistance is secure and compliant by design.

We believe agentic automation must be both transformative and trustworthy. That means:

  • Customer data is never used to train external LLMs

  • AI traffic is routed and governed across multiple providers (Azure, OpenAI, AWS Bedrock, Google Vertex AI, Anthropic, etc.)

  • Customers have configurable controls for UiPath-managed models

  • AI Trust Layer enforces security and compliance policies such as PII masking, access control, and encryption
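To make the PII-masking control above concrete, here is a minimal sketch of in-flight masking applied before a prompt leaves the trust boundary. The patterns, function names, and placeholder format are illustrative assumptions for this post, not the UiPath AI Trust Layer API.

```python
import re

# Hypothetical detection patterns; a real system would use far richer
# detectors (NER models, locale-aware formats, custom entity types).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace detected PII spans with typed placeholders before the LLM call."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt

def call_llm(prompt: str) -> str:
    # Stand-in for a real provider call; only the masked prompt
    # ever crosses the trust boundary.
    return f"[model saw]: {prompt}"

masked = mask_pii("Contact jane.doe@bank.com, SSN 123-45-6789, about the loan.")
print(call_llm(masked))
# [model saw]: Contact <EMAIL>, SSN <SSN>, about the loan.
```

The key design point is that masking is enforced in the request path itself, so no individual agent or workflow can opt out of it.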

ISO/IEC 42001:2023 confirms these practices today, and our promise is that we will keep evolving our governance to stay ahead of tomorrow’s risks.

Let’s look at a couple of real-world, high-stakes scenarios where all these controls make a difference:

Financial services: guardrails that protect sensitive data

A global bank deploys agentic automation to accelerate credit risk analysis. An AI agent gathers financial records, synthesizes insights, and recommends approval thresholds. Without governance, the agent could inadvertently query unsecured sources or expose PII.

With UiPath governance in place:

  • Agent guardrail policies block AI agents from crossing approved data boundaries and escalate exceptions to human-in-the-loop review

  • PII detection and in-flight masking pseudonymize sensitive data before the LLM call is made

  • LLM restriction policies ensure that only approved models are used, so data compliance isn’t breached

  • Unified audit logs provide full trace logging of each agent prompt, output, decision, and action taken
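The restriction and audit controls above can be sketched in a few lines: every model request is checked against an approved list, and the decision is logged either way. The policy fields, model names, and log schema are assumptions for illustration, not the actual UiPath policy format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical approved-model list; in practice this would come from
# a centrally managed policy, not a hardcoded constant.
APPROVED_MODELS = {"azure-gpt-4o", "bedrock-claude"}

@dataclass
class AuditTrail:
    entries: list = field(default_factory=list)

    def record(self, agent: str, model: str, decision: str) -> None:
        # Append an immutable-style record of who asked for what, and the outcome.
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "model": model,
            "decision": decision,
        })

def route_request(agent: str, model: str, audit: AuditTrail) -> bool:
    """Allow the call only if the model is on the approved list; log either way."""
    allowed = model in APPROVED_MODELS
    audit.record(agent, model, "allowed" if allowed else "blocked")
    return allowed

audit = AuditTrail()
print(route_request("credit-risk-agent", "azure-gpt-4o", audit))    # True
print(route_request("credit-risk-agent", "unvetted-model", audit))  # False
```

Because blocked requests are logged alongside allowed ones, the audit trail captures attempted policy violations, not just successful calls.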

The result: faster, more consistent credit decisions without sacrificing data security or regulatory compliance.

Healthcare: human-in-the-loop for critical workflows

A hospital system uses agentic automation to triage patient intake notes and provide treatment options. Agents quickly analyze the intake notes, summarize medical histories, and flag urgent cases. But misclassification could carry severe consequences.

With UiPath governance in place:

  • Runtime guardrails trigger immediate human-in-the-loop escalation whenever anything hints at a critical case, giving clinicians the relevant contextual information

  • PII and protected health information (PHI) detection plus in-flight masking policies keep records within the designated data spaces

  • Explainability logs capture the logic behind each decision, giving the automation lead everything needed to optimize the system further
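The escalation guardrail above can be sketched as a simple routing rule: when an urgent signal appears in a note, the agent pauses and hands the case, with context, to a clinician review queue. The signal keywords, queue shape, and return values are illustrative assumptions, not a clinical triage implementation.

```python
# Hypothetical urgent-signal list; a real guardrail would use clinically
# validated criteria and model-based detection, not keyword matching.
URGENT_SIGNALS = ("chest pain", "shortness of breath", "sepsis")

def triage(note: str, review_queue: list) -> str:
    """Escalate to a clinician when an urgent signal appears; otherwise auto-route."""
    text = note.lower()
    hits = [s for s in URGENT_SIGNALS if s in text]
    if hits:
        # Human-in-the-loop: the agent stops and hands the note plus the
        # detected signals to a clinician, rather than acting on its own.
        review_queue.append({"note": note, "signals": hits})
        return "escalated"
    return "auto-routed"

queue = []
print(triage("Patient reports chest pain radiating to left arm.", queue))  # escalated
print(triage("Routine follow-up, mild seasonal allergies.", queue))        # auto-routed
```

The design choice worth noting is that escalation is the default on any hit: the agent never has to be confident a case is critical to involve a human, only suspicious that it might be.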

The result: clinicians save time, and patients are prioritized more accurately. Agentic automation strengthens care without undermining trust.

Together, these scenarios highlight how governance moves from principle to practice, showing enterprises that safeguards are not abstract policies, but tangible protections in the moments that matter most. 

In conclusion...

Enterprises no longer have to choose between innovation and safety. ISO/IEC 42001:2023 sets a new baseline for trust in AI: removing hesitation, clarifying accountability, and giving executives confidence to scale without compromise.

But standards are only the beginning. What will define the next decade is how organizations turn principle into practice, embedding governance not as an afterthought, but as the foundation for every agent, every workflow, every decision.

For executives, the message is clear: the freedom to innovate is now inseparable from the responsibility to govern. Those who act on this truth will not just accelerate adoption; they will lead their industries into a future where AI is both transformative and trustworthy.
