We’re not afraid of AI. We’re already using it—to write smarter emails, summarize meetings, streamline tasks, and more. What excites us isn’t just what AI can do, but how naturally it fits into the way we think and work.
I’m part of the generation that’s grown up with technology in our pockets and generative AI tools at our fingertips. We’re curious. We’re eager. We’re experimenting. Yet in many workplaces, we’re also cautious. Not because we don’t want to embrace AI, but because we don’t know if we’re allowed to.
To the executives reading this: if you want to build AI-fluent, future-ready teams, you need more than guidelines and tools. You need cultures where learning is encouraged, mistakes are safe, and curiosity is celebrated.
Gen Z is ready for AI. The real question is, is your workplace ready for us?
We don’t need convincing. We’re already using AI.
According to Deloitte, the majority of Gen Z and millennial AI users believe it saves time, improves the quality of their work, and enhances work-life balance. That reflects growing comfort and a strong willingness to use AI in meaningful ways.
A 2025 McKinsey survey further supports this, concluding that employees are more receptive to AI than leaders realize, with Gen Z frequently using AI and expressing eagerness to build skills. But despite our enthusiasm, many of us still use AI quietly. Why?
Because we're not always sure how our experimentation will be perceived.
In some environments, it’s unclear whether trying new tools is encouraged or considered risky. That ambiguity leads to hesitation. Instead of asking questions or sharing what we’ve discovered, we often keep our AI use to ourselves.
We don’t need permission to be curious. But we do need clarity and encouragement to explore that curiosity openly and confidently.
Gallup found that more than half of Gen Z employees say their employer lacks a clear AI policy. Of those who do have one, only a fraction feel confident it’s clear. The result? We hesitate—not because we don’t want to use AI, but because we’re unsure what’s appropriate, what’s encouraged, and what might be seen as overstepping.
A new report from MIT Media Lab's NANDA Initiative, "The GenAI Divide: State of AI in Business 2025," shows just how big the gap has become. Only 40% of companies report buying workplace AI subscriptions, yet employees at 90% of those same companies say they are already using personal tools like ChatGPT to get their work done, often multiple times a day, even as official company pilots remain stalled.
That reality reinforces what McKinsey found. Employees are already leaning on AI in their day-to-day work, often more than leaders realize. Thirteen percent of employees say they use AI for over 30% of their work, while only 4% of executives think that is happening.
The disconnect is real, and it does more than create confusion. It undermines confidence.
Gen Z grew up experimenting with new technology, but in the workplace, that instinct is muted when using AI feels risky. What we need is not just training or tools. We need cultures where it is safe to explore, to ask questions, and to share what we are learning.
Training matters, but it isn’t the main bottleneck. According to Deloitte, 51% of Gen Z professionals have received some form of AI training. Yet most still say they don’t feel confident applying those skills in high-visibility projects without clear manager buy-in.
When employees aren’t confident that using AI is culturally safe, they don’t share what’s working. They don’t ask for feedback. Instead, they experiment quietly, resulting in fragmented insights, lost opportunities for collaboration, and limited organizational learning.
McKinsey reports that only 1% of companies today consider themselves “AI mature.” A major reason is lack of cross-functional alignment and a culture that embraces trial and error.
The 2025 Edelman Trust Barometer found a "21-point gap in comfort with AI adoption between employees with low grievance levels (50%) and those with high grievance levels (29%)." As trust in organizations erodes, so too does confidence in the tools those organizations promote.
For Gen Z, this is especially challenging. Research from Jonathan H. Westover, PhD, shows that while our generation feels generally comfortable with AI, we often carry heightened anxiety about job security and the consequences of making mistakes. We want to be bold, but we’re still early in our careers; unclear boundaries make boldness feel risky.
We’re not asking for AI experts in every meeting. What we need are leaders who foster cultures where experimentation is safe, where failing fast isn’t punished, and where trying something new isn’t seen as overstepping.
Because at the end of the day, the biggest blocker to AI engagement isn’t a lack of knowledge. It’s fear of failure.
Companies that embrace “fail-cheap, learn-fast” mindsets see stronger adoption and deeper experimentation with AI tools. Reporting in Time suggests that experimental cultures outperform their peers on both innovation and resilience.
This only works when failure is transparent and normalized.
When employees fear consequences for honest missteps, they retreat into silence, and that silence cuts off progress before it can begin.
For early-career employees, especially, silence around failure can create self-doubt and stifle creativity. If we don’t talk about what didn’t work, we miss the chance to discover what might. This is where psychological safety becomes essential.
A decade ago, Google’s Project Aristotle famously found that psychological safety is the single most important driver of team success. Boston Consulting Group's research found that the "positive effects of psychological safety are particularly pronounced among women, people of color, LGBTQ+ employees, people with disabilities, and people from economically disadvantaged backgrounds."
Psychological safety is what empowers people to speak up, share ideas, and take intelligent risks. And it must be actively cultivated (not assumed).
“I think the best thing you can say about failure is if you have a culture that permits failure, that tolerates failure, it means you’re stretching, you’re pushing, you’re trying to innovate, you’re trying to do things that are difficult.”
Adi Ignatius, Editor in Chief, Harvard Business Review
Creating a high-trust culture isn’t a “nice to have” for AI readiness. In my opinion, it’s the foundation.
If leaders want AI to scale across their organizations, they must lead by example—narrating their own experimentation, owning their own learning curves, and creating space for others to do the same. That means shifting the conversation from “compliance” to “curiosity,” from “don’t mess up” to “let’s see what we learn.” And that’s not just true for Gen Z.
This is your opportunity, not just to empower a generation, but to unlock the full value of AI across your workforce. Here’s how:
Talk openly about where you’re experimenting with AI. Share what’s working, and what isn’t. When leaders model exploration, it gives the rest of us permission to do the same.
Publish clear, accessible guidelines about what’s encouraged, what’s off-limits, and where employees can go to learn. Remove ambiguity so we can focus on discovery, not worry.
Normalize mistakes. Recognize when someone tries something new, even if it doesn’t go perfectly. That’s how innovation compounds.
Gen Z doesn’t need a perfect playbook. We need leaders who cheer us on, show us how to fail forward, and remind us that curiosity is an asset, not a liability.
This isn’t just about Gen Z. It’s about shaping a workplace where the future of work thrives—where trust, experimentation, and innovation go hand in hand.
The AI generation is ready. The question is, are you ready for us?
2025 Brand Intern, UiPath