Stanton Chase
Governance, Leadership, and Culture in the AI Era: Insights from the C-Suite

February 2026

Summary:

This article presents findings from a CxO Roundtable co-facilitated by Victor Filamor, Partner at Stanton Chase, and Egbert Schram, CEO of The Culture Factor Group. Senior executives from multiple industries examined how organizational culture determines the success or failure of AI adoption. Key findings: 83% of leaders believe psychological safety improves AI outcomes, yet fewer than half rate their own organizations highly on it, and only 39% of organizations report measurable financial impact from AI. 

Most organizations are pursuing AI adoption on cultural foundations that are not yet fully stable. 

That uncomfortable truth surfaced repeatedly during a CxO Roundtable held in late 2025, facilitated by Victor Filamor, Regional Sector Leader for Consumer Products and Services in APAC and Partner at Stanton Chase, together with Egbert Schram, CEO of The Culture Factor Group. The conversation moved beyond technical readiness into the messier, more human territory of cultural traits, behavioral change, and the widening gap between intention and execution. 

The participants represented a cross-section of industries and geographies: luxury goods, consumer retail, logistics, technology, and financial services. Each was asked to reflect on five questions, all variations on the issue boardrooms across the world are wrestling with: how does organizational culture shape an enterprise’s ability to extract real value from artificial intelligence? 

Which Cultural Traits Speed Up or Slow Down AI Progress?

When asked which organizational traits most powerfully accelerate or anchor AI initiatives, the executives pointed to a familiar set of forces. Communication styles surfaced repeatedly. One leader whose company is expanding internationally observed that “company culture emphasizes speed, without much planning of the deliverable, often using chat platforms to achieve transparency.” In other words, the prevailing work style prizes rapid execution over detailed documentation, with employees relying on instant messaging to keep everyone informed in real time. The result is openness, but often at the expense of structure. In this context, AI has an obvious role: “I think AI will have a great part to play here by providing more detail than what others are using at the moment.” 

Another executive framed competitiveness as the strongest accelerant, while a third pointed to agility and willingness to experiment: “Our agility and willingness to experiment, rooted in our playful brand ethos, is the greatest accelerant for rapid AI prototyping in areas like design and marketing.” The anchors, predictably, are the inverse: “a lingering hierarchical communication style in some departments, which can slow the open feedback loops needed to refine AI outputs.” 

For global luxury companies, the picture is more complicated. One executive noted that while access to rich consumer data, an entrepreneurial mindset, and a consumer-centric orientation all act as accelerants, the organizational priority placed on protecting individual brand equities “can slow experimentation and scaling of AI solutions.” The very cultural trait that safeguards the brand becomes a friction point when trying to move quickly with new technology. 

Several leaders stressed the importance of psychological safety: the freedom to question, to try, to make mistakes and learn from them without fear of retribution. One described a culture where “everyone has their own place. We are not equal, but each one has a role in the performance of the organization and its success, like an orchestra.” This, the executive argued, is the foundation for any technology adoption that changes how an organization operates. 

A joint study from MIT Technology Review and Infosys surveying 500 business leaders found that 83% believe a company culture that prioritizes psychological safety measurably improves the success of AI initiatives. Yet fewer than half rate their organization’s current level of psychological safety as “very high,” suggesting many enterprises are building on shaky ground. 

How Intentionally Are You Designing Culture to Treat AI as a Collaborator?

If culture is the operating system of an organization, how intentional are companies about programming it to treat AI as a partner rather than a utility? The answers suggest that most organizations are still in the early stages. 

One executive described launching “AI Co-Creation Sprints,” where design and marketing teams are incentivized to brainstorm with AI tools, not just use them for execution. “This changes the mindset from using AI for automation to partnering with it for ideation.” Another is working to simplify decision processes and embed AI into cross-functional decision-making so that “it becomes a trusted partner in creativity, consumer insights, and operational excellence.” 

Other leaders took a more measured view of where their organizations stand. “We are not yet advanced enough to bring in AI as a cultural integrator,” noted one, while another observed that their organization still treats AI primarily as “a tool” for marketing content creation and small coding tasks. A third described introducing “a basic set of general AI guidelines” from headquarters, a pragmatic starting point that acknowledges the reality of varying adoption speeds across regions. 

The most practical answer came from a banking executive: “It’s in using AI in our daily work, even in simple tasks like compiling emails.” Small habits, repeated daily, may do more to normalize AI collaboration than any top-down initiative. 

This instinct aligns with what McKinsey found when studying how organizations can overcome AI adoption challenges: the middle layer of most organizations, the managers and senior practitioners who set the cultural tone, is often the most resistant to change. They’re busy, their current methods work reasonably well, and the learning curve for new technologies can feel daunting. The organizations that succeed don’t just reward AI usage; they reward learning, competency-building, and helping others along the way. 

What Human Value Are You Actively Protecting?

As AI takes on more cognitive work, what becomes of the distinctively human contribution? This question cuts to the heart of leadership anxiety about the technology. 

The executives offered a range of answers. One luxury goods leader described the C-Suite actively reframing their teams’ core human value as “creative curation and emotional resonance: the taste and empathy to edit AI-generated designs and campaigns to perfectly capture our brand’s playful, relatable spark.” Leadership workshops now celebrate this “human-in-the-loop judgment.” 

Others were more direct: “The main thing that our people can deliver is creating trust through relationships with our customers and prospects. No AI can replace that.” This executive uses AI to draft emails or learn about prospect organizations, but insisted that “eventually, it is all about human relationships.” 

A senior leader at a global company took a longer view, noting that relationship-based C-Suite management styles “are still achieved socially through connections, alignment, and reaffirmation of those connections on a constant basis.” In many business cultures, the human work of building and maintaining trust remains firmly outside the reach of algorithms. 

This emphasis on human skills is not sentimental. A 2025 Workday study of global employees found that 83% believe AI will make uniquely human skills even more valuable. The skills they identified as least likely to be replaced, such as ethical decision-making, relationship building, and emotional intelligence, map closely to what the executives in the roundtable described: judgment, empathy, creativity, and the ability to inspire trust. 

Several leaders mentioned storytelling and entrepreneurial spirit as qualities that technology amplifies rather than replaces. One leadership team is actively “encouraging creative ambiguity and out-of-the-box culture, alongside the blending of mature practices with new ideas.” Another is “positioning AI as a tool and enabler that makes life easier, not a replacer.” 

How Are You Measuring Cultural Success Beyond Technical Metrics?

How do organizations know if their AI initiatives are working in cultural terms? The executives acknowledged this remains largely uncharted territory, which is understandable given how new this field is. 

One leader looks for “a visible increase in cross-departmental play-testing of AI prototypes and positive sentiment in customer feedback regarding hyper-relevant personalization.” The behavioral change to watch: “teams voluntarily sharing their AI-augmented workflows as best practices.” 

Several others noted that they are still developing their approach. “For employees, we observe how AI is changing collaboration patterns and whether teams are more willing to experiment and use AI as a partner in their daily work.” On the customer side, success is measured through the number of use cases, quality of plans, and the extent to which AI-driven insights translate into sales or loyalty. 

More pragmatic executives are measuring speed and resource reduction. “In our current cost pressure environment, we look at two pure targets: increase speed and reduce resources through the use of AI.” A banking executive described using “post-AI engagement forums to discuss pros and cons” as a way to gauge adoption and surface issues. Another organization is test-bedding AI across different functions, with HR teams in each country tasked with completing an AI-related project for 2026. 

The challenge of measurement is not unique to these executives. McKinsey’s 2025 State of AI report, based on nearly 2,000 respondents across 105 countries, found that only 39% of organizations attribute any measurable EBIT impact to AI use, and among those, most report less than 5% of their organization’s EBIT is attributable to AI. The gap between AI adoption and AI impact has never been wider. Nearly two-thirds of respondents say their organizations have not yet begun scaling AI across the enterprise; they’re still experimenting or running pilots without full integration into workflows. In other words, these roundtable participants are wrestling with exactly the same questions as their peers around the world. 

Where Is the Gap Between Stated Intentions and Day-to-Day Reality?

Perhaps the most telling question asked executives to identify where the widest gap lies between their organization’s stated AI intentions and its day-to-day cultural reality. Their willingness to answer honestly speaks to the self-awareness that effective leadership requires. 

“The largest gap is between our stated value of fearless innovation and a day-to-day cultural reality where some teams still hesitate to trust and validate AI-assisted outputs,” said one leader. Another described the gap between “our stated intention to treat AI as a true collaborator in creativity and consumer engagement, and the day-to-day reality where adoption remains cautious and slow.” 

A luxury goods executive noted that many teams still view AI “primarily as a technical tool rather than a partner, and they don’t always know how to best use it.” Leadership is addressing this by publicizing case studies, building a culture of experimentation, and investing in training that builds confidence. 

For global organizations, the challenge is coordinating uneven progress. “The main gap is between our vision of inclusive AI adoption and the varying rollout speeds across teams.” Regional teams are closing it “by proactively addressing the topic across different hierarchies and divisions to motivate everyone to share ideas.” 

One executive summed up the broader challenge: “I think it is mainly about the ability to change processes and let go of the old way of doing things. Every organization is reluctant to change, especially due to cultural fixation.” 

This kind of organizational inertia is not a shortcoming unique to these companies. A McKinsey analysis of over 3,600 employees and 238 C-level executives found that employees are ready for AI, but the biggest barrier to success is leadership. C-suite leaders are more than twice as likely to blame employee readiness for adoption failures as they are to examine their own role. Meanwhile, employees report wanting more formal training, access to AI tools, and clearer communication about how AI will and won’t affect their jobs. The executives in this roundtable, by honestly naming the gaps, are already taking the first step toward closing them. 

Where Leaders Go from Here

What comes through in these conversations is a leadership cohort grappling with a challenge that has no established roadmap. AI is being used to produce documents, draft emails, generate marketing content, and achieve operational efficiencies. But the harder work of cultural integration, of making AI a collaborator in judgment and creativity, remains ahead. This is not a failure; it is simply where most organizations are right now. 

The candor these executives showed is itself a sign of healthy leadership. They are asking the right questions, acknowledging the gaps between aspiration and reality, and experimenting with solutions even when the path forward is unclear. That willingness to be honest about what isn’t working yet is exactly what psychological safety looks like in practice. 

The executives who seem furthest along share a common trait: they are actively modeling imperfect AI use, celebrating failed experiments, and creating forums for open discussion. They understand that the technology is only as good as the culture that surrounds it. 

McKinsey’s research on AI high performers, the roughly 6% of organizations that report both broad AI adoption and measurable financial impact, points to what separates them from the rest. They are three times more likely to say that senior leaders demonstrate clear ownership and commitment to AI initiatives, including role-modeling the use of AI themselves. They don’t just deploy tools; they redesign workflows, set growth objectives alongside efficiency targets, and create processes to determine when AI outputs need human validation. 

For the executives in this roundtable, and for leaders facing similar questions in their own organizations, the path forward involves three things: first, creating the psychological safety that allows people to experiment and fail without fear; second, being visible and honest about their own AI learning journey; and third, measuring not just efficiency gains but the behavioral changes that indicate AI is becoming part of how people work day to day. 

As one roundtable participant put it, describing the organizational reluctance to change: “AI is just one example of a need to change.” The technology may be new, but the cultural work is as old as management itself. And these leaders, by their willingness to engage with these questions openly, are already further along than they may realize. 

About the Authors

Egbert Schram is the Group CEO of The Culture Factor Group and a global authority on cultural analytics. Originally from the Netherlands and now based in Finland, he leads a global organization active in more than sixty countries. His unconventional path, from aspiring Dutch Marine to studying forest management and environmental psychology, gave him what he describes as a “forester’s pragmatism”: a way of thinking that values ecosystems over silos, long-term resilience over short-term optimization, and observable behavior over slogans. Egbert uses data to bridge the gap between executive intent and the lived experience of employees, challenging leaders to see how their own behaviors silently reinforce or undermine the systems they seek to change.  

Victor Filamor is a Partner at Stanton Chase Greater China, serving as the Regional Sector Leader for Consumer Products and Services in Asia Pacific. With 25 years of corporate experience across the Asia Pacific region and over 15 years as a retained executive search consultant, Victor has successfully placed numerous senior management and C-suite executives throughout Asia. A certified professional coach specializing in leadership and career transitions, Victor holds an MBA in Marketing from Greenwich University in Hawaii and graduated cum laude with a B.Sc. in Chemistry from the University of the Philippines, where he topped the National Chemistry Licensure Board Examinations.      

About The Culture Factor Group

The Culture Factor Group is a cultural analytics and strategy advisory present in more than 60 countries. Founded in 1985 with the support of Prof. Geert Hofstede, the firm has spent four decades turning academic research into practical tools that help organizations work through cultural complexity, from M&A integration and market entry to diversity, psychological safety, and beyond. Through a global community of certified practitioners, they combine behavioural analytics with hands-on facilitation to build culturally aware and sustainable workforces. 

