Q&A with ROSANNE WERNER - XCELERATEIQ
- Craig Godfrey
- Nov 15, 2025
- 9 min read

We sit down with Rosanne Werner, CEO of XcelerateIQ, to congratulate her as the winner of our People and Culture in Data and AI Award.

>>> Intro & Background
Tell us a little bit about yourself, your career, and your current role at XcelerateIQ.
My story spans continents as much as industries. Born in Hong Kong, raised in Australia, and now based in London, I’ve spent my career leading transformation.
My career started in finance, where I qualified as a Chartered Accountant and led strategic finance initiatives in pharmaceuticals, oil and gas, and mining. My work included rolling out enterprise resource planning systems, embedding international reporting standards, designing SOX-compliant controls, and reinforcing governance frameworks in highly regulated environments.
At Coca-Cola Europacific Partners, I was asked to bring the people element into data and AI transformation. The highlight of that time was the “Data in Action” programme, which reached more than 2,000 “Data Catalysts” across functions and geographies. We trained and empowered these employees - embedded in their teams - to champion and influence new habits and mindsets around decision-making. That experience showed me that sustainable transformation isn’t about more tools. It’s about confidence, curiosity, and culture.
After a decade driving change at Coca‑Cola, I launched XcelerateIQ to make a broader impact across industries: building data and AI fluency, embedding data habits, creating safe spaces for experimentation, and building cultures where people are curious, confident, and empowered to turn data and AI into everyday value. We don’t just train people; we create experiences that rewire how they think and feel about data.

>>> Bridging the human–tech divide
You often integrate behavioural science into data/AI change. Which one or two behaviours are the real unlock for adoption at scale, and how do you measure that shift?
The real unlock is less about technical skills and more about human habits. Two behaviours stand out above all others.
The first is the habit of asking better questions of data. It sounds deceptively simple, but it changes the dynamic entirely. When managers and teams make data part of everyday conversations, asking “what does the evidence show?” or “how do we know this?”, it creates a cultural pull where data and AI naturally find their place in decision‑making. That shift comes directly from behavioural science: curiosity triggers our brain’s reward system, releasing dopamine when we uncover answers, which makes people want to repeat the behaviour. Once that becomes habit, adoption spreads far faster than any top‑down mandate could achieve.
The second is building confidence, not just competence. Neuroscience shows us that when people feel psychologically safe, they’re more likely to take risks, try new tools, and form new neural pathways that embed those behaviours. Data programmes often focus only on literacy: teaching concepts, dashboards, and tools. But if people don’t feel confident, they won’t apply any of it. We design “confidence-first learning” experiences, where small wins with data are celebrated and reinforced, so people associate data with success rather than fear.
As for measurement: it’s easy to count training session attendance and certificates issued, but how do you know the learning is being put into action? The real signals are behavioural. Are teams using data in meetings? Has the language shifted? Are people asking for insights before committing to decisions? Do managers model the behaviour by showing how AI influenced their choices? These are observable behaviours that tell us adoption is real.
It’s those micro-shifts, better questions and genuine confidence, that unlock adoption at scale. Once they take root, the technology stops being “new” and simply becomes part of how work gets done.
>>> Strategy to Value
When you enter an organisation, how do you translate “AI strategy” into the first three business outcomes with measurable ROI—and what’s your favourite KPI for early traction?
The key is to turn ideas into steady action fast. When I walk into an organisation, my first question isn’t “What’s your AI strategy?”, it’s “Where’s the friction in your business right now?” If we can identify the pain points that matter to both people and performance, that’s where AI can deliver early visible wins that build momentum and trust.
Those first wins must be confidently owned and trusted by the people using them. If employees don’t trust the insights or understand how AI adds value, you’ll lose that early traction. The focus must be on use cases that are transparent and people-focused: solving everyday frustrations, saving time, or improving accuracy, because that’s where confidence grows fastest.
From there, I anchor outcomes in three key areas:
>>> Productivity and efficiency: cutting repetitive manual work or accelerating decision cycles.
>>> Accuracy and risk reduction: improving forecast precision, compliance, or error rates.
>>> Customer and employee experience: where AI makes life easier and satisfaction rises. Happier customers spend more; engaged employees perform better.
Each one ties to a measurable result such as hours saved, error rates reduced, cycle times shortened, satisfaction scores improved, or incremental revenue captured. Over time, those metrics compound into financial ROI, but the early signal of success is far simpler: when people start saying things like “This saves me so much time,” “Now we can make decisions faster,” or “I actually trust the numbers.” That’s when you know the strategy is landing where it matters: inside everyday work.
My favourite early KPI is time‑to‑decision. It’s a telling indicator that shows how quickly people can move from insight to action. When teams start making decisions faster and feel confident in the outcome, it’s a clear sign the technology has become part of their day-to-day roles. That’s when you know the investment is paying off: AI is no longer a project on the side; it’s integrated into how the business operates.
>>> Playbook from large-scale programmes
From your experience leading mindset and enablement programmes, what repeatable tactics—champions, micro-learning, incentives—actually moved the needle, and what would you do differently now?
Large‑scale mindset and enablement transformation isn’t a one‑off campaign; it’s a continuous cycle of learning, reinforcement, and shared success stories that make progress feel real.
First, design learning experiences that work with the brain rather than as tick-box exercises. People remember best when learning is delivered in small, repeated, and engaging bursts, rather than long, one‑time events.
Our approach draws on neuroscience techniques proven to make learning stick:
>>> Micro‑learning – short, focused sessions that fit naturally into the flow of work.
>>> Spaced repetition – revisiting ideas at intervals to strengthen neural connections and counter the forgetting curve.
>>> Retrieval practice – helping people recall and apply what they’ve learnt through questioning, discussion, or quick scenario challenges.
We pair these with gamification, such as badges, team challenges, and leaderboards, to activate the brain’s reward system and make learning enjoyable. Each small win releases dopamine, reinforcing the habit of coming back for more.
Next, change takes hold when people learn by doing and learn together. Practical application turns theory into habit, while collaboration builds accountability. Teams work on real business scenarios, share results openly, and learn from each other’s experiments. Within this, ‘Data Catalysts’ - trusted peers who model the desired behaviours - keep this momentum alive. They act as local advocates, translating ideas into context and supporting colleagues as they build confidence. When someone sees a peer succeed, progress feels achievable rather than intimidating.
Behavioural change only lasts in a supportive environment. That’s where leadership advocacy, consistent communication, and community collaboration come in. Leaders set the tone by showing curiosity, asking data‑informed questions, and recognising teams who do the same. Multi‑channel communication, such as short stories, quick wins, and spotlight features, keeps the conversation visible and relatable. Communities of practice then sustain momentum long after training ends, creating spaces to share challenges, ideas, and success stories.
If I were to do things differently, I’d bring the data and technical teams even deeper into the transformation journey from the very start. Too often, cultural and mindset initiatives are aimed at the business side, while data and tech teams stay in the background as enablers. But they’re just as much a part of the change.
When engineers, data scientists, and architects participate in mindset and behavioural programmes alongside frontline business teams, something meaningful happens: a shared language forms. They stay closer to the frontline challenges, see how their solutions are used, and develop a stronger sense of business ownership. It also keeps them accountable, not just for delivery, but for outcomes that truly matter to the people using their products.
This approach turns data and technology functions into true partners in the transformation, rather than a support service on the sidelines. It ensures that AI and data initiatives are built intentionally to address real business problems, rather than theoretical potential.
>>> Governance without slowdown
What lightweight guardrails (policy, tooling, skills) let teams experiment fast while staying safe—especially with generative AI and “shadow AI” usage?
Good governance shouldn’t feel like a handbrake on innovation. It should act more like lane markers that keep you moving in the right direction. The goal is to give people room to explore and be creative while ensuring the organisation stays safe, compliant, and ethical.
The first step is clarity, not complexity. Most employees don’t ignore governance out of defiance; they ignore it because the rules are vague, hidden, or filled with jargon. We design short, plain‑English “rules of play” that outline what’s acceptable, what’s risky, and where to go for support. A simple one‑page guide or short video usually works better than a forty‑page policy no one reads. The key is making compliance cues obvious: visible, simple reminders that guide people at the moment they need them.
Next comes tooling that guides, not polices. Approved, easy‑to‑access AI sandboxes give teams a safe environment to test ideas without fear of crossing a line. Built‑in usage monitoring, watermarking, or data‑classification prompts can nudge ethical behaviour automatically. Most of the time, people just need gentle reinforcement at the point of action, not a compliance email after the fact.
Equally important are skills and awareness. We run short “Tech Talk” sessions that teach teams how to question AI outputs, protect data, and recognise bias. These ‘learn together’ sessions give people the confidence to explore responsibly and learn from each other. When employees understand both the potential and the pitfalls, they self‑govern far more effectively.
Finally, leadership needs to set the tone. Executives should use generative AI openly, talk about how they fact‑check outputs, and model ethical experimentation. Visible role‑modelling builds trust faster than any policy.
The balance is simple: keep governance visible, human, and adaptive. Give people freedom within a clear framework, link every rule back to its purpose, and educate rather than restrict. When teams feel trusted and equipped, they move faster and safer than any locked‑down environment ever allows.
>>> Talent & org design
From a talent and organisation design lens, how can companies connect their data and AI strategy with their talent model?
Many organisations still treat their data and AI strategy as a technology initiative, when in reality it’s a people transformation. You can’t change how decisions are made without changing how people are developed, measured, and rewarded.
Organisations need to clearly define what data and AI fluency looks like across functions. Not everyone must build models, but everyone should feel confident questioning data and using insights. These expectations should be built into job profiles, development plans, and performance reviews so they’re part of how success is recognised.
Rewarding the right behaviours matters just as much as building the right skills. When employees use data to make better-informed decisions, simplify work, or identify new opportunities, that behaviour should be acknowledged and celebrated. Recognition reinforces value and sets the tone for the attitudes and behaviours that ultimately shape the organisation’s culture.
Role clarity is equally important. It’s common to see responsibility for data quality, ethical use, or model outcomes sit vaguely “with the data team.” But in reality, every role, from analyst to executive, has a part to play in how data and AI are created, interpreted, and applied. Embedding expectations into job profiles and performance criteria removes confusion and eliminates “handover culture.” Instead of operating in silos and focusing on delivery, teams collaborate on shared outcomes and overall impact.
>>> Industry Trends & Innovation
What AI trends do you expect to materially change enterprise operating models in the next 12–18 months—and which popular trend do you think is over-hyped?
From my perspective, the most important shift ahead will come from re‑engineering the human operating model around AI, not just the systems that enable it.
Organisations have invested heavily in building digital capability, but far fewer have rebuilt how their people, culture, and structures work with AI. The businesses that will set themselves apart are those that evolve their ways of thinking as quickly as the technology itself. That means redefining roles, expectations, and learning so employees understand what it means to work in partnership with intelligent systems, where judgement, creativity, and ethical reasoning remain human strengths and AI handles the heavy lift of scale and speed.
We’re seeing a surge in companies designing hybrid workflows where people focus on questions, context, and relationships, while AI deals with pattern recognition and prediction. This new model changes everything from job design to leadership capability. Managers will have to move from directing work to designing problem‑solving environments, teaching teams how to challenge, verify, and refine AI output rather than execute tasks.
The other major part of the human operating model is trust. AI adoption only scales when people feel confident using it. Employees need transparency about how AI affects their roles, what data is being used, and how decisions are made. That psychological safety builds stronger adoption than any mandate.
What’s over‑hyped, in my view, is the idea that a next‑generation tool alone can transform how a business operates. Technology can enable change, but without parallel investment in people, through re‑skilling and upskilling, communication, ethics, and leadership alignment, AI will simply expose existing weaknesses faster. It won’t fix poor data habits, fragmented workflows, or a lack of trust in decision‑making; it will magnify them.
The real opportunity lies in reshaping the human system around AI. Companies that get this balance right will use AI to unlock the best of human potential, not replace it. Their people will feel equipped, informed, and trusted. Their culture will value curiosity, experimentation, and responsible innovation.
Most importantly, they’ll build organisations that can continually adapt, where learning never stops and roles evolve as fast as technology does. They train leaders to spot new opportunities, build teams that can pivot quickly, and create systems that reward flexibility and learning over fixed expertise. In doing so, they futureproof their workforce by creating people and cultures capable of shaping the future, not just surviving it.





