GREG STEEL - Senior Chief Data Officer and Technology Leader
- Craig Godfrey
- Feb 25
- 5 min read

Scaling beyond isolated successes
There are interviews that feel transactional, and then there are conversations that feel earned. This one falls firmly into the latter. Greg Steel isn't just an observer; he's spent years embedded in complex, highly regulated enterprises where data is a structural dependency, the very bedrock of operations.
Our discussion sidestepped the latest AI hype cycle. Instead, we drilled into the persistent failures: Why do heavily invested organisations still struggle to scale? Why does proof-of-concept success rarely translate into durable enterprise value? And why are foundational elements like semantics, architecture, and governance reasserting their critical importance?
I sat down with Greg to unpack these challenges. What emerged was a rare, unvarnished look at what it truly takes to drive change and lead teams within the multi-layered structures of highly regulated organisations.
Q: What most consistently prevents large organisations from scaling data and AI beyond isolated successes, even after significant investment?
I have seen this play out repeatedly in large organisations. Individual data and AI initiatives succeed because they have strong sponsorship, a clear problem to solve, and a focused team. The problem arises when people assume those successes will somehow add up to enterprise-wide value. They rarely do, because incentive structures and operating practices have not been adjusted. Teams are still set up, funded, and rewarded to deliver for their own area, not to build things others can safely reuse.
This is often misread as an argument for greater centralisation. It is not. The issue is not the use of different tools or software solutions, but allowing teams to redefine data meaning, controls, and data contracts independently. When that happens, local success quickly becomes enterprise fragility. AI accelerates this by surfacing inconsistencies far faster than traditional reporting ever did.
What scales in practice is disciplined decentralisation: clear, shared foundations and architectural guardrails, combined with local freedom to execute and innovate above them.
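To make the idea of a shared data contract concrete, here is a minimal sketch of the pattern Greg describes: a producing team publishes field names, types, and controls, and consumers validate against them rather than redefining meaning locally. The contract fields and rules here are hypothetical illustrations, not any specific standard.

```python
# Illustrative data contract: the producing team publishes field names,
# types, and simple controls; consuming teams validate against it.
# Field names and rules are hypothetical examples.
CUSTOMER_CONTRACT = {
    "customer_id": {"type": str, "required": True},
    "country_code": {"type": str, "required": True, "max_len": 2},
    "exposure_gbp": {"type": float, "required": False},
}

def validate(record: dict, contract: dict) -> list[str]:
    """Return a list of contract violations for a single record."""
    errors = []
    for field, rules in contract.items():
        if field not in record:
            if rules.get("required"):
                errors.append(f"missing required field: {field}")
            continue
        value = record[field]
        if not isinstance(value, rules["type"]):
            errors.append(f"{field}: expected {rules['type'].__name__}")
        elif "max_len" in rules and len(str(value)) > rules["max_len"]:
            errors.append(f"{field}: exceeds max length {rules['max_len']}")
    return errors
```

The point is not the validation logic itself but who owns it: the contract is defined once, centrally versioned, and every team checks against the same rules rather than re-deriving them.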
Data provisioning vs analytics delivery
Q: Why do enterprises often struggle when data provisioning and analytics delivery are treated as the same problem, and what breaks as a result?
Organisations often bundle these together believing it reduces cost and allows rapid delivery, and in the short term it works. Dashboards get built, insights flow, and there is a sense of progress.
Over time, however, data is shaped to answer specific questions rather than to represent something stable about the business. Every outcome requires dedicated data resources. Cost rises and bottlenecks appear. Controls are added late and inconsistently.
As soon as other parts of the organisation, such as Risk, Finance, or Operations, try to use and ultimately rely on that data, the cracks appear. Separating enterprise data provisioning does not slow teams down; it protects both speed and trust. Generative AI simply makes weak foundations impossible to ignore.
Shared semantic models
Q: How important are shared semantic models in building trust and reuse across risk, finance, operations, and commercial teams?
This sounds abstract until you see the consequences up close. I have spent time in organisations where everyone thinks they are talking about the same thing, until it becomes clear they are not. Even then, there is a reluctance to change due to the uncertain costs of doing so.
Shared semantic models create a common language. They do not constrain how teams work, but they do stop divergence at the level of meaning. That clarity removes friction, enables reuse, and becomes essential for Generative AI, which, at a minimum, we know does not cope well with ambiguity.
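A shared semantic model can be sketched very simply: one canonical definition per business term, with each team's local field names mapped onto it. The terms, team names, and aliases below are hypothetical illustrations of the pattern, not a real model.

```python
# Minimal sketch of a shared semantic layer: one canonical definition
# per business term, plus the local names each team uses for it.
# All terms and team names are hypothetical.
SEMANTIC_MODEL = {
    "active_customer": {
        "definition": "Customer with a funded account in the last 90 days",
        "aliases": {"finance": "live_client", "risk": "open_counterparty"},
    },
}

def resolve(local_name: str, team: str):
    """Map a team's local field name back to the canonical term."""
    for canonical, entry in SEMANTIC_MODEL.items():
        if local_name == canonical or entry["aliases"].get(team) == local_name:
            return canonical
    return None
```

Finance's `live_client` and Risk's `open_counterparty` both resolve to the same canonical `active_customer`, so a report, a reconciliation, or an LLM prompt built on either name is grounded in one definition.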
There is a raft of new and emerging data technologies suddenly focused on semantics, underlining its criticality. I have been particularly impressed by a platform from Stratio, which treats semantic models as a core architectural layer rather than as documentation or metadata sitting off to the side. They were delivering this to tier-one financial services organisations before Generative AI was around. What they have achieved with it at scale is notable.
When meaning is built into the platform itself, reuse and governance stop being optional behaviours and start becoming the default.
Durable AI value
Q: Where does AI deliver its most durable value when embedded into core data platforms rather than deployed as standalone use cases?
The AI use cases that scale most effectively are those embedded into data operations rather than isolated decision engines. Improving data quality, reconciliation, monitoring, and audit evidence all strengthen trust in the underlying data.
By contrast, AI used purely to automate decisions often struggles to survive scrutiny because weaknesses in the data foundations are exposed very quickly.
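The data-operations use cases Greg highlights, reconciliation and monitoring, can be illustrated with a small sketch: compare two sources for breaks, and flag anomalous movements statistically before data reaches any downstream decision. The tolerance and z-score threshold are illustrative assumptions.

```python
# Sketch of checks embedded in data operations: reconciliation breaks
# between two sources, and a simple statistical anomaly flag on a
# daily value. Thresholds are illustrative assumptions.
import statistics

def reconcile(source_a: dict, source_b: dict, tolerance: float = 0.01) -> list[str]:
    """Flag keys that differ beyond tolerance, or exist in only one source."""
    breaks = []
    for key in sorted(set(source_a) | set(source_b)):
        if key not in source_a or key not in source_b:
            breaks.append(f"{key}: present in only one source")
        elif abs(source_a[key] - source_b[key]) > tolerance:
            breaks.append(f"{key}: {source_a[key]} vs {source_b[key]}")
    return breaks

def is_anomalous(history: list[float], today: float, z_threshold: float = 3.0) -> bool:
    """Flag today's value if it sits more than z_threshold stdevs from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(today - mean) / stdev > z_threshold
```

Checks like these generate the audit evidence the answer above describes: every break and every anomaly is logged at the point the data moves, not reconstructed after the fact.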
Why AI stalls at proof-of-concept
Q: Many AI initiatives stall at the proof-of-concept stage. What distinguishes the few that scale from the many that don’t?
Most proofs-of-concept are designed to demonstrate that something is possible, not that it is sustainable. Speed and technical performance are prioritised, while ownership, lineage, and explainability are deferred.
When audit or regulatory scrutiny arrives, those gaps become obvious. The initiatives that scale design for governance from day one and embed models into platforms rather than building one-off solutions.
Buy vs build
Q: How should senior leaders approach buy-versus-build decisions for data and AI platforms when the pace of AI innovation is accelerating so quickly?
Buy versus build is often framed as a technology or procurement debate. In practice, it is a leadership question about where you want your best people spending their time.
I have seen teams spend years building platforms only to find the market or the business itself has moved on. There is often a lot of concern around losing control when you buy. However, control comes from architecture, shared meaning, and integration, not from owning every component. AI makes this mismatch even more visible.
This is also where external platforms can make a lot of sense when they are built around clear architectural principles. A platform like Stratio provides a strong enterprise data foundation, while more specialist vendors such as eXate focus deeply on specific problems like data privacy controls within the network. The problems these capabilities address sit at the heart of many enterprise data challenges. Both are examples of buying capabilities that would be extremely expensive and slow to recreate internally, without giving up architectural control. Business leaders must ensure the right choices are being made.
Metadata and governance
Q: Metadata and governance are perennial priorities yet often fail to deliver sustained value. What is missing in how organisations approach them today?
Metadata and governance are widely acknowledged as important, but often quietly ignored. The reason is that manual curation does not scale and quickly becomes outdated.
They deliver value when automated and embedded into day-to-day delivery, so they stay current and actionable rather than becoming a compliance exercise.
Privacy is a good example of this. Solutions like eXate work precisely because privacy controls are built directly into data flows within the network, rather than being enforced afterwards. That kind of built-in control scales far better than policies or manual checks, and it is not something most organisations should try to engineer from scratch.
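The in-flow pattern can be sketched generically: a masking policy applied at the point of access, inside the data flow, rather than in an after-the-fact review. To be clear, this is an illustrative sketch of the general pattern only, not eXate's actual API or implementation; field names and roles are hypothetical.

```python
# Illustrative sketch only: a privacy policy enforced inside the data
# flow, at the point of access. This is a generic pattern, NOT eXate's
# actual API or implementation. Fields and roles are hypothetical.
MASKING_POLICY = {
    # field -> roles allowed to see the clear value
    "email": {"compliance"},
    "salary": {"compliance", "finance"},
}

def apply_policy(record: dict, role: str) -> dict:
    """Return a copy of the record with fields masked for this role."""
    out = {}
    for field, value in record.items():
        allowed = MASKING_POLICY.get(field)
        if allowed is not None and role not in allowed:
            out[field] = "***MASKED***"
        else:
            out[field] = value
    return out
```

Because the policy runs wherever the data moves, an analyst's pipeline and a compliance officer's query hit the same enforcement point, which is what makes the control scale in a way manual checks cannot.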
Future leadership
Q: Looking ahead, what capabilities beyond pure technical depth will define effective data and AI leadership over the next decade?
Future data and AI leaders will be defined less by technical depth and more by architectural and organisational fluency.
They will need to understand how data, AI, risk, and incentives interact, and be able to centralise what must be shared while decentralising what can vary. That balance will matter more than any individual technology choice.

