
Scaling enterprise AI: lessons in governance and operating models from IBM

Successful implementation and scaling of enterprise AI projects is fundamentally a people and operating model challenge, not just a technology problem.


January 29, 2026

  • Credit: Alexandra Francis

Key takeaways

  • Successful AI implementation at the enterprise level requires balancing timely innovation and experimentation with governance, security, and trust.
  • Successful implementation and scaling of enterprise AI projects is fundamentally a people and operating model challenge, not just a technology problem.
  • IBM's internal "AI license to drive" certification model, which ensures that employees understand data privacy, security, and enterprise integration before building AI agents, lets the enterprise scale AI responsibly.
  • In IBM’s experience, hybrid or “AI fusion” teams that combine business function experts with IT technologists are collapsing traditional handoffs and accelerating value delivery by putting domain knowledge directly into the development process.

The innovation-risk paradox in AI deployment

Every enterprise navigating the AI landscape faces the same question: How do you move fast enough to capture AI’s value without wasting time and money, annoying developers and customers, and introducing potentially catastrophic risk? It’s a paradox that keeps CIOs awake at night.

Matt Lyteson, CIO of Technology Platform Transformation at IBM, is well-acquainted with this challenge. Managing AI deployment for 280,000 employees at a company where AI is core to the business strategy has taught him that enterprise AI isn't primarily a technology problem. It's a people problem. An operating model problem. And, increasingly, a C-suite concern.

"We need to be cautious," Lyteson warns. "A lot of CIOs like myself still have a little bit of anxiety and stress over what happened in the early days of cloud computing, where everyone somehow found a way to get access to a cloud account, and now we're 10, 15, 20 years later, still cleaning some of those things up."

Speed without structure creates technical debt and inefficiencies that clog organizations for decades. But heavy-handed control over who can access which tools smothers innovation.

Beyond traditional IT: Why AI requires a new operating model

The conventional approach to developing enterprise technology—centralized IT teams building solutions for business units—is beginning to dissolve as the scope of AI’s capabilities expands. Simply put, business leaders see what AI can do, and they're not willing to wait for IT to get around to their use case when the AI sandbox is right there.

That puts a new face on a familiar problem: shadow IT, but for the AI era. Employees experiment with widely used tools like ChatGPT and Claude, often plugging in corporate data without considering or fully appreciating the implications. Well-meaning teams build agents that access sensitive systems without proper security reviews. Innovation accelerates, sure, but so does risk exposure.

The skills gap compounds the challenge. IT organizations haven't historically hired people who deeply understand business workflows. "We say, 'Jody, I need you to run this procurement system,'" Lyteson explains. "And maybe you'll synthetically absorb what procurement actually does over a period of time." In contrast, Lyteson says, “Internal IT organizations traditionally have been a little bit different. And especially with the agile transformation that we all went through a few years back, it was really focusing on the engineering and I would say more on the listening skills rather than appreciating how the function operates. That's got to change.”

Meanwhile, business function experts who understand workflows on an intimate level often lack the technical skills to build solutions themselves. The handoff between these groups—business defines requirements, IT builds solutions—becomes a bottleneck that prevents enterprises from moving at speed.

Integrating governance into the tech stack

Most enterprises treat AI governance as a control mechanism, not an enablement framework. They create review boards, define approval processes, and implement compliance checkpoints that turn projects into ordeals. Innovation grinds to a halt, and teams sour on AI tools generally.

IBM wanted to take a different approach: enabling rapid experimentation while maintaining enterprise-grade security, data privacy, and risk management. To make it a reality, they reimagined the entire workflow from idea to production.

"We literally went to a two-week process of doing all this back and forth with the business case to now, in about five or six minutes, you can have an entire environment provisioned on what we call our enterprise AI platform in order to build your thing," Lyteson says. "We've connected all the necessary data privacy, AI ethics reviews with the right information [to] really streamline this process."

It wasn’t about eliminating governance, but embedding it into the platform itself. Instead of a series of review processes that create delays, IBM’s enterprise AI platform automates compliance checks, connects to approved data sources, and provisions secure environments instantly. Governance is less visible red tape and more invisible infrastructure.
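The "governance as invisible infrastructure" idea can be illustrated with a short sketch. This is a hypothetical illustration of the pattern described above, not IBM's actual platform: the data-source allowlist, check names, and provisioning logic are all assumptions for the example. The point is structural: compliance checks run inside the provisioning call itself, so a request either passes and gets an environment in minutes, or fails immediately with a specific reason.

```python
from dataclasses import dataclass

# Illustrative allowlist: in a real platform this would come from the
# data-governance catalog, not a hard-coded set.
APPROVED_DATA_SOURCES = {"hr-anonymized", "sales-aggregates", "public-docs"}

@dataclass
class ProvisionRequest:
    owner: str
    use_case: str
    data_sources: list[str]

def compliance_checks(req: ProvisionRequest) -> list[str]:
    """Run automated policy checks; return a list of violations (empty = pass)."""
    violations = []
    unapproved = set(req.data_sources) - APPROVED_DATA_SOURCES
    if unapproved:
        violations.append(f"unapproved data sources: {sorted(unapproved)}")
    if not req.use_case.strip():
        violations.append("missing use-case description for AI ethics review")
    return violations

def provision(req: ProvisionRequest) -> str:
    """Provision an environment only if every embedded check passes."""
    violations = compliance_checks(req)
    if violations:
        # Governance is the gate, not a separate review meeting.
        raise PermissionError("; ".join(violations))
    return f"env-{req.owner}-{req.use_case.replace(' ', '-')}"
```

A compliant request returns an environment identifier immediately; a request touching unapproved data fails with an actionable message, replacing the two-week back-and-forth with an instant, explainable decision.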

This matters at the board level. When boards and investors ask about AI risk exposure, CIOs need answers. What AI agents are running? What data do they access? How are they secured? A platform approach makes these questions answerable. An ad-hoc approach makes them alarming.
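What makes those board-level questions answerable is an inventory populated at provisioning time. The sketch below is an assumption about how such a registry might look, with illustrative agent names and fields; it shows only that, once every agent is registered, "which agents touch this data?" becomes a query rather than an audit.

```python
# Hypothetical platform-level agent registry, populated automatically when
# an environment is provisioned. All entries and fields are illustrative.
AGENT_REGISTRY = [
    {"name": "invoice-triage", "data_sources": ["sales-aggregates"],
     "auth": "oauth2", "owner": "finance"},
    {"name": "hr-faq-bot", "data_sources": ["hr-anonymized"],
     "auth": "oauth2", "owner": "hr"},
]

def agents_accessing(source: str) -> list[str]:
    """Answer 'which agents touch this data source?' directly from the registry."""
    return [a["name"] for a in AGENT_REGISTRY if source in a["data_sources"]]
```

With an ad-hoc approach, answering the same question means interviewing teams one by one; with a registry, it is one function call.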

The AI license to drive: IBM’s framework for responsible AI scaling

In their effort to balance speed, innovation, and accessibility against the risks, IBM developed a new mechanism for governance: the AI license to drive. The idea is that just as you need a driver’s license to operate a vehicle, you need certification to build and deploy AI agents on enterprise infrastructure.

"We developed what we call an AI license to drive," Lyteson explains. "Understanding that, yes, of course in a technology company…we've got a lot of people that like to play around with tech. But it doesn't make sense that where you align on the organizational chart dictates whether you can do that or not."

The framework certifies that builders working with AI agents understand data privacy principles, information security protocols, and how to connect to backend enterprise systems without causing outages. It's not about limiting who can build; it's about ensuring that everyone builds responsibly.
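The gating logic this implies can be sketched in a few lines. This is a hypothetical rendering of the concept, not IBM's implementation: the module names mirror the three competencies the article lists, and the key design choice is that the deployment check deliberately ignores the builder's org unit.

```python
# Hypothetical "license to drive" gate. Module names are illustrative
# assumptions based on the competencies described in the article.
REQUIRED_MODULES = {"data-privacy", "info-security", "backend-integration"}

def is_certified(completed: set[str]) -> bool:
    """A builder is certified once every required module is complete."""
    return REQUIRED_MODULES <= completed

def can_deploy(builder_org: str, completed: set[str]) -> bool:
    """Org unit is accepted but never consulted: certification,
    not position on the org chart, gates deployment."""
    return is_certified(completed)
```

A certified procurement expert and a certified IT engineer get identical deployment rights; an uncertified builder in either group gets none.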

This solves multiple problems simultaneously. It prevents the headaches that ensue when someone builds a critical agent and then tells IT, "I don't have the skills or resources to maintain this going forward. Can you take it over?" It reduces data leakage risks. It ensures consistent security practices. And, critically, it democratizes AI development beyond traditional IT boundaries.

The license to drive concept recognizes that, as Lyteson says, organizational structure shouldn't dictate capability. A procurement expert who understands the workflow intimately and gets certified should be empowered to build, even if they're not in the IT department. That mindset shift fundamentally changes how enterprises approach AI development.

Implementing AI fusion teams to collapse the value chain

[...]

