Dieter Matzion has been practising FinOps since 2013, which means he watched the discipline get named, institutionalised, certified, and scaled. He has spent the last few years watching it outgrow all of those containers. He has worked inside Google, Netflix, Intuit, and Roku, and the FinOps Foundation counts him among its small cohort of certified ambassadors. That position puts him at the intersection of what the field officially says it is and what senior practitioners actually do. That gap, in Dieter's telling, is the most important thing happening in FinOps right now.
When Dieter is not mapping the boundaries of cloud financial governance, he is finding the limits of cave systems in northern California, rappelling into lava tubes with ropes, carabiners, and a rule he applies to both activities: you go as far as your equipment takes you, and when you run out of rope and cannot see the bottom, you turn around. It is a useful philosophy for a discipline that keeps discovering new terrain. FinOps, Dieter argues, is at exactly that moment: the rope is running out on the original model, and the question is whether organisations are equipped to go further.
Getting the Definition Right: What FinOps Actually Optimises
Before the scope can expand, the foundation has to be solid. And for Dieter, that foundation starts with a definition that most organisations are still getting wrong.
Q: How do you explain FinOps to the different audiences you work with?
It's different when you talk to an engineer versus leadership. But fundamentally, FinOps is a culture shift, where cost becomes another dimension that you want to optimise. FinOps is not about saving every dollar in the cloud. FinOps is about getting the most out of every dollar that you invest.
Q: When you bring FinOps into conversations with finance teams, what complexity typically surfaces?
Finance teams are more spreadsheet-driven, more data-driven, and you quickly learn that accurate data is critical in those conversations. Cloud cost sounds like a simple dollar amount, but there are different types of dollar amounts depending on how you measure it. Is it public pricing? List pricing? Do you have an enterprise discount? The discounts can vary very widely across services, which results in different strategic decisions. Then there are commitments: you tell a cloud provider you'll use a virtual machine for three years, and they extend you roughly 54% off. It's quite substantial. And then there are questions like: if an engineer brings a workload from $5,500 a day down to $850 a day, how do you credit them? Finance teams always have policies around this, and it's different from organisation to organisation.
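The pricing layers Dieter lists can be sketched as a few lines of arithmetic. Only the roughly 54% commitment discount and the $5,500-to-$850 optimisation come from the interview; the list price and enterprise discount figures below are hypothetical placeholders.

```python
# Illustrative only: list price and enterprise discount are assumed values.
LIST_PRICE_PER_DAY = 1_000.00   # assumed public list price for a VM fleet
ENTERPRISE_DISCOUNT = 0.15      # assumed negotiated enterprise discount
COMMITMENT_DISCOUNT = 0.54      # ~54% off for a three-year commitment (from the interview)

on_demand = LIST_PRICE_PER_DAY * (1 - ENTERPRISE_DISCOUNT)
committed = LIST_PRICE_PER_DAY * (1 - COMMITMENT_DISCOUNT)
print(f"Effective daily rate, enterprise discount only: ${on_demand:,.2f}")
print(f"Effective daily rate, three-year commitment:    ${committed:,.2f}")

# Crediting an engineering optimisation: $5,500/day down to $850/day.
daily_saving = 5_500 - 850
print(f"Optimisation credit: ${daily_saving:,} per day, "
      f"${daily_saving * 365:,} annualised")
```

The same workload can carry three or four different "costs" depending on which of these rates you measure against, which is exactly why finance conversations demand precision about the pricing basis.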
Expanding the Perimeter: When FinOps Outgrows the Cloud Bill
Once that definition takes hold inside an organisation, something predictable happens: the scope moves. Dieter has lived this progression first-hand, and at Roku he has watched FinOps extend into territory that would have seemed out of scope five years ago.
Q: You see FinOps expanding well beyond cloud into SaaS, vendor management, and data infrastructure. What does that look like from the inside?
Once you do a good job in the cloud, leadership will reach out and ask if you can help with other things. That happened to me. The Cloud Center of Excellence had over 40 vendors, managed in a decentralised way by individual line managers and directors. Each manager had many other things to work on. They couldn't focus on just negotiating a specific contract. By centralising that into one role, we were able to get better holistic discounts. There are other expansions too: applying FinOps to private cloud, to maintenance-mode data pipelines, and now to AI spend. We did a study in late 2024 on whether it made sense to move certain data workloads out of cloud compute. These were maintenance-mode pipelines running daily with no active development. When a pipeline errored and had to restart, compute costs doubled or worse for that run. Those projects typically have an ROI around the seven-year mark, which tells you just how far FinOps is now extending.
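A repatriation study like the one Dieter describes reduces to a payback calculation. The seven-year figure is from the interview; every other number below is a hypothetical illustration of how such a study might be framed.

```python
# Hedged sketch: payback period for moving a maintenance-mode pipeline
# off cloud compute. All inputs are assumed for illustration.
migration_cost = 700_000        # assumed one-time cost: hardware plus migration work
cloud_cost_per_year = 160_000   # assumed current cloud compute bill for the pipeline
on_prem_cost_per_year = 60_000  # assumed ongoing cost after the move

annual_saving = cloud_cost_per_year - on_prem_cost_per_year
payback_years = migration_cost / annual_saving
print(f"Annual saving:   ${annual_saving:,}")
print(f"Payback period:  {payback_years:.1f} years")
```

With these assumed inputs the break-even lands at seven years, the horizon Dieter cites as typical; a shorter payback would usually be needed before such a move clears most investment bars.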
The Cloud Center of Excellence had more than 40 vendor relationships, each owned by a different line manager or director. No individual had the bandwidth or cross-portfolio visibility to negotiate strategically, which meant the organisation left holistic discounts unrealised.

Dieter consolidated vendor management into a single dedicated role with visibility across the full portfolio. With one owner focused on the commercial relationships, the organisation could negotiate at scale and secure discounts no individual manager could have achieved while managing their primary responsibilities.
Q: Does FinOps need a new framework for these expanded scopes, or does the existing one stretch to fit?
I don't think we need a new framework. I just think we need to understand how to apply the framework to different scopes: what makes sense and what doesn't. Not every capability will apply to each area. Take allocation: in the cloud, I have a reputation for only reaching out when something catastrophic is happening, which means people listen when I do. Cost anomalies of $20,000 an hour are possible. In other technology areas, allocation still matters, but the context is different. Other frameworks like ITFM, scrum, and DevOps all serve different purposes and they complement FinOps. They don't compete with it.
What Makes Expansion Stick: Sponsorship, Maturity, and the Mandate
Scope does not expand on its own authority. Dieter has watched enough programmes stall to know that whether expansion holds or quietly collapses under pressure comes down to two harder problems: the organisational mandate and the state of the field itself.
Q: It's a common diagnosis at the top of the FinOps field: the practitioner base is growing, but maturity isn't keeping pace. What's driving that gap?
The FinOps Foundation has well over 50,000 members worldwide. And there is a constant influx of new organisations arriving in the cloud for the first time. As of 2019, 95% of IT was still done in data centres. State, county, and government organisations are used to data centres and are slowly shifting. For them, the cloud is a completely new medium. They start at the crawl stage and mature over time. So the average maturity looks stagnant, not because existing practitioners are not progressing, but because the denominator keeps growing with newcomers starting from zero. At the same time, there are practitioners like me who have been doing this since 2013. Those advanced use cases require the framework to go further. The foundation has to serve both ends simultaneously.
Q: Where does the absence of executive sponsorship actually show up? What breaks down?
If I set my own goals, I will always be at 110%. But if someone else sets my goals, now I have something to strive toward. The breakdown happens in incidents. You need the executive to say, "You should have followed the tagging standard," rather than, "Why do we have such strict policies?" Without that mandate, FinOps becomes optional under pressure. And in organisations where it's optional, it gets set aside exactly when it matters most. You adhere to FinOps through good times and bad. Only an executive mandate makes that hold.
The New Frontier, Honestly Assessed: AI ROI at Roku
Nowhere is that mandate more urgently needed right now than in AI. It is the newest and fastest-growing addition to FinOps scope, and also, Dieter argues, the domain where the absence of rigour will be most expensive.
Q: AI ROI is still a fuzzy concept for most organisations. Do you have a concrete example of where it clearly pays for itself?
One example is our customer-facing AI solution where you can now talk to your remote control. You can ask simple questions, or more sophisticated ones, like "can you recommend a similar movie with less violence?" That's a capability we didn't really have before. Customer retention validates this: we do A/B testing, we get something like 1.1% better retention, and we convert that into a dollar value. If the dollar value is larger than the AI investment, you get a scaling factor. Two or three times is a good return. Seven to fifteen times is excellent. 1.2 to 1.5 times is still something, but it's not at that super scale.
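The scaling-factor logic Dieter describes can be written out directly: convert an A/B-tested retention lift into dollars, then divide by the AI investment. The 1.1% lift and the return bands come from the interview; the cohort size, revenue per account, and investment figure are hypothetical.

```python
# Sketch of the AI ROI scaling factor, with assumed inputs where noted.
users_in_cohort = 2_000_000      # assumed accounts exposed to the feature
retention_lift = 0.011           # 1.1% better retention (from the interview)
revenue_per_user = 80.00         # assumed annual revenue per retained account
ai_investment = 500_000          # assumed annual cost of the AI feature

incremental_value = users_in_cohort * retention_lift * revenue_per_user
scaling_factor = incremental_value / ai_investment
print(f"Incremental value: ${incremental_value:,.0f}")
print(f"Scaling factor:    {scaling_factor:.1f}x")
# Per the interview's bands: 2-3x is a good return, 7-15x is excellent,
# 1.2-1.5x is positive but marginal.
```

The point of the exercise is less the specific multiple than the discipline: if you cannot express the feature's effect as a tested dollar value, you cannot compute the factor at all.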
Q: How does your organisation govern AI investment to avoid chasing productivity gains that don't convert to business value?
We make sure the people who propose AI projects quantify the affected business outcome: streaming hours, cost per streaming hour, increase in active accounts. Customer signals remain the north star for investment validation. Our AI allocation for 2026 was 5% or less of our cloud spend, across around 30 projects ranging from tens of thousands to low millions. It's a conservative position, but it's grounded. You need to know what you're actually buying.
The Capability Ceiling: Memory, Context, and What AI Still Can't Do
That rigour begins with an honest inventory of the technology itself. Enthusiasm for AI's productivity gains, Dieter observes, tends to run ahead of a clear-eyed assessment of what these systems can actually do.
Q: You've described AI tools as genuinely transformative in your day-to-day work. Where do you still hit a wall?
The most interesting limitation is memory. There's a demonstration with a Figure robot that's been trained to make coffee in 40,000 simulated kitchens, marble countertops, wooden cabinets, glass cabinets, all of it. It doesn't matter what the kitchen looks like. It knows how to operate. But every single time you ask it to make coffee, it starts opening cabinets looking for the coffee. Because it has no memory. Tomorrow you ask again, it opens the cabinets again. Those are problems we haven't tackled yet. A true AI companion should know something about you, whether you're married, whether you have children, what your health situation is, what your preferences are. It should carry a conversation not just over hours, but over days, weeks, years. We don't have that yet.
Q: The context window is essentially doing the work of memory right now. What does that tell us about where the technology actually is?
You can run an LLM locally and experiment with this directly. If the context window is too short, the model loses track of what you were talking about three sentences ago and falls back to a generic answer based only on your last input, not the thread of the conversation. And if you increase the context window, you can actually watch your memory allocation increase as it fills up. People talked about infinite context windows, but the current technology may not even be capable of that. You would need terabytes, possibly petabytes, of memory per user. Even a simple spatial puzzle trips it up if the context is too short: the model assumes a styrofoam cup when common sense would say porcelain. That tells you where the real gap still is.
Society at the Threshold: Jobs, Dependence, and the Cost of Limitlessness
Those gaps do not stay inside the enterprise. Dieter has been watching what happens when a technology with no built-in off switch meets the full range of human behaviour. The picture is more complicated than either the optimists or the pessimists are acknowledging.
Q: A Reddit user posting that they feel guilty about sleeping too much is a striking symptom. What's the underlying problem you're seeing?
The technology doesn't have an off switch for the person using it. That's the core issue. When someone realises they could keep producing output at two in the morning, some people do. But our human body does need sleep. REM sleep isn't optional. It serves specific cognitive functions that can't be bypassed. Mental stability requires switching off completely: going for a walk, thinking about something that's fun, something entirely different from the problem. The AI doesn't know any of that. It will keep generating as long as you keep asking. And for people who struggle to stop when the tool says yes, that's a real and underrecognised risk.
Q: On the other end of that spectrum, people are starting to hold multiple jobs simultaneously using AI to do the work. Is that a genuine shift?
I've seen it. You can take a job that is fairly automatable, like approving invoices or processing insurance claims, where you have a clear set of guidelines and the LLM performs that job well. Set that up on a Mac mini. Set up six Mac minis, each running a different role. Use Telegram or WhatsApp to field the edge cases. If something can go either way, the system drafts the decision and you select an option from your phone. Someone is getting paid $120,000 to $140,000 per role. Across six roles, that's roughly $720,000 to $840,000 in total compensation. That is possible today. It's not widespread, but it exists.
Q: What about the more personal dimension, people forming genuine emotional attachments to AI systems?
OpenAI saw this when they changed a voice model and users experienced it as a loss. If you look at Twitter or Reddit, the people who are upset are the ones posting. You're not going to see hundreds of millions of satisfied users posting "still happy." But even a few dozen visible cases of genuine emotional distress over a model update tells you something real is happening. The film "Her" explored this years before it became a live issue. The underlying dynamic, being able to talk to a friend at any time when you need it, is genuinely powerful. A lot of people don't have that. But there's a significant difference between a system that can hold a two-hour conversation and one that actually knows you over time. We're in the first category. The second is still far off.
Key Takeaways
Dieter's perspective reframes FinOps as a discipline defined by scope and sponsorship, not cost reduction, with these essential insights:
Value Over Savings. FinOps is not a cost-cutting programme. The goal is maximum return on every dollar invested, which sometimes means spending more, not less. Framing it as savings leaves strategy on the table.
One Framework, Many Scopes. FinOps does not need reinvention for SaaS, private cloud, or AI spend. The existing framework stretches. Practitioners need to assess which capabilities apply at each scope and which require adaptation.
Centralise to Compete. Decentralised vendor management is a structural discount leak. Consolidating ownership into a single role transforms vendor relationships from transactional to strategic, with material financial upside.
Executive Mandate or Bust. FinOps without leadership buy-in defaults to optional. Standards get waived under pressure. Only an explicit executive mandate ensures the discipline holds through incidents, budget cycles, and competing priorities.
Productivity Gain Is Not ROI. AI makes work faster for everyone equally, so speed alone is not a competitive result. True ROI requires a measurable business outcome (retention, cost per unit, or account growth) that you can test, quantify, and set against the investment.
The Memory Gap Is the Real Gap. AI's most significant current limitation is not intelligence but continuity. Systems that cannot remember who you are across sessions cannot replace the relationships and contextual judgement that make human practitioners irreplaceable.
The Field Has Two Speeds. A constant wave of crawl-stage newcomers and a growing cohort of ten-plus-year veterans place competing demands on the same discipline. The frameworks, the foundation, and FinOps leaders must serve both ends simultaneously.
Implementation Roadmap
Anchor
Before extending FinOps into SaaS, vendor management, or AI spend, secure executive-defined goals. Without that anchor, expansion generates activity without accountability.
Map
Identify every SaaS, licensing, and third-party technology contract that individual line managers currently own. Flag where consolidated ownership would create negotiating leverage that no single manager currently has.
Consolidate
Assign a dedicated owner with cross-portfolio visibility and a remit to renegotiate holistically. Individual managers rarely have the bandwidth or vantage point to do this effectively.
Adapt
For private cloud, SaaS, and AI, test which FinOps Framework capabilities translate directly: allocation, anomaly detection, and commitment management. Do not assume full portability across every scope.
Quantify
Make every AI project proposal include a measurable business outcome. Use A/B testing where possible to convert usage metrics into a dollar value before committing at scale.
Mandate
Leadership needs to define goals, hold teams to standards, and treat FinOps as non-negotiable through budget pressure and operational incidents alike. Escalation-only relationships are not sponsorship.
The Bottom Line
Organisations still treating FinOps as a cloud billing exercise are already operating in yesterday's version of the discipline. The next phase demands broader scope, covering SaaS, vendor contracts, data infrastructure, and AI spend, plus genuine executive integration. Not as a reporting mechanism, but as a mandate that holds when conditions get difficult. The practitioners who bridge those two dimensions will define what FinOps maturity looks like at the next level.
About Dieter Matzion
Dieter Matzion has practised cloud FinOps since 2013, with experience across Google, Netflix, Intuit, and Roku spanning cloud economics, vendor management, and AI spend governance. He is a FinOps ambassador recognised by the FinOps Foundation. Connect with Dieter on LinkedIn.
The perspectives expressed reflect the interviewee's personal experience and views. Cloud Value Lab publishes practitioner-led thought leadership at the intersection of FinOps, GreenOps, and AI Economics.