Ask most cloud governance leaders what their job is, and they will talk about policies, mandates, and the authority to say "no". Ask Naval Kush, and he will tell you he is a trust enabler.

Naval leads the Cloud Governance house at Amdocs, where he runs more than a hundred policies inside its Cloud Center of Excellence. He has fourteen years in the industry, the last six at Amdocs, spanning cloud governance, cost and security compliance, FinOps, presales, and wider community building. He cofounded the Pune FinOps Community before the FinOps Foundation scaled in India, and now helps amplify the global wave of practitioners exporting FinOps from Indian cities back to the world.

When he is not running governance, he rides his Royal Enfield Meteor cruiser on long, solo trips across India. Whether it is the high-Himalayan terrain of Leh-Ladakh, a twenty-four-hour straight ride from Pune to the Rann of Kutch, or the southernmost Indian beach town of Kanyakumari, he rides on. The wind, the lanes, the guardrails on the bend before a drop. Structure that enables forward motion, not structure that stops it. Riding is also another side of his community engagement: with the Distinguished Gentleman's Ride (DGR), he raises funds worldwide for causes like prostate cancer and men's mental health. He is passionate about creating a positive impact.

That is also Naval's argument about cloud governance. Done well, it looks like the road, not the wall.

"Governance is not control anymore, but a guardrail. Built to serve, not to rule."

Running Governance Like a Public Service: Why Policies Are Guardrails, Not Gates


The default mental model in most cloud organisations treats governance as a control function. Policies are issued, exceptions are filed, and the approval queue decides what ships. The friction is so familiar that engineers stop questioning it, and most governance leaders stop hearing its cost. Naval rejects this model outright.

A guardrail along an open road, signalling structure that enables forward motion
Done well, governance looks like the road. Not the wall.

Q: Walk us through how you actually run governance day-to-day.

Governance should not be an innovation blocker. You create policies, but you cannot always keep them active. Sometimes you turn them off. Sometimes you cancel or retract them altogether. You cannot be stiff with the same governance model. You cannot stick to it for four or five years. When people ask me what my five-year plan is, I say my model should not be the same for the next year. You need cadences and forums. As simple as a Yammer community, a form survey, or an email signature that invites feedback. Flexibility comes from a collective voice. To my stakeholders, I am just an adviser. I am a public servant.

Q: Your roadmap sounds almost like a development pipeline. Who are your customers?

My customers are my internal stakeholders. Heads of engineering, product leaders, operations leaders, information security, procurement teams, and even internal audit. Everyone comes with their own bucket list. The auditor wants the safest environment. The developer wants the most flexible environment. The leader wants the most optimised environment. Everyone has a different need. And you cannot keep working in parallel to the company's North Star. You have to merge both lines so the roadmap caters to the North Star.

Q: Engineering teams expect governance to push back when they ask for exceptions. How do you handle that?

In the world of cloud, you cannot say no, because every need is real. If an engineering head asks for something, you cannot treat them as naive or assume the ask is random. They have been in the industry as many years as you, often more. If you start questioning them, you are creating your own enemies. Governance should always act as an enabler, not a controller. You have to stop being a controller and become a good cop.

"The core of governance is that FinOps starts before the cloud account exists, when the architectures are being visualized."

How Cost Avoidance Beats Cost Optimisation


Most FinOps programmes inherit their cloud bills as a fact. The account is spun up, the resources are running, and the dashboard is reporting. Naval thinks there is an earlier moment that matters more, and some teams miss it entirely.

A hand annotating an architecture blueprint with a red pencil before construction begins
The cheapest cost is the one you avoid before the account spins up.

Q: Walk us through what happens when an engineering team needs a new cloud account.

A mature Cloud Center of Excellence must have its own automations created for everything. Landing zone, policies, SCPs, tagging, all attached to an account from the moment it spins up. And every engineering team that needs a cloud account submits an architecture for review. FinOps reviews it. Security reviews it. CCoE architects review it. And that's not gatekeeping. You do it for their own benefit. If you can avoid the cost at the beginning, that is FinOps. People treat it as an approval phase, but it is more of a proactive intelligence layer that takes much of the load off the FinOps team.
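Account vending of this kind can be illustrated with a small sketch. Everything here (the policy names, tag keys, the `provision_account` helper, the `"standard-v1"` landing zone) is hypothetical, not an Amdocs internal; the point is only that guardrails and tags attach, or the request is rejected, at the moment of creation:

```python
# Illustrative account-vending sketch, not real CCoE tooling.
BASELINE_POLICIES = ["deny-public-s3", "require-encryption", "restrict-regions"]
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def provision_account(name: str, tags: dict) -> dict:
    """Vend a new account with landing zone, policies, and tags baked in."""
    missing = REQUIRED_TAGS - tags.keys()
    if missing:
        # Rejecting here is cost avoidance: the account never exists untagged.
        raise ValueError(f"account {name!r} missing required tags: {sorted(missing)}")
    return {
        "name": name,
        "tags": dict(tags),
        "policies": list(BASELINE_POLICIES),  # SCP-style guardrails from minute one
        "landing_zone": "standard-v1",
    }
```

The design choice mirrors the interview: review and enforcement happen before the account exists, so there is nothing for a dashboard to catch afterwards.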

Q: Most organisations frame FinOps as an optimisation function. How do you frame it?

FinOps is not only an optimisation framework. It is a practice that enables you right from the start. Visibility, transparency, and then later, you come to optimisation, ownership, and accountability. Those concepts come in later. First is visibility. If you don't understand how much your cloud infrastructure is going to cost you, you are definitely starting wrong.

The Problem

Engineering teams ask for exceptions. A week on medium or large EC2s to validate a workload. A quick AMD or Graviton migration test. A GPU run for an experiment. The default governance answer is no. The exception process drags. Engineering velocity collapses, and the FinOps function becomes the villain.

The Solution

Naval suggests building playground (aka sandbox) environment automations that carry a limited budget and a limited time frame. Two, three, four months, and the account auto-expires. Engineers experiment in the playground, not in dev, pre-prod, or production. There is no red alert in the FinOps dashboard. No overdue usage. No over-budget exception. When the time runs out, the account closes itself. If the team needs more, they request another playground. The conversation about saying no is replaced with a conversation about how long the experiment needs to run.
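A minimal sketch of such a playground, with assumed names and a three-month default lifetime, might model the budget and expiry rules like this:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Playground:
    """A time-boxed, budget-capped sandbox account (illustrative model)."""
    owner: str
    budget: float            # hard spend cap for the experiment
    created: date
    lifetime_days: int = 90  # auto-expires after roughly three months
    spent: float = 0.0

    @property
    def expires(self) -> date:
        return self.created + timedelta(days=self.lifetime_days)

    def is_active(self, today: date) -> bool:
        # Active only while both the clock and the budget have headroom.
        return today < self.expires and self.spent < self.budget

    def record_spend(self, amount: float, today: date) -> None:
        if not self.is_active(today):
            # No red alert, no exception process: the account simply closes.
            raise RuntimeError("playground expired or over budget: request a new one")
        self.spent += amount
```

Because expiry is a property of the account itself, the governance conversation shifts from "no" to "how long does the experiment need to run", exactly as described above.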

"FinOps is not a component. It's a driver."

When AI Both Inflates the Bill and Audits It: Where Humans Still Own the Math


AI is the largest cost driver hitting cloud bills since the cloud itself. It is also the most plausible candidate to automate FinOps work that previously required teams of analysts. The trouble is that the same systems doing the inflation are not yet trustworthy enough to do the audit. Naval's view is sharper than most: AI cannot audit what it cannot trust, and that trust has to be built into the data long before the model sees it.

A balance scale weighing a printed spreadsheet against a silicon chip, signalling human verification of AI output
AI compiles. Humans still weigh the math.

Q: Cloud costs are climbing fast in the AI era. What is actually driving the inflation?

AI data centres are consuming natural resources at a scale most organisations have not priced in. The economics are shifting globally. Wars, natural disasters, resource limits, chip rationing. On top of that, every AI vendor is offering tokens for free. Free is not going to last. There was a survey on Copilot enterprise licensing where 94 per cent of people said they would not pay for it; only six per cent did. That cost has to land somewhere. It will inflate the cloud bill. AI is not essential right now, but kids are doing homework with it, and reliance is increasing in domestic homes too. Inflation will hit both enterprise and household bills.

Q: Where does AI deliver value in FinOps today, and where does it still fall short?

AI is essential for governance and FinOps. But it may still hallucinate. The recommendation can be skewed. The numbers can be wrong. Sometimes it cooks up data. The core of FinOps should still be done by humans. AI assistance is for compiling reports and creating dashboards, but only on data already validated and verified. If you are an enterprise with six years of cloud usage data, your own trends per business unit, per product, per team, give that to AI. If you are a startup with four months of data expecting AI to forecast and detect anomalies, you are wrong. Hallucination will happen. Unless you have a large volume of trusted data, everything is vague.
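One way to encode that caution is a simple history gate before any AI forecasting is switched on. The 24-month threshold below is purely an illustrative assumption, not a figure Naval or the FinOps Foundation prescribes:

```python
from datetime import date

# Illustrative minimum: an assumption for the sketch, not a stated rule.
MIN_MONTHS_OF_HISTORY = 24

def forecasting_allowed(first_bill: date, today: date) -> bool:
    """Enable AI forecasting only once enough validated billing history exists."""
    months = (today.year - first_bill.year) * 12 + (today.month - first_bill.month)
    return months >= MIN_MONTHS_OF_HISTORY
```

An enterprise with six years of billing data clears any reasonable bar; a startup with four months does not, which is the distinction the answer draws.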

Q: You said earlier that even when AI builds the system, ownership stays a human problem. Can you unpack that?

A mechanic can spin up a SaaS tool in his garage. It might look enterprise-grade. Does he understand every feature, every piece of code? No. Can he legally own the code? Never. So even though he created it, he is not the owner. The same is true for organisations creating code with AI. You ultimately need a human to do things. You cannot eradicate people. And connecting it back to governance and FinOps, even if Claude or GPT does something, the cost impact is still there. To do the math, you need people. I still don't trust AI to do my math.

"I trust me. I don't trust them."

Building the Platform You Can Trust: Why Naval Bets on the Team Over the Vendor


The FinOps tooling market has slowly started consolidating. Pileus rolled into Anodot, then into Umbrella. Flexera acquired ProsperOps. Two hundred vendors at last count, half of them positioning as AI-first. The question facing every governance leader is no longer which tool, but build, buy, or wait for the next acquisition. Naval has a clear answer.

A craftsman shaping a wooden joint by hand, signalling in-house build over vendor lock-in
A platform you trust is a platform you keep.

Q: How do you read the FinOps vendor landscape today?

I have observed more than 200 vendors and met 100-plus of them at FinOps X events. They are growing like mushrooms in the FinOps tech-forest. Not all sustain for long. A lot of companies are now into AI, selling themselves as AI-first FinOps platforms. I have spoken to at least 20 of them, all pitching the same feature and each saying, "I'm the leader of the space." Some are not even into AI yet. The market grows as cloud adoption grows, but some will go off the radar. That is going to happen.

Q: Where do you see the market heading, and where do the hyperscalers fit?

Companies will keep acquiring. The sharks will eat the smaller fish; that is the reality of the ocean. One challenge I want to highlight is that the CSPs themselves are now releasing their own capabilities on the FinOps side. AWS and Azure each have their own MCP servers. If you are smart enough to build your own tools, the build-versus-buy question gets sharper.

Q: So what is your stance on build versus buy in this consolidating market?

I will not buy something. I will build something. Not because building is cheaper, but because I know my capabilities. I know my competence. I trust my team. A vendor may vanish or change their vision tomorrow. Their capabilities may degrade or become a misfit. If I build, I am backed by a long track record of building successful products. It is not about cost. It is about trust. I trust me. I don't trust them, at least not yet. I am also cautious about smaller startup vendors. They may not have the staffing, competence, or industry exposure of large enterprises. Both models will exist. It is a per-organisation decision.

Key Takeaways


Naval's perspective challenges the default model of cloud governance with these essential insights:

Public Servant Stance. Governance is not control. It is a guardrail engineered to enable forward motion, not block it. The leader who treats every policy as a mandate creates friction in the very processes the policy was meant to protect.

Roadmap as Service. A governance roadmap built from internal customer voices outperforms one built from frameworks alone. Five-year plans are obsolete. The model has to evolve quarterly or half-yearly.

Cost Avoidance Beats Cost Saving. The highest-leverage moment in FinOps is before the cloud account exists. Architecture review at creation, automated landing zones, and guardrails deliver more savings than any post-hoc optimisation report.

Visibility First. FinOps is a culture that begins with visibility, then transparency. Optimisation, ownership, and accountability come later. If cost is invisible at the start, the function has already failed.

AI Without Trust in Data Hallucinates. AI is useful for compiling and dashboarding, but only on data validated and verified by humans. This is the prerequisite, not the optional upgrade.

Build Where You Trust the Team. The vendor market is consolidating. Hyperscalers are building native FinOps capabilities. Long-term lock-in is a bet on stability that may not exist. The team you trust is the platform you can keep.

Implementation Roadmap


1

Map

Identify the internal customers of governance: heads of engineering, ops leaders, product, internal audit, security, procurement, finance, and more. Build a forum where each can raise issues. The roadmap is built from their bucket lists, not from frameworks.

2

Move

Move FinOps upstream of account creation. Make architecture review the default. Tag policies, guardrails, guidelines, and landing zones onto every new account at creation. Cost avoidance happens before the account exists, not after the bill lands.

3

Build

Build playground environments. Give engineers a time-boxed, budget-limited account to experiment in. Auto-expire at three or four months. Replace the conversation about saying no with a conversation about how long the experiment needs to run.

4

Set

Set the data foundation before the AI layer. FOCUS-compliant, validated, verified data is the prerequisite for any AI-assisted FinOps function. Without it, AI assistance hallucinates, and the function loses trust faster than it earns it.

5

Treat

Treat vendors as short-term partners, not platforms. Identify a specialised gap. Buy capability for one or two years. Build the equivalent in-house. Move on. Long-term lock-in is the failure mode that the consolidating market keeps producing.
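As one concrete illustration of the data gate in step 4, a sketch might check billing rows against a small subset of FOCUS column names before anything reaches an AI layer. `BilledCost`, `ServiceName`, and `ChargePeriodStart` are genuine FOCUS columns; the checks themselves are illustrative assumptions:

```python
# Sketch of a pre-AI data gate over FOCUS-style billing rows.
REQUIRED_COLUMNS = {"BilledCost", "ServiceName", "ChargePeriodStart"}

def validate_rows(rows: list[dict]) -> list[str]:
    """Return problems found; an empty list means the batch may feed the AI layer."""
    problems = []
    for i, row in enumerate(rows):
        missing = REQUIRED_COLUMNS - row.keys()
        if missing:
            problems.append(f"row {i}: missing {sorted(missing)}")
        elif row["BilledCost"] < 0:
            # Credits can legitimately be negative; flag them for human review.
            problems.append(f"row {i}: negative BilledCost needs human review")
    return problems
```

The validated-and-verified batches pass silently; everything else is routed back to humans, which is the ordering the roadmap insists on.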

The Bottom Line


The next era of cloud governance will be decided by leaders who can keep cost, security, operations, and automation moving in the same direction without grinding engineering velocity to a halt. Naval's bet is that the road has guardrails, not gates. The leaders who internalise that distinction will be the ones engineering teams still listen to in three years.

About Naval Kush


Naval Kush is Cloud Governance Manager at Amdocs, where he leads governance, FinOps, and operations across the Cloud Center of Excellence. He has fourteen years in the industry, the last six at Amdocs, spanning cloud governance, cost and security compliance, FinOps, presales, and wider community building. He cofounded the Pune FinOps Community ahead of the FinOps Foundation's India chapter and remains an active voice in the global practitioner community while partnering with fellow practitioners and domain leaders. Connect with Naval on LinkedIn.

Cloud Value Lab publishes practitioner-led thought leadership at the intersection of FinOps, GreenOps, and AI Economics. If you are a practitioner or subject matter expert interested in sharing your perspective, reach out to David May.