Enterprise on-prem, a thing of the past?
My personal and professional view of enterprise on-prem datacenters in 2026 and onwards.
Tero
2026-03-05 19:06 +0200
My background on the subject
I have had the privilege of being involved in multiple datacenter refreshes throughout my career. I was introduced to Azure very early on in my career by our ISP at the time. The introduction happened just about a year before our datacenter at the time was due for a renewal. Spoiler alert: we still run on-prem in 2026.
Much like nearly every established and long-running company, we carry the burden of legacy software, and I dare say we carry more than the national average since we operate in the most legacy of all business sectors, automotive. OEMs are notorious for outdated systems and for strictly requiring dealers to use certified systems - like ERPs and diagnostic equipment.
I have been a part of the decision-making process for two datacenter renewals, and as of 2026 I am once again facing the decision of cloud vs. on-prem - which is why I decided this would make a great first blog post.
Deciding factors for cloud vs on-prem
Hardware pricing for on-prem
Let’s face it: in businesses where IT is a supporting function, C-levels and owners do not have comprehensive technological know-how about servers and other infrastructure. And they don’t need to - that’s why they hire IT professionals. Nowadays, I dare say, most of them understand the importance of IT in running their business, but frankly, deep down, IT is just another cost on top of others.
Hardware prices have skyrocketed from 2025 onwards, driven by AI development. We have seen huge spikes in hardware pricing before: most of us remember the GPU price spike fueled by crypto mining operations, which then ‘conveniently’ led to GPUs being needed for AI as well. Realistically, GPU prices do not affect most businesses, since GPU computing power simply isn’t needed for a large share of operations worldwide. There are exceptions, of course, but a huge portion of the world’s business does not need GPUs to operate.
Now, in 2026, the price hike hits home for a lot more people than before. Storage and RAM prices are up by hundreds of percent, and vendors are constantly announcing new, even higher prices. We, at least in the western hemisphere, live in a capitalist world, and this is exactly how that world operates: supply and demand. Manufacturers are starting to sell scarcity. Western Digital set a prime example: their CEO announced that they have sold two years of production upfront. More announcements like this are sure to follow.
Reliability and availability
When I was introduced to Azure over 10 years ago, the main selling point was the reliability and high availability of the cloud. I clearly remember one of the presenters telling us how “you will not have to worry about server outages any more”.
This, to my belief, stayed pretty much true for a number of years. After I was introduced to Azure, we decided to stay on-prem because Azure would’ve increased our IT operations budget by a lot. Avoiding concrete numbers here, but a truly optimized Azure subscription would’ve cost us over three times more over a span of 5 years than buying our own hardware. That gap wasn’t worth it for slightly more reliable IT infrastructure, since our on-prem DC was pretty much dialed in reliability-wise.
Don’t get me wrong, I like AI but I love humans. Most of us IT professionals have noticed the rise in public cloud outages over the last few years. It seems like not a fiscal quarter goes by nowadays without a massive outage in AWS, GCP or Azure. The way I see it, the adoption of AI by public cloud operators like Amazon, Google and Microsoft is the culprit. We’ve seen massive layoffs from all three, and as I understand it, the layoffs have disproportionately hit senior-level engineers, leaving lower-salaried juniors running the show. This is very much in line with the capitalist world we live in: why would a company pay a senior when an AI-assisted junior can perform in the same ballpark as the senior? And if they can’t perform, what’s the customer going to do? Switch their whole cloud infra over to another provider? The providers know that the majority of their customers wouldn’t even consider switching platforms, because a) it’s a very labour-intensive job requiring a lot of seasoned engineers to migrate, and b) the clients know they would not actually improve their QoS by switching.
I listened to a great speech years ago, where an entrepreneur explained why they pay their seniors more than the newcomers. They said they have people who have worked for them for anywhere between 6 months and a year asking for a raise because “they do all the same jobs the seniors do”. Their answer was always the same: “I acknowledge that you can perform the same tasks our seniors can. The difference between you, a junior 6 months in, and a senior with 10 years of experience is that I don’t know if you can navigate a storm. I know my seniors can sail any storm, and they have proven to me they can. This is exactly why I pay for my insurance: I don’t want my house to burn down, but if it does, I can be thankful I paid for that insurance.”
The three biggest cloud providers don’t need to pay for their “insurance” any more. They have become what the world knows as “too big to fail”, reasons listed above. They can degrade their QoS even further if needed by cutting their workforce but they still will not lose more than a handful of customers.
David Linthicum wrote a great article for Infoworld about this decline of reliability in cloud services: Why cloud outages are becoming normal.
The decision
Listed above are the two major factors in my decision: do I pour a wheelbarrow of money into our own hardware, or start signing off monthly cloud bills at the heightened risk of an outage I cannot play a part in resolving?
My calculations showed that the first five-year period would cost about the same either way. This is a major shift and shows how much hardware prices have gone up: when I was deciding between on-prem and Azure five years back, Azure was considerably more expensive. The first five years in Azure would also include a lot of one-offs, like building governance models for unified deployments and security - so the next five-year period would actually be cheaper than buying our own hardware.
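The kind of back-of-the-envelope comparison described above can be sketched in a few lines of code. All figures below are hypothetical placeholders chosen only to illustrate the "roughly equal over five years" outcome - they are not our actual numbers, and a real comparison would include far more line items (licensing, power, staffing, egress fees, and so on).

```python
# Back-of-the-envelope 5-year TCO comparison (all figures hypothetical).

def on_prem_tco(hardware: int, yearly_ops: int, years: int = 5) -> int:
    """Upfront hardware purchase plus yearly operating costs."""
    return hardware + yearly_ops * years

def cloud_tco(monthly_bill: int, one_off_setup: int, years: int = 5) -> int:
    """Monthly subscription plus one-off migration/governance work."""
    return monthly_bill * 12 * years + one_off_setup

# Placeholder numbers for illustration only.
on_prem = on_prem_tco(hardware=300_000, yearly_ops=40_000)
cloud = cloud_tco(monthly_bill=7_500, one_off_setup=60_000)

print(f"On-prem 5-year TCO: {on_prem:,}")  # 500,000
print(f"Cloud 5-year TCO:   {cloud:,}")    # 510,000
```

Note that the one-off setup costs drop out of the cloud side in the second five-year period, which is why the cloud's second term can come out cheaper even when the first terms are a wash.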
After an intensive thought process, I recommended buying new on-prem hardware and not migrating to Azure (or any other cloud platform). It’s true that running our own servers comes with drawbacks, as does migrating to Azure - that’s a given. But weighing the drawbacks of the two choices against each other, on-prem won once again.
Lifting and shifting our infrastructure to Azure would in essence be handing the keys to our business’s operational reliability over to Microsoft. A worldwide outage could knock us offline for hours, days or even weeks. The projected risk of that alone is more than enough to close the cost gap. Of course, our own DC can suffer an outage as well, but that’s why we have internal DR processes. A widespread global outage of Azure services would mean our sites sending employees home to wait out the outage, with pay.
The conclusion
I don’t mean for this blog post to be a guide for anyone else to base their business decisions on. It is written purely from my and our company’s viewpoint.
Cloud should be considered an option, not a default.