David Heinemeier Hansson’s Post


Co-owner & CTO of 37signals (Makers of Basecamp + HEY)

We finished pulling seven cloud apps, including HEY, out of AWS and onto our own hardware last summer. But it took until the end of that year for all the long-term contract commitments to end, so 2024 has been the first clean year of savings, and we've been pleasantly surprised that they've been even better than originally estimated.

For 2024, we've brought the cloud bill down from the original $3.2 million/year run rate to $1.3 million. That's a saving of almost two million dollars per year for our setup! The reason it's more than our original estimate of $7 million over five years is that we got away with putting all the new hardware into our existing data center racks and power limits. The expenditure on all that new Dell hardware – about $700,000 in the end – was also entirely recouped during 2023 while the long-term commitments slowly rolled off.

Think about that for a second. This is gear we expect to use for the next five, maybe even seven years! All paid off from savings accrued during the second half of 2023. Pretty sweet! But it's about to get sweeter still.

The remaining $1.3 million we still spend on cloud services is all from AWS S3. While all our former cloud compute and managed database/search services were on one-year committed contracts, our file storage has been locked into a four(!!)-year contract since 2021, which doesn't expire until next summer. So that's when we plan to be out.

We store almost 10 petabytes of data in S3 now. That includes a lot of super critical customer files, like for Basecamp and HEY, stored in duplicate via separate regions. We use a mixture of storage classes to get an optimized solution that weighs reliability, access, and cost. But it's still well over a million dollars to keep all this data there (and that's after the big long-term commitment discounts!).

When we move out next summer, we'll be moving to a dual-DC Pure Storage setup, with a combined 18 petabytes of capacity. This setup will cost about the same as a year's worth of AWS S3 for the initial hardware. This brings our total projected savings from the combined cloud exit to well over ten million dollars over five years! While getting faster computers and much more storage.

Now, as with all things cloud vs on-prem, it's never fully apples-to-apples. If you're entirely in the cloud, and have no existing data center racks, you'll pay to rent those as well (but you'll probably be shocked at how cheap it is compared to the cloud!). But it's still remarkable that we're able to reap savings of this magnitude from leaving the cloud.

We've been out for just over a year now, and the team managing everything is still the same. There were no hidden dragons of additional workloads associated with the exit that required us to balloon the team, as some spectators speculated when we announced it https://lnkd.in/gHPpM35w
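
[Editor's note: to make the "mixture of storage classes" point concrete, here is a minimal sketch of how S3 lifecycle rules tier aging objects into cheaper storage classes. The bucket name, prefix, and day thresholds are hypothetical, assuming boto3; this illustrates the general mechanism, not 37signals' actual policy.]

```python
# Sketch: tiering S3 objects across storage classes with a lifecycle rule.
# Bucket, prefix, and thresholds are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-customer-files",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-older-objects",
                "Filter": {"Prefix": "attachments/"},
                "Status": "Enabled",
                "Transitions": [
                    # Fresh files stay in STANDARD for fast access,
                    # then move to cheaper classes as they go cold.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 365, "StorageClass": "GLACIER_IR"},
                ],
            }
        ]
    },
)
```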

Ronald Kunenborg

Enterprise Data Architect, data expert, coach, data modeller and assessor

1y

This does show once again that Cloud isn't (or shouldn't be) about saving cost. If you stick to IaaS, at scale, things become expensive fast. However, you may not always be able to run your own data center. Energy constraints, cooling requirements, or the inability to get high-capacity Internet access from certain locations can limit that. PaaS is a different matter. Plenty of companies have zero interest in running their own data platform. But it comes at a cost, though it may still be worth it. And then we have SaaS. You usually either run that in the Cloud or you don't. And that's where the profits usually are found. For small companies, maintaining the necessary expertise usually costs more than the Cloud. For bigger companies it seems to be a trade-off. That said, most companies are not IT companies with lots of in-house expertise. And those companies will not find many savings if they start operating their own data centers, even for IaaS.

Harry Veach

Executive Advisor | CTO | CISO | VP | girl Dad | amateur car mechanic

1y

I would definitely rethink your assumptions and predictions. You should never assume a five-to-seven-year useful life on data center equipment. The critical systems will need to be upgraded every three years, four max. You also need to rethink your scalability assumptions. No more auto-scaling up or down. And you will need to maintain at least 30% free compute and storage capacity, not only in your main data center but also in your secondary one. You will also need to do a risk assessment of your supply chain. If you can’t get equipment when you need it, that becomes a major risk. You also need to think about third-party integration with vendors. It is at your fingertips in the cloud, but everything is a manual integration on-prem. That extends your go-to-market time on projects. And if you own the data centers, do another assessment of the useful life of all power and cooling. Those upgrades are extremely expensive when they hit. I’ve moved from on-prem to cloud, and cloud to on-prem. You have to consider the full landscape, not just servers and storage. Just sharing my scars so you don’t run into snags in the future.

Marc Heil

Sr. Solutions Architect - Global Financial Services at Amazon Web Services (AWS)

1y

In this scenario, you have two data centers that you were planning on running indefinitely side by side with your cloud setup, and you had enough free space and power to move into those DCs. Did you factor in the cost of running the data centers as a whole, and not just the applications? What was the cost of ownership for the DCs (power, insurance, taxes, security, etc…)?

Michael Leach

Reimagining CRM for the Agentic Era

1y

That’s a great picture of before-and-after costs. Thanks for the transparency. I don’t see depreciation and ops factored into the costs. How often will you refresh the $700k in DC hardware? What additional Ops roles were created, and at what salary, to manage a fleet? While some AWS costs look higher than CapEx on paper, I find serverless architectures tend to be lower over time once hardware and people are rolled into OpEx.

Rob Pankow

co-Founder @ simplyblock | ship AI faster

1y

Curious why you chose Pure Storage? That locks you into their proprietary hardware as well; choosing software-defined storage would keep you more flexible. How much of your data will be on SSD/NVMe and how much on HDD? Given the $1.3M, I assume most of it is cold, on HDDs?

Mikael Knutsson

Co-founder & CEO of molnett

1y

The dragons and odd failure modes usually show up a few years down the line, once cruft has accumulated: disks purchased from the same manufacturing series start cascade-failing across the data centre, temperatures start spiking oddly because the ball bearings in the fans were out of spec, or the batteries on your RAID controllers turn out to be faulty and you find out the first time your database server boots up after a power outage with a broken WAL. :) Sounds like you have purchased a small enough machine park to have a reasonable chance of being statistically lucky, though!

Paul K.

Software Craftsman / Engineering Manager / Solution Architect / Insource - Outsourcer

1y

But are your files stored across the planet or in one DC? What is their performance in, say, India vs. a US DC if it's not replicated automagically?

Piers M.

Engineering at Orderful

1y

Is this inclusive of any personnel changes to manage this new owned infrastructure?

Bobby D.

Data Science and Technology Leader

1y

How easy was it to transition from the way files and objects were stored in S3 to your local on-premises solution? Did you use some S3-compatible storage mechanism, or roll your own?
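
[Editor's note: for context on what the "S3-compatible" option in this question looks like, here is a minimal sketch, assuming boto3 and a hypothetical on-prem endpoint, showing that only the endpoint and credentials change while the S3 API calls stay the same. Whether 37signals went this route is exactly what the comment asks; this is not their implementation.]

```python
# Sketch: pointing an existing S3 client at an S3-compatible endpoint.
# The endpoint URL, credentials, and bucket name are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.internal",  # hypothetical on-prem S3-compatible gateway
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Application code keeps using the same S3 API calls as before.
s3.upload_file("report.pdf", "example-bucket", "reports/report.pdf")
obj = s3.get_object(Bucket="example-bucket", Key="reports/report.pdf")
print(obj["ContentLength"])
```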

Vidar Hokstad

Fractional CTO | DevOps & AI/ML Consulting | Software Architecture & Scaling Specialist

1y

Even if people don't want to go the route of their own racks (love doing that; it's much easier than people think, and all the skills can be bought by the hour), you can get similar magnitude price savings by renting dedicated servers. It's beyond me how few people actually look at their costs.
