I think that this is a textbook “build vs. buy” architectural problem. At a small scale, it’s so attractive to build: reasonable, affordable-at-the-moment startup costs with (hopefully) only time costs to maintain the bespoke solution for a predetermined period.
It’s a wager:
Will my startup costs plus my expected ongoing expenses be less than [effectively] hiring someone to do it for me by buying or renting their product and whatever support contract, explicit or implicit, comes with that?
but with an extra caveat because of the nature of purchasing a subscription service: the absence of resale.
What if the cloud provider drops the cost of the service, either directly or by introducing a more powerful configuration at the same price? You could spend additional capital to build a new server yourself and put in the labor to manage the transition ahead of schedule. You’ve now got the same power, but you’ve had to spend time migrating, and now you’ve got yesterday’s hardware to manage, too.
It probably makes sense to build when you’re:
reselling what you create.
learning in the process.
building for private use at a scale competing with a cloud computing offering.
able to amortize the startup costs, thereby getting closer in recurring cost to the rental price of the service solution while freeing up capital for other things.
That last point is where building really makes sense, but you have to have the capital (or credit) at the start, and you still have to address the resale problem once the startup cost is paid off; a rough sketch of the numbers is below.
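This is only a back-of-the-envelope sketch: every figure in it (hardware price, ops cost, rental rate) is a hypothetical placeholder, not something taken from the post, and the only point is the shape of the calculation, including the no-resale caveat.

```python
# Rough sketch of the wager: up-front build cost plus ongoing expenses
# vs. renting the equivalent. All numbers are hypothetical placeholders.

build_capex = 1500.0      # up-front hardware cost ($)
build_monthly_ops = 25.0  # colo/power/bandwidth plus your time ($/month)
rent_monthly = 80.0       # comparable rented/cloud instance ($/month)
resale_value = 0.0        # the subscription caveat: assume no resale

def breakeven_months(capex, own_monthly, rented_monthly, resale=0.0):
    """Months until the cumulative cost of building drops below renting."""
    monthly_saving = rented_monthly - own_monthly
    if monthly_saving <= 0:
        return None  # building never pays off at these rates
    return (capex - resale) / monthly_saving

months = breakeven_months(build_capex, build_monthly_ops, rent_monthly, resale_value)
if months is None:
    print("Renting stays cheaper at these numbers")
else:
    print(f"Building pays for itself after ~{months:.1f} months")
```

At those made-up figures, building breaks even a bit past the two-year mark, which is exactly where the price-drop and yesterday’s-hardware risks above start to bite.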
What I’m getting at is that costs are an exercise in understanding variables and establishing trade-offs. Atwood doesn’t adequately walk through that analysis in this post.
There’s no denying that spinning servers up in the cloud offers unparalleled flexibility and redundancy. But if you do have need for dedicated computing resources over a period of years, then building your own small personal cloud, with machines you actually own, is not only one third the cost but also … kinda cool?
The “coolness” factor here is attractive for people who want to learn systems administration.
Maybe I’m burned out from doing it for so long, and that’s why I appreciate cloud computing: it empowers me to focus on creating software and systems that don’t care about the underlying hardware. I ran a server of my own, first on a VPS and then on a bare-metal dedicated server, where I administered a web server, mail servers, database servers, IRC bouncers, and more, and over time it became a massive distraction to have to fix things myself. Mail was a tremendous pain in the ass. When the hard drive died with ~ten years of data on it, I learned the hard way about the importance of backup verification when I discovered that the most recent backup was from two years before the failure. For me, the tradeoff wasn’t worth it anymore. It was cheaper in every way except currency to rent solutions instead: paid email hosting; free or low-cost application hosting (Heroku, GitHub Pages, etc. didn’t exist when I built my first server on that VPS); and paid IRC hosting, which is now irrelevant to me since I use a free web-based bouncer-ish service on demand. I never read the scrollback anyway.
The takeaway here is that when considering build vs. buy, you have to consider what you’re getting out of it and what you want to be doing. If you want to build a business around software, pay someone to run the hardware for you. If you want to build a business around systems administration, run the hardware and make it easier for someone, including yourself, to maintain the hardware-software relationship. I think most cloud providers have done a great job of managing the hardware, so I, a software builder, love delegating that responsibility to someone else.
Your excellent comment only addresses cost and flexibility. In a mission-critical system, I also want control and minimal liabilities. If it’s their machine, I can’t have full control over the setup. I might not legally have full control over what happens to my data. There might also be liabilities they introduce that I could avoid in a bare-metal setup hosted locally or in a cage at one or more datacenters.
Agreeing with the parent poster, you have to know your business/personal use cases.
Are hosting costs a significant part of opex? Is it critical to the core competence of your company?
Do you have a predictable or stable pattern of usage that allows you to make purchasing decisions a year or two out?
How does time to market compare between your own infra vs the cloud? If you launch a new project, is it significantly faster to provision things in the cloud?
How does actual stability/security/reliability and needed stability/security/reliability compare between your infra and the cloud?
It’s a huge matrix of parameters, not a single “do or do not cloud” question. This also means that “hybrid cloud” is something a lot of companies end up doing, where running your own infrastructure for a lot of things makes sense, while keeping some stuff in the cloud makes sense.
I used to be a lot more jaded about the cloud (“other people’s computers”), but possibly “computers” is the wrong word to use here. “Infrastructure” is better. We use private/public infrastructure every day (things completely unrelated to IT, like roads and bridges) that we couldn’t afford to own or build on our own. For example, DNS infra is something that I would never host on my own (unless reliability doesn’t matter at all); it’s something where a global infra really shines.
On the other hand, cloud hosting at the VM/IaaS level is 1.5-2.5x as expensive as having a couple of racks in a datacenter and planning with a 3-5 year depreciation cycle. You have to run the numbers and decide the value of all the parameters you can think of. Just don’t think you can save on systems engineer HR costs by moving to the cloud ;)
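A minimal sketch of that “run the numbers” comparison follows; every figure in it is invented for illustration, the multiplier is just the 1.5-2.5x range from above, and systems-engineer salaries are deliberately left out of both sides since you pay those either way.

```python
# Minimal sketch of the racks-with-depreciation vs. IaaS comparison.
# Every figure here is invented for illustration; substitute real quotes.

rack_capex = 60_000.0     # servers + network gear for a couple of racks ($)
depreciation_years = 4    # somewhere in the 3-5 year cycle
colo_monthly = 2_000.0    # space, power, cooling, bandwidth ($/month)

own_infra_monthly = rack_capex / (depreciation_years * 12) + colo_monthly

# Claimed range: equivalent VM/IaaS capacity costs 1.5-2.5x this figure.
# Systems-engineer salaries are excluded from both sides on purpose,
# since you pay those whether or not you move to the cloud.
for multiplier in (1.5, 2.5):
    print(f"IaaS at {multiplier}x: ${multiplier * own_infra_monthly:,.0f}/month "
          f"vs. own racks at ${own_infra_monthly:,.0f}/month")
```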
tl;dr: have a solid business plan and know where you want to be, or even quantify the uncertainty and decide based on that
Is Jeff Atwood under the impression that server colocation isn’t something that has been done almost forever? Perhaps with all the VPS providers and cloud platforms, people have forgotten about this option.
Yeah, it’s surprising it’s so rarely mentioned. It’s the first thing I look at, since it lets customers get bare-metal benefits with the extra ability to customize hardware. Also, that option only outsources the things that datacenter companies can do better or at least spread the cost of: space, cooling, redundant power, redundant backbones, and so on.
As far as custom hardware goes, it can be useful for compatibility with specific OSes, too, since the drivers on specific machines are known to work better. A well-known example is how ThinkPads work well with Linux and the BSDs. A niche example is the high-security separation kernels that run on the PowerPC boards aerospace companies prefer for some reason. If one wants security and an evaluated configuration, then they gotta run it on an expensive PPC board. Colocation lets them do that.
I cannot believe I just read that Macs are leading the way in the colocation space. Spoiler alert: I colocate a not-Mac.
Funny thing is, I suspect the reason a substantial number of folks wanted to colo consumer hardware was specific to Macs: you can’t drop OS X onto arbitrary server hardware. Otherwise, yeah, you have lots more options.
Even if you end up going Full Cloud, it’s good from a career and skills perspective to be able to do colo/bare-metal stuff. A surprising (to me) number of developers who claim to be full-stack don’t know or haven’t actually:
Calculated power budgets for hardware
Calculated budgets/depreciation on hardware
Assembled workstations/servers from parts
Tested/replaced dying hard drives or RAM
Run ethernet cable through/along walls or plenum (yuck fiberglass)
Run electrical wiring
Cut/crimped/tested ethernet
Provisioned Wi-Fi access points
Set up VLANs, allocated DHCP leases, set up VPNs, etc.
I myself am not perfect in this either! I’ve never had to futz with vampire taps, nor run 3-phase power for big PDUs, nor set up and debug trunking or demarcs.
That said, a whole lot of the ability to go from not-working to working-well-enough comes from having at least some exposure to the above skills…and a lot of this blue-collar IT stuff will put you ahead of your friends with MacBooks in coffee shops, if for no other reason than your ability to give better estimates of where the break-even point for switching to hosted solutions lies.
Edit: in the spirit of not overselling how wonderful doing this is, I’d encourage folks with war stories to reply here with some cases where being the person doing those things really sucked.
I worked in the computer lab as a work-study student in college; during breaks they doubled our pay if we helped out with maintenance. I volunteered/was drafted to go hand over hand through false ceilings with a cord tied to my ankle so we could pull new ethernet runs into professors’ offices. Since my third career choice was Cat Burglar (6-year-old me, after animator and astronaut), it felt apropos.
Also, SPOILER ALERT: these comments are rendered on the very Mini-PC being discussed in the blog post.
A $5 Droplet should be plenty to serve comments as well.
We are living in this weird time where chat servers and clients both need GBs of RAM. Meanwhile, you could do nearly the same with a few MB (XMPP or IRC). I don’t think server-side search and file uploads are good reasons for such requirements.
The comments are run by Discourse, the company he started. So I doubt he means he’s dedicated one entire machine just to comments.
To me, using somebody else’s anything is a problem if I do not have any recourse to go back to my ‘own’.
Especially if that ‘somebody’ is a single company.
Because then I have just created a business lock-in.
Whether it is a sign-in/login service, a backup service, a compute tier, a traffic cache, or whatever.
I would always want to have:
a) the ability to shrink back to software/hardware capabilities that I own;
b) the ability to utilize, concurrently, multiple (and competing) providers for any 3rd-party service I use (see the sketch below).
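To make (b) slightly more concrete: one common way to keep that option open is to code against a thin interface of your own and treat each provider as a swappable adapter. A minimal sketch, with a hypothetical vendor stub rather than any real SDK calls:

```python
# Minimal sketch of provider-agnostic storage, so the application never
# talks to a vendor SDK directly. The vendor class is a stub, not a real API.
from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalDiskStore(BlobStore):
    """The 'shrink back to hardware I own' option."""
    def __init__(self, root: str = "/var/blobs"):
        self.root = root
    def put(self, key: str, data: bytes) -> None:
        with open(f"{self.root}/{key}", "wb") as f:
            f.write(data)
    def get(self, key: str) -> bytes:
        with open(f"{self.root}/{key}", "rb") as f:
            return f.read()

class VendorAStore(BlobStore):
    """Hypothetical cloud provider; real calls would go through its SDK."""
    def put(self, key: str, data: bytes) -> None:
        raise NotImplementedError("wire up the vendor SDK here")
    def get(self, key: str) -> bytes:
        raise NotImplementedError("wire up the vendor SDK here")

def archive_report(store: BlobStore, report: bytes) -> None:
    # Application code depends only on the interface, never on a vendor.
    store.put("latest-report.bin", report)
```

The local-disk implementation is the “shrink back to what I own” escape hatch; adding a second (competing) provider is another adapter, not a rewrite.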
Sure, if my startup model is designed to be ‘sold’, so to speak, or at least led by multiple partners with slightly different objectives (e.g. early exit), then investing early in anti-lock-in features may not make much sense.
But you have to. You have to use somebody else’s CPU, compiler, OS, and so on and so forth. It’s too much for one person to do everything.
In this context, I was suggesting avoiding business lock-in.
In essence, by making sure not to depend on the cloud infrastructure, or any other run-time service, of a single provider.
The mitigation is to use multiple providers (or none, if economically feasible).
Of course, I use somebody else’s CPU, OS, database and compiler.
But, in my view, those are technology lock-ins more so than business lock-ins.
And, with a few exceptions, they can (and, I suggest, should) be mitigated by simultaneously using multiple choices (of OSs, compilers, CPU architectures), and by using as much mature open source as possible.
Somewhat separately, I found Linus’s assertion intriguing that ARM does not have a chance in the server space unless it is a fully usable and accessible platform for a developer’s workstation. [1]
Not sure if he is right or not. The future will tell.
I realize that there is a world of difference between cloud infrastructure usage for development vs. deployment, but there is an intersection of sorts.
I am venturing to agree with Linus that a developer will prefer a workstation at their desk, most of the time.
–
Perhaps it is also my mindset as a developer that tells me to avoid run-time dependency on somebody else’s business…
Seeing what Oracle is doing to its long-time customers, the temporary change of Lerna’s open source license [2], and recent policy-motivated deplatforming incidents [3] (regardless of my personal beliefs) all make me, overall, very uncomfortable with business lock-ins.
–
[1] https://www.realworldtech.com/forum/?threadid=183440&curpostid=183486
[2] https://github.com/lerna/lerna/pull/1633
[3] https://reason.com/archives/2019/01/20/deplatforming
More folks owning hardware and even “keep[ing] the internet fun and weird” seem more general and interesting than the very specific idea of colo’ing consumer hardware. A couple of folks in the comments there mentioned that there are common sub-1U platforms (1/2-1/3U servers, blades), but 1U is the smallest unit you can usually colo, making the minimum spend a bit larger. At work we use Amazon, though I don’t think anyone really loves their dominance, and there may be differences in privacy protections between your own servers and a provider’s rented ones.