My personal progression for systems is to start with something like Heroku, and only when that starts being insufficient, move to something like containers-as-a-service (ECS, etc.), and when that in turn starts being insufficient, move towards something like Nomad on EC2.
There’s a lot to be said for using Packer to make AMIs and rolling them in an ASG too.
Agree - I feel like prescribing ECS as an entry-level tier is really overkill if you’re just a small team trying to find product-market fit. Also, it’s not like Heroku is in any way a dead end; there’s very little lock-in (except for maybe an addiction to convenience) so if you ever want to migrate to Docker you just add a few more lines to your Procfile and rename it to Dockerfile.
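To make the "very little lock-in" point concrete, here's a rough sketch of what that migration looks like, assuming a hypothetical Python app served by gunicorn (the app name and port are made up). The one-line Procfile entry:

```
web: gunicorn app:app
```

becomes roughly this Dockerfile:

```dockerfile
# Minimal sketch: the same process definition as the Procfile above,
# plus the build steps Heroku's buildpack used to do for you.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# The old Procfile "web:" line becomes the container's CMD
CMD ["gunicorn", "app:app", "--bind", "0.0.0.0:8000"]
```

It's a little more than a rename in practice (you take over the build steps the buildpack handled), but the process model carries over directly.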
Last time I used ECS, it still required quite a bit of configuration to get up and running (configuring an ingress, availability zones, RDS, ECR, setting up CI, and so on). Also, there were a few common use cases that were surprisingly difficult to set up, such as running a one-off command on deploy (e.g. for triggering database migrations) or getting logs shipped to Kibana, things which can be done with literally a single line of config or a few clicks of the mouse on Heroku.
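For reference, the single line of config being alluded to is Heroku's release phase: a `release:` entry in the Procfile runs before each new version is promoted. The migration command here is hypothetical:

```
release: python manage.py migrate
web: gunicorn app:app
```

If the release command fails, the deploy is aborted, which is exactly the behavior you'd otherwise have to wire up yourself on ECS.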
TBH I'd rather run on regular compute nodes on something like DigitalOcean and deploy with Ansible than use ECS. Kubernetes and ECS feel like solutions to the problems of managing a compute fleet; most people don't have a compute fleet, but by using Kubernetes they take on the same level of complexity anyway.
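A minimal sketch of what that Ansible deploy can look like; the host group, repo URL, paths, and service name are all made up for illustration:

```yaml
# deploy.yml -- push a tagged release to plain compute nodes
- hosts: web
  become: true
  tasks:
    - name: Pull the requested release
      git:
        repo: https://example.com/acme/app.git
        dest: /opt/app
        version: "{{ release_tag }}"
      notify: restart app

    - name: Install dependencies into the app's virtualenv
      pip:
        requirements: /opt/app/requirements.txt
        virtualenv: /opt/app/venv

  handlers:
    - name: restart app
      systemd:
        name: app
        state: restarted
```

Run with something like `ansible-playbook deploy.yml -e release_tag=v1.2.3`. No control plane to operate, just SSH and the nodes themselves.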
My biggest difficulty with Heroku has been the pricing; at least for our use cases it felt pretty intense (but maybe I wasn't properly comparing it with our existing cloud bill).
I mean I think the service is truly amazing but it’s a tough sell sometimes
Agreed - Heroku is really expensive once you go beyond the starter tiers. ECS and Kubernetes may be cheaper on paper, but what you're really doing is trading hosting fees for engineering hours. At a certain point that trade makes perfect sense (when you've got enough manpower), but I've seen several instances of people making the switch without realizing how costly it would be to do their own SRE.
I’ve always been surprised that the Packer -> AMI -> ASG approach isn’t more popular. (I mean, I get why it isn’t.) It can take you really really far. There was even a (short-lived) startup around the idea, Skyliner. It’s not very efficient, from a bin-packing/k8s-scheduling POV, but efficiency is not a high priority until you are at a scale where margins cause you real pain. So we’re in a place today where it is under-theorized, under-documented, and under-tooled, as an ops discipline, compared to more complex alternatives. Too bad, it’s like the best medium/middle-sized scale approach I know about.
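For anyone who hasn't seen the Packer -> AMI -> ASG flow, here's a rough sketch of the Packer half in HCL; the region, base-image filter, and provisioning script are assumptions, not a prescription:

```hcl
# app.pkr.hcl -- bake the application into an AMI
source "amazon-ebs" "app" {
  region        = "us-east-1"
  instance_type = "t3.small"
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/*ubuntu-jammy-22.04-amd64-server-*"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"] # Canonical
    most_recent = true
  }
  ssh_username = "ubuntu"
  ami_name     = "app-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.app"]

  # install_app.sh (hypothetical) installs the app and its systemd unit
  provisioner "shell" {
    script = "install_app.sh"
  }
}
```

The deploy side is then just pointing the ASG's launch template at the new AMI and triggering an instance refresh: immutable images, trivially rollback-able, no scheduler involved.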
This is written by a Kubernetes developer, and the argument that one should never use Kubernetes isn't considered. Nor is the possibility that a few bare-metal servers (without containers) might be the most cost-effective option for, what I suspect is, 95% of companies.
Even if you’re big, sometimes. I was building a new service at a big cloud provider recently, and we ended up doing basic Linux VMs, no containers. Our service involved a number of stateful moving parts that had to interact with each other on the same system, and containerization would have made that far more difficult than straightforward systemd services.
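For comparison, the "straightforward systemd service" version of running a stateful component really is about this much; the paths, user, and service name here are invented:

```ini
# /etc/systemd/system/app.service -- minimal sketch of a plain service unit
[Unit]
Description=App service
After=network.target

[Service]
User=app
ExecStart=/opt/app/bin/app --config /etc/app/config.toml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now app`. Co-located stateful parts can share the filesystem and talk over localhost directly, which is exactly what containerization makes awkward.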
I think it depends on the team, and what you're building. At this point, spinning up a new EKS cluster via Terraform is easy enough that I'd likely be going that route if I were in charge of tech at a new startup. I couldn't imagine giving up the tooling ecosystem around it, especially around observability and deployments. Learning 'simpler' stacks would take me about the same amount of time as getting a solid development-grade EKS cluster up and running.
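To gesture at how little Terraform that takes nowadays, here's a rough sketch using the community terraform-aws-modules/eks module; the cluster name, versions, node sizes, and the VPC reference are assumptions:

```hcl
# main.tf -- minimal dev-grade EKS cluster via the community module
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "startup-dev"
  cluster_version = "1.29"

  # Assumes a VPC defined elsewhere (e.g. the terraform-aws-modules/vpc module)
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }
}
```

That's most of the cluster itself; the real ongoing work is everything you layer on top (ingress, observability, CD), which is also where the ecosystem argument cuts both ways.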