Heroku was at least a decade ahead of its time as far as developer experience goes.
For sure. A testament to this fact is that my very first side project ever was deployed to Heroku, and 5+ years later it’s still rock-solid and my highest uptime app. It’s so hard to have abstractions that are leak-proof enough for novices yet still resilient and flexible.
Using Heroku made me finally understand that sometimes paying for the right software can pay huge dividends in time savings. Running servers well is not trivial!
I was recently cleaning up old accounts I no longer use, and my Heroku account was among them. I logged in, and saw I still had an app running. It was set up in 2012.
Heroku was way, way ahead of its time.
I gotta say that a big difficulty with a completely self-hosted setup is just the drudgery of setting up clean A/B deploys and running scripts cleanly, as well as having nice dashboards to see the status of deploys.
Lots of this is mostly fixed costs, and it’s not super duper hard, but Heroku offers it out of the box, and for tinier projects it’s great, cuz you can actually get to work super quickly!
But what would be awesome is a very straightforward “LAMP Stack 2.0” that works well with Git-based deploy cycles. I’m sure there are Heroku-likes that I can run myself, and I’d love to see a rundown of the state of the art in this space.
For static sites, it’s very nice to deploy to Netlify because it’s all Git-based, with automatic deploy previews, and you can also do Lambda functions with Netlify for dynamic stuff, but performance suffers if you try to run a dynamic site off of it, because of the cold start times.
Agree - Netlify is really the Heroku of static sites. If you can’t be bothered to set up the Git integration they even have a “drag and drop” file uploader - it feels like I’m back in 1998 again and updating my website with CuteFTP!
I love the Heroku experience, it’s a shame what’s happening to them :/
I’ve used Dokku a few times to achieve a similar deployment style and found it to be pretty good!
The main problem with Dokku is that it gets the first 90% of Heroku (getting apps into the cloud) but not the next 90% of Heroku (gracefully handling your app server catching on fire).
Yeah, totally. It’s definitely a nicer experience than building and running Docker containers yourself, but by no means a replacement :(
Yep! We use Dokku at work and it’s been a really powerful and lovely system to get to use. I’m thinking of refactoring my own servers to use it.
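For reference, the basic Dokku flow is roughly Heroku-shaped. A minimal sketch, assuming a stock Dokku install on a host you control; the app name, hostname, and branch below are placeholders (older Dokku versions deploy the master branch by default):

    # on the server: create the app (run once)
    dokku apps:create myapp

    # locally: add the Dokku host as a git remote, then push to deploy
    git remote add dokku dokku@dokku.example.com:myapp
    git push dokku main

Each push builds the app (via Heroku-style buildpacks, or a Dockerfile if one is present) and swaps in the new container.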
I feel certain I’m missing something… I never cared for Heroku. It always seemed slow, made me think I had to jump through weird hoops, and never seemed to work very well for anything that needed more horsepower than github/gitlab’s “pages” type services. And their pricing always had too much uncertainty for me.
Granted, I’m old, and I was old before heroku became a thing.
But ever since bitbucket and github grew webhooks, I lost interest in figuring out Heroku.
What am I missing? Am I just a grouch, or is there some magical thing I don’t see? Am I the jerk shaking my fist at dropbox, saying an FTP script is really just the same? Or am I CmdrTaco saying “No wireless. Less space than a Nomad. Lame.”? Or is it just lame?
By letting and making developers only care about the code they develop, not anything else, they empower productivity because you just can’t yak shave or bikeshed your infra or deployment process.
Am I the jerk shaking my fist at dropbox, saying an FTP script is really just the same?
Yes, and you’d be very late to it.
That’s what I was referencing :-)
I think I’m missing this, though:
By letting and making developers only care about the code they develop, not anything else, they empower productivity because you just can’t yak shave or bikeshed your infra or deployment process.
What was it about Heroku that enabled that in some distinctive way? I think I have that with gitlab pages for my static stuff and with linode for my dynamic stuff. I just push my code, and it deploys. And it’s been that way for a really long time…
I’m really not being facetious as I ask what I’m missing. Heroku’s developer experience, for me, has seemed worse than Linode or Digital Ocean. (I remember it being better than Joyent back in the day, but that’s not saying much.)
If you had this set up on your Linode or whatever, it’s probably because someone was inspired by the Heroku development flow and copied it to make it work on Linode. I suppose it’s possible something like this wired into git existed before Heroku, but if so it was pretty obscure given that Heroku is older than GitHub, and most people had never heard of git before GitHub.
(disclaimer: former Heroku employee here)
was inspired by the Heroku development flow and copied it to make it work on Linode
Only very indirectly, if so. I never had much exposure to Heroku, so I didn’t directly copy it. But push->deploy seemed like good horse sense to me. I started it with mercurial and only “made it so” with git about 4 years ago.
Since you’re a former Heroku employee, though… what did you see as your distinctive advantage? Was it just the binding between a release in source control and a deployment into production, or was it something else?
As a frequent customer, it was just kind of the predictability. At any point within the last decade or so, it was about three steps to go from a working Rails app locally to a working public Rails app on Heroku: create the app, push, migrate the auto-provisioned Postgres. Need to back up your database? Two commands (capture and download). Need Redis? Click some buttons or run one command. For a very significant subset of Rails apps, even today it’s just that few steps.
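For concreteness, that flow is roughly the following. This is a sketch from memory: the exact migrate invocation and add-on plan names have varied over the years, and the default push branch used to be master.

    # create the app
    heroku create
    # push to deploy
    git push heroku main
    # run migrations against the auto-provisioned Postgres
    heroku run rails db:migrate

    # back up the database: capture a backup, then download it
    heroku pg:backups:capture
    heroku pg:backups:download

    # add Redis (plan name here is illustrative)
    heroku addons:create heroku-redis:mini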
I don’t really know anything about the setup you’re referring to, so I can only compare it to what I personally had used prior to Heroku from 2004 to 2008, which was absolutely miserable. For the most part everything I deployed was completely manually provisioned; the closest to working automated deploys I ever got was using capistrano, which constantly broke.
Without knowing more about the timeline of the system you’re referring to, I have a strong suspicion it was indirectly inspired by Heroku. It seems obvious in retrospect, but as far as I know in 2008 the only extant push->deploy pipelines were very clunky and fragile buildbot installs that took days or weeks to set up.
The whole idea that a single VCS revision should correspond 1:1 with an immutable deployment artifact was probably the most fundamental breakthrough, but nearly everything in https://www.12factor.net/ was first introduced to me via learning about it while deploying to Heroku. (The sole exception being the bit about the process model of concurrency, which is absolutely not a good general principle and only makes sense in the context of certain scripting-language runtimes.)
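A way to picture the 1:1 revision-to-artifact idea outside of Heroku: name the build artifact after the exact commit it came from, and release only by pointing the runtime at one of those names. A minimal sketch using git and Docker, with the image and registry names as placeholders:

    # build an immutable image named after the exact commit being deployed
    REV=$(git rev-parse --short HEAD)
    docker build -t registry.example.com/myapp:$REV .
    docker push registry.example.com/myapp:$REV

    # a release (or rollback) is just pointing production at some $REV's image;
    # artifacts are never rebuilt or mutated in place

This is roughly what Heroku’s slug-and-release model does for you implicitly on every git push.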
I was building out what we were using 2011-2013ish. So it seems likely that I was being influenced by people who knew Heroku even though it wasn’t really on my radar.
For us, it was an outgrowth of migrating from svn to hg. Prior to that, we had automated builds using tinderbox, but our stuff only got “deployed” by someone running an installer, and there were no internet-facing instances of our software.
By letting and making developers only care about the code they develop
This was exactly why I never really liked the idea of it, even though the tech powering it always sounded really interesting. I think it’s important to have contextual and environmental understanding of whatever you’re doing, and although I don’t like some of the architectural excesses or cultural elements of “DevOps”, I think having people know enough about what’s under the hood/behind the curtain to be aware of the operational implications of what they’re doing is crucial to building efficient systems that don’t throw away resources simply because the developer doesn’t care (and has been encouraged not to care) about anything but the code.
I’ve seen plenty of developers do exactly that: not even bothering to try to optimise poorly-performing systems because “let’s just throw another/bigger dyno at it, look how easy it is”, and justifying it aggressively with Lean Startup quotes, apparently ignorant of the flip side of “developer productivity at all costs” being “cloud providers influencing ‘culture’ to maximize their profits at the expense of the environment”. And I’ve seen it more on teams using Heroku than anywhere else, because of the opaque, low-granularity “dyno” resource division.
It could be that you can granularize it much more now than you could a few years ago (I haven’t looked at it in a while), and maybe you could even then if you dug really deep into the documentation. But that was how it was, and how developers used (and were encouraged to use) it, and to me it always seemed like the inability to squeeze every last drop of performance out of each unit was almost a design feature.
Between this, Brandur’s post from a couple of days ago, and a bundler/deploy issue I fought with for a few hours spread over a week, I’m kind of sad that it feels like Heroku’s over. I’ve got dozens of stupid little apps that’ve been running for a decade, and nothing else comes close. I’ve looked at the AWS and Azure knockoffs and they seem like so much work. Glitch is neat, but it seems like it’s only for “stupid little” apps, nothing with a database or longevity.
The closest I found is https://platform.sh/
They really got what made the Heroku DX.
There are lots of “let’s make infrastructure hosting simple again” companies (Render, Railway, etc), but they always seem to me to be in quite precarious situations: they can be great for small scale use, but as startups grow they will want access to the complete suite of services offered by major cloud providers. Their best customers are constantly leaving.
Disclosure: I’m obviously biased; I’m one of the founders of Encore, where we’re working on decoupling the developer experience from the cloud provider. We’re bringing a Heroku-level (or rather, even better) developer experience to your own cloud account.
It always seemed strange to me to couple infrastructure with developer experience, when the incentives aren’t aligned at all. Just like gym memberships, cloud providers have little incentive to tell you how much you could be saving by setting up something simpler that does the same trick. By separating the developer experience from the hosting provider this problem goes away.
Heroku was magic, totally agree. First time I used it I was like… WOW, why doesn’t everything work like this?
Oh yeah. There was also Google AppEngine – the original one which they killed a couple years ago – which was weirder, more married to specific custom stacks, but just as good at “deploy and forget”, and with decent deployment experience too.
Heroku is great. From what I understand, on the business side they had a real problem: it runs on top of AWS. At enterprise scale, where SaaS companies usually make all their money, it’s cheaper to hire some ops engineers to manage your deployments directly on AWS than it is to pay Heroku to do it for you. So Salesforce couldn’t figure out how to make it make money, and it never became a big driver of their platform integrations either.
I’ve used Clever Cloud for years, and I’m pretty happy with them. It is not as integrated as Netlify (no built-in A/B setup, deploy previews, etc), but it has saved my bacon multiple times when it comes to handling production loads.