1. 50
  1.  

    1. 17

      Genuinely curious if the author ever used Heroku in its prime.

      There seem to be a number of these language/framework-specific clouds popping up:

      • Vercel
      • Deno
      • Laravel
      • shuttle.rs for Rust

      I am sure there are others I am not aware of.

      They seem fine for small projects. I have not seen them used for anything large yet.

      1. 5

        I’m still using Heroku and very much appreciating it. Excited for the new Fir generation stuff.

        1. 1

          Last I knew, Urbandictionary was still running on Heroku.

        2. 7

          Thanks for writing this. I keep meaning to write a similar blog post praising Deno and Deno Deploy. https://www.smallweb.run/ is a really cool extension of the ideas behind val.town and Deno Deploy, where they extend Deno's idea of no-build to no-deploy. I think it's a really nice successor to cgi-bin-style simplicity.

          I basically use smallweb for my hobby projects that don't need geo distribution and can (or must) run on my dev box, and switch to Deno Deploy when I need geo distribution.

          To make a new website you just mkdir and edit files; it's crazy!
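
          For a concrete picture, a new "app" is roughly one file shaped like the sketch below. The `export default { fetch }` form is what `deno serve` runs; whether smallweb expects exactly this layout is an assumption on my part.

          ```ts
          // main.ts: a complete site in one file, no build or deploy step.
          // Run locally with `deno serve main.ts`; whether smallweb expects
          // exactly this file layout is an assumption on my part.
          export default {
            fetch(req: Request): Response {
              const url = new URL(req.url);
              if (url.pathname === "/") {
                return new Response("<h1>hello from a one-file site</h1>", {
                  headers: { "content-type": "text/html; charset=utf-8" },
                });
              }
              return new Response("not found", { status: 404 });
            },
          };
          ```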

          1. 7

            I wonder what the author would think about building and deploying a Go app.

            1. 3

              A single binary is nice, but that’s still so far away from a production-ready deployment. Golang doesn’t include tools for moving the binary around or for running it under something like systemd. It also doesn’t include data storage and backup tools.

              Perhaps something missing from my post is the idea of which pieces of infra are abstracted away and whether the abstraction is good or user-friendly. You can wrap a Go binary in any constellation of other tools to get it production-ready, which gives you a lot of flexibility, but then you have to worry about those tools (they’re not abstracted away).

              Heroku can abstract away the infrastructure for you, but the abstraction leaks a little when you have to tune WEB_CONCURRENCY, dyno size, region, and various addons.

              When looking to the future, I think tools like Deno are interesting since although they force your app into a specific shape just like any framework or PaaS, the benefits you get and the user-friendliness are starting to sound pretty compelling to me.
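
              To make the “specific shape” concrete, here’s a minimal sketch of the handler form Deno pushes you toward (the port and response are arbitrary):

              ```ts
              // server.ts: essentially one request -> response function; the
              // same handler code runs locally and on a host like Deploy.
              Deno.serve({ port: 8000 }, (req: Request): Response => {
                const path = new URL(req.url).pathname;
                return new Response(`you asked for ${path}\n`);
              });
              ```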

              1. 4

                > Golang doesn’t include tools for moving the binary around or for running it under something like systemd. It also doesn’t include data storage and backup tools.

                For another perspective/experience:

                I run a fediverse instance that happens to be written in Go¹, which has moved machines a couple of times. I literally just stop the service, rsync it to the new machine, and start the service again, all using standard distro tools/packages.

                I don’t need flexibility, since I can install the distro that I already know, which also happens to be popular in containers. But it doesn’t seem like Docker actually abstracts it away; you still have to know Alpine things in addition to Docker things.

                1. Although that doesn’t really matter, could be C or Zig or anything that lets you create a statically linked binary, as we’ve done for ages. Moving Conduit was just as easy.

                1. 1

                  Out of curiosity, how do you transfer the state of your fediverse instance? State seems like the more cumbersome thing to transfer between servers—unless traffic is light enough that SQLite is sufficient?

                  1. 1

                    SQLite is more than sufficient at the scale targeted by GtS. So rsyncing the standard directories is all there is to it, same for all services I operate.

            2. 6

              Having integrated tools for a single central platform (Deno here) sure makes a lot of things easier for developers; good for the author! I’m sure PaaS providers like Heroku, Fly.io, (…) would happily fund the development of “$framework deploy” if that means more customers for them… :)

              At the same time, this also reads like “infrastructure is annoying, I don’t want to think about it”. This is fine and perfectly understandable for weekend projects, or bootstrapping a product. It will cost you more and more as you scale though, because integrated PaaS are billing you for all the stuff you don’t want to take care of. Then, you may need to consider the costs of outgrowing this very practical platform.

              1. 3

                There’s no reason it has to be commercial/proprietary or more expensive than traditional infrastructure. The extra cost only comes from the pay-as-you-go and value-based pricing models popularized by SaaS companies and cloud providers.

                I work as a platform engineer and I think our platform would be significantly cheaper to operate if we had a kind of self-hosted, open-source Deno Deploy to replace our EKS clusters. The devex is better, the security work required is lower, the infra overhead is minimized, less moving parts, etc.

                It’s just hard because existing applications assume they’ll run in the same Linux-based environment that they’ve used since the 90s, and it’s hard to remove some of those assumptions in exchange for easier/more secure infra.

              2. 5

                Is it possible to self-host Deno Deploy, or are you locked in to their platform?

                1. 7

                  As far as I can remember, anything you can do on Deno Deploy also works with the Deno binary. The KV database is backed by SQLite when running that way, but you can also self-host a FoundationDB-based version (not sure if this is the same one that powers Deploy).
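
                  As a rough sketch (the key and value are made up), the same few lines run against the local SQLite-backed store or the hosted one:

                  ```ts
                  // kv.ts: run with `deno run --unstable-kv kv.ts` locally, where
                  // the store is a SQLite file; on Deploy the same calls hit
                  // the hosted KV. Key and value here are just illustrative.
                  const kv = await Deno.openKv();

                  await kv.set(["visits", "home"], 1);
                  const entry = await kv.get<number>(["visits", "home"]);
                  console.log("stored value:", entry.value);

                  kv.close();
                  ```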

                  1. 4

                    You’re essentially locked in. You could build all the infra yourself and run your Deno code in the open-source runtime, but that’s lock-in to me.

                    The nice things about Deno IMO aren’t secret, patent-encumbered enterprise features; they’re just existing ideas brought together and executed well, something I believe the open source world is fully capable of doing :)

                  2. 4

                    I’m a Deno fan and I still find this reasoning a bit odd.

                    1. 2

                      I like some of Deno’s ideas but I don’t find this too convincing. For instance, I like capabilities and Deno running TS directly (apparently Node can do this now too), but it’s not enough for me to bet on a new-ish technology against Node. Deno KV is cool, but I usually need a database and would probably use SQLite or Postgres instead.
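
                      For anyone unfamiliar, the capabilities bit looks roughly like this (the URL is just an example):

                      ```ts
                      // fetch_page.ts: runs directly as TypeScript, no build step.
                      // With no flags, Deno prompts for (or denies) network access at
                      // the fetch call; `deno run --allow-net=example.com fetch_page.ts`
                      // grants access to that one host only.
                      const res = await fetch("https://example.com/");
                      console.log("status:", res.status, "bytes:", (await res.text()).length);
                      ```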

                      I don’t want to get locked into Deno either. It’s steered by a for-profit company and backed by Sequoia. They’re going to want their money back eventually.

                      1. 1

                        Yeah I certainly don’t intend this to be a “we should all use Deno” post. More of a “we should be incorporating all the good infra ideas from the last 2 decades”.

                        1. 1

                          That’s fair. I’m happy if Deno paves the way for these things to be implemented in other ecosystems.

                          Thanks for sharing your experience!

                      2. 1

                        > Computers are so ridiculously powerful these days that it’s so weird we still have CI/CD pipelines that take tens of minutes. … Maybe if everyone wasn’t busy building ad tech and chat bots, we’d get somewhere.

                        I feel like it’s nearly always been like this. We rarely get to optimize or simplify our systems just because smaller and lighter is faster, cheaper, and longer-lived. Usually we do it to remove outright blockage. When the pain is relieved, we stop. When the blockage is gone, some other problem is now a bigger deal—the feature that could make money doesn’t exist yet. So we grow insensitive to lesser pains. Then one day we see 1,000 cuts, not necessarily because we noticed them ourselves, but because we found a new (or old) alternative with eye-opening tradeoffs.