1. 92
  1. 40

    It’s the standard bearer of the “single process application”: the server that runs on its own, without relying on nine other sidecar servers to function.

    Now there’s an “SPA” I can get behind.

    1. 3

      the server that runs on its own, without relying on nine other sidecar servers to function.

      And what happens when that single server fails?

      1. 20

        You start a new instance, issue litestream restore s3://… or whatever, and go on your way… That’s the product.

        The post also outlines future enhancements that will do read-replicas, so only writes would be unavailable while the replacement server boots.

        1. 11

          Not every program needs All The Nines.

          1. 9

            What happens when your single Postgres server goes down? The whole service is down. Same thing.

            1. 10

              I have never used a single Postgres server in production. Have you?

              Postgres failovers have been 100% reliable for me, but that requires tradeoffs in terms of setup complexity and write latency. I am perfectly happy to take a slightly more complex setup involving ‘one more sidecar server’, thank you.

              1. 4

                Sure. Back in the 1990s, mind you. Losing a box and a Postgres instance is fairly rare, and most web apps don’t have really massive uptime requirements.

        2. 19

          SQLite is cool, but giving up the great type and integrity features of Postgres just to avoid running one more process seems like a bad trade-off for most of my applications.

          1. 13

            One thing I learned recently, after feeling the same pain moving from Postgres for a small CLI application, is that SQLite has CREATE TABLE ... STRICT for type checking. Could you elaborate on what integrity means here?

            More on STRICT here: https://www.sqlite.org/stricttables.html
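
            A minimal sketch of what that buys you (table and values made up for illustration):

            ```sql
            -- Without STRICT, SQLite would happily store the string below in an
            -- INTEGER column; with STRICT it raises a datatype-mismatch error.
            CREATE TABLE users (
                id   INTEGER PRIMARY KEY,
                name TEXT NOT NULL,
                age  INTEGER
            ) STRICT;

            INSERT INTO users (id, name, age) VALUES (1, 'alice', 30);            -- ok
            INSERT INTO users (id, name, age) VALUES (2, 'bob', 'not a number');  -- rejected
            ```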

            1. 6

              In PostgreSQL one can not only have foreign keys and basic check constraints (all present in one form or another in SQLite), but one can even define one’s own types (called “domains”) with complex structures and checks. See https://www.postgresql.org/docs/current/sql-createdomain.html
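
              A small sketch of what that looks like (the domain and table names here are invented):

              ```sql
              -- A domain bundles a base type with reusable checks; every column
              -- declared with it inherits the constraint automatically.
              CREATE DOMAIN us_zip AS TEXT
                  CHECK (VALUE ~ '^[0-9]{5}(-[0-9]{4})?$');

              CREATE TABLE addresses (
                  id  SERIAL PRIMARY KEY,
                  zip us_zip NOT NULL
              );

              INSERT INTO addresses (zip) VALUES ('90210');  -- ok
              INSERT INTO addresses (zip) VALUES ('nope');   -- rejected by the domain's CHECK
              ```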

              1. 2

                I haven’t tried this in a very long time, but I seem to recall that SQLite provides arbitrary triggers that can run to validate inputs. People were using this to enforce types before STRICT came along, and it should allow enforcing any criterion you can express in a Turing-complete language with access to the input data.
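
                Something like this, roughly (schema invented for illustration):

                ```sql
                -- Pre-STRICT type enforcement: abort any insert whose age value
                -- isn't actually stored as an integer.
                CREATE TABLE people (name TEXT, age);

                CREATE TRIGGER people_age_is_int
                BEFORE INSERT ON people
                WHEN typeof(NEW.age) != 'integer'
                BEGIN
                    SELECT RAISE(ABORT, 'age must be an integer');
                END;

                INSERT INTO people VALUES ('bob', 42);     -- ok
                INSERT INTO people VALUES ('eve', 'old');  -- aborts with the message above
                ```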

                1. 9

                  Triggers might be functionally equivalent, but PostgreSQL custom types (i.e. domains) are not only easier and more practical to use, they can also be safer, because the constraints are applied everywhere the type is used and the developer isn’t required to make sure the constraints have been updated everywhere. (Kind of like garbage collection vs manual memory management; both work, both have their issues, but the former might lead to fewer memory allocation issues.)

              2. 1

                Oooh, that’s new. I’ll have to use that.

            2. 15

              In my experience the biggest reason I would not use SQLite on the server is its poor concurrency. Even if you have a single-process, multi-threaded server and take advantage of SQLite’s unlock notification functionality, PostgreSQL will absolutely murder it on any sort of concurrent, write-heavy-ish workload. SQLite also uses a really counter-intuitive locking sequence (I don’t remember the details off the top of my head but can dig them up), so you often have to use BEGIN IMMEDIATE to avoid being drowned in deadlocks.
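
              Roughly, the issue (sketched here with a made-up accounts table) is that a deferred transaction takes a read lock first and only tries to upgrade to a write lock at its first write statement; if another writer got in between, SQLite bails out with SQLITE_BUSY instead of queueing:

              ```sql
              -- Deferred (the default): read lock now, write lock later; the UPDATE
              -- can fail with SQLITE_BUSY if another writer got in first, and the
              -- whole transaction has to be retried.
              BEGIN;
              SELECT balance FROM accounts WHERE id = 1;
              UPDATE accounts SET balance = balance - 10 WHERE id = 1;
              COMMIT;

              -- IMMEDIATE: take the write lock up front, so the transaction either
              -- starts cleanly or blocks/fails before any work has been done.
              BEGIN IMMEDIATE;
              SELECT balance FROM accounts WHERE id = 1;
              UPDATE accounts SET balance = balance - 10 WHERE id = 1;
              COMMIT;
              ```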

              1. 18

                If you’ve been reading the recent wave of SQLite-on-the-server posts from a couple of companies, the thing they have in common is low write loads.

                That’s surprisingly common in more service-y setups. For a single giant codebase, sure, SQLite isn’t the right fit, but once you’re running more focused services I think it would be rare not to have at least a few that are write-light or even effectively read-only. I’ve been working through this myself lately and am coming around to the idea of using SQLite for those cases, simply because of the reduced number of moving parts to worry about.

                1. 4

                  I think most (but not all) write-intensive SQL use-cases are for metrics or analytical tasks. (I’m assuming that if one needs persistent, but not critically persistent, state such as sessions, one would use a better-suited system like Redis or some other generic KV store.)

                  In such scenarios I think nothing can beat a purpose-built system like ClickHouse in terms of raw I/O, concurrency, and throughput.

                  Thus, setting these aside, I don’t think write concurrency plays a huge role until the application becomes very (as in viral) successful.

                  1. 5

                    I don’t really think this is true. CRUD in general is often write heavy. It just depends on the details of the domain. Keeping state of an MMORPG? Very much write and read heavy.

                    1. 1

                      Keeping state of an MMORPG? Very much write and read heavy.

                      Even with classical SQL databases like PostgreSQL or MySQL, write-heavy applications tend to be programmed in such a manner that lock contention is reduced as much as possible; there might be multiple writers, but the code is designed so that they don’t touch the same rows.

                      Thus at that point one doesn’t actually use the DB for its SQL capabilities, but instead more like a document or KV store; therefore perhaps a proper KV store might actually be the best solution…

                      1. 2

                        Why isn’t a SQL database a “proper” KV store? How exactly do you think SQL databases store rows on disk? It’s a KV store.

                        If 99% of your write workload is non-conflicting, why does that imply you should absolutely fuck yourself for the remaining 1% by using a “proper” KV store with inferior transaction support and inferior query ergonomics? Or worse, eventual consistency?

                        1. 1

                          Why isn’t a SQL database a “proper” KV store? How exactly do you think SQL databases store rows on disk? It’s a KV store.

                          Indeed, a relational database can be seen as a KV store, where the key is the primary key and the value is the whole row, but in certain cases a plain, simple KV store (like LMDB, which does include transactions) is perhaps more than enough (and simpler)…

                          1. 3

                            Why is it simpler? If I literally never need to look up anything in my entire application by anything other than the primary key, maybe. But the instant I want to search for a value by some secondary key, I’m stuck manually implementing secondary indexes, and maintaining those indexes correctly as values are added, removed, and changed. In SQLite I type CREATE INDEX and I’m done. That is far simpler.
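
                            For example (hypothetical schema):

                            ```sql
                            CREATE TABLE users (
                                id    INTEGER PRIMARY KEY,
                                email TEXT NOT NULL
                            );

                            -- One statement; the engine maintains the index through every
                            -- insert, update, and delete from here on.
                            CREATE INDEX users_by_email ON users (email);

                            SELECT id FROM users WHERE email = 'alice@example.com';
                            ```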

                        2. 2

                          Thus at that point one doesn’t actually use the DB for its SQL capabilities, but instead more like a document or KV store; therefore perhaps a proper KV store might actually be the best solution…

                          This is a strange statement. Avoiding lock contention is a fact of life with all databases. How does it somehow make it unnatural for SQL?

                    2. 1

                      How write-heavy are we talking? And are you taking into account SQLite’s WAL journalling mode?
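
                      For anyone following along, WAL is a one-pragma switch (a sketch; the setting persists in the database file once applied):

                      ```sql
                      -- In WAL mode readers don't block the writer and the writer doesn't
                      -- block readers, which removes much of the rollback-journal contention.
                      PRAGMA journal_mode = WAL;

                      -- Optional: fewer fsyncs at a small durability cost on power loss.
                      PRAGMA synchronous = NORMAL;
                      ```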

                    3. 7

                      To be clear, I’m a huge fan of SQLite. I use it everywhere: to analyze data locally (I load a CSV into the sqlite3 CLI and start adding indexes and running queries), as an application file format, and as a data store for small web apps.

                      However, I don’t think it’s appropriate for every use. For example, I use Postgres as the backend for my mail server. With SQLite I would have to run postfix and dovecot at least in the same unix group (and possibly in the same SELinux context). With Postgres I can have my aliases stored in a different nspawn container! I can also hide the table containing user passwords from postfix, but authorise reads from dovecot.

                      All I’m saying is: SQLite is great if you don’t care about privilege separation.
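
                      A rough sketch of that separation in Postgres terms (role and table names invented):

                      ```sql
                      CREATE TABLE aliases (
                          alias       TEXT PRIMARY KEY,
                          destination TEXT NOT NULL
                      );
                      CREATE TABLE user_passwords (
                          username      TEXT PRIMARY KEY,
                          password_hash TEXT NOT NULL
                      );

                      CREATE ROLE postfix LOGIN;
                      CREATE ROLE dovecot LOGIN;

                      GRANT SELECT ON aliases TO postfix, dovecot;
                      -- Only dovecot may read credentials; postfix gets a permission
                      -- error if it even tries.
                      GRANT SELECT ON user_passwords TO dovecot;
                      ```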

                      1. 6

                        It’s also great for deploying read-only websites. My photo gallery is published as an S3 bucket full of images and a SQLite file; the web server is a Clojure application that has a local copy of the DB. Currently the deployment script works by downloading the DB from S3 and shoving it onto the web server’s file system, but there’s no reason the web server couldn’t just periodically fetch the DB for itself.

                        1. 3

                          This might be a fun option for replicating your database to the web server. https://litestream.io/

                          1. 1

                            Possibly, although rsync has served me well enough so far. If I had more traffic, I might be concerned about requests hitting a partially written SQLite file, but then I could just switch to atomic file moves or a blue-green pair of DB files. I haven’t looked into Litestream, but whatever it is, it’s probably more complicated than that.

                            1. 1

                              It almost isn’t. One could think of Litestream as rsync for SQLite files, with the option of continuous replication.

                              1. 2

                                Mmm. But rsync is everywhere already, which automatically makes it simpler. :-)

                                If I needed continuous uptime, the application code to do periodic fetches and swap-outs on the DB would probably take just a couple hours to write.

                                1. 3

                                  I’m not trying to convince you to switch. I’m just saying Litestream is probably a good replacement if you ever find yourself needing to go beyond the laziness of rsync.

                                  If I needed continuous uptime, the application code to do periodic fetches and swap-outs on the DB would probably take just a couple hours to write.

                                  If you find yourself thinking of actually doing this, I almost guarantee Litestream would take less time and do it in a more foolproof way.

                          2. 2

                            That’s a nice setup that I’m going to keep in the back of my mind.

                            1. 1

                              One other aspect is that when I sync to the server, this is the process:

                              1. Upload all new image files
                              2. Upload updated DB
                              3. Delete all stale image files

                              That way there’s very little chance of a dangling reference – even if the process is interrupted in the middle.

                              (Images are all content-addressed as well.)

                          3. 2

                            As a fan of fly.io and Litestream, it’s nice to know those good engineers are working together. Also, the one-writer, multi-reader model works well for most applications, without pre-rendering the entire site to static HTML.