1. 46

  2. 9

    While using an embedded database in this way is possible, you should keep in mind that it could mean that your workload will not be able to migrate to a client/server architecture at a later point due to the latency costs.

    1. 5

      What I still find weird in modern architectures (aka microservices) is that we discourage extra round-trips to the DB because of latency, yet think it is totally fine to call other services via HTTP (which then internally make their own DB calls). It just makes no sense.

      1. 6

        Microservices are not a “modern” architecture; they’re an organizational scaling technique for when you have hundreds of developers and one app. Each team is supposed to more or less treat other teams as external services, and so has to consider that those external services are a source of latency.

        1. 3

          In my experience, they are a source of latency in every sense of the term: technical latency when one service communicates with another over the network, but also organisational latency when one team relies on another :) This is almost unavoidable, though. There are pros and cons to having a large team.

        2. 1

          Most places that care about DB hops will also use some RPC to speed up reads and writes. Also, sending data over the wire to boxes through an already established connection in a connection pool isn’t that expensive.

        3. 2

          Small “queries” to network block storage like AWS EBS, Google Cloud Platform Persistent Disk, or DigitalOcean Block Storage (based on Ceph) are pretty efficient despite the network round trip. It would be interesting to analyze why this works quite well for block storage but not for SQL databases.

          1. 2

            Are there other examples of SQLite being used as a website backend database in production? What kind of scale could you reach with this approach? And what would be the limiting resource?

            1. 9

              Expensify was based exclusively on sqlite for a long time, then they created a whole distributed database thing on top of it.

              1. 6

                Clojars used SQLite for a good 10 years or so, only recently moving away to Postgres for ease of redeployment and disaster recovery. The asset serving was just static assets, but the website and deployments ran against SQLite pretty well.

                1. 3

                  If I remember correctly, the trouble that Clojars ran into had more to do with the quality of the JVM-based bindings to SQLite than with SQLite itself, at least during the portion of time that I was involved with the project.

                  1. 2

                    Yeah, looking back at the issues, “pretty well” is maybe a little generous. There were definitely settings available later on which would have helped with the locking issues we were facing.

                2. 4

                  I can’t remember which one, but at least one of the well-funded DynamoDB-style distributed database products from the mid-2010s used it as the storage layer.

                  So all the novel stuff that was being done with data was the communication and synchronisation over the network, and then for persistence on individual nodes they used sqlite instead of reinventing the wheel.

                  1. 6

                    That was FoundationDB: purchased by Apple in 2013, gutted, then returned as open source in 2018. I’m a bit annoyed, because it was on track to be CockroachDB half a decade earlier, and it was taken off the market with very little warning.

                    1. 1

                      Thanks!

                  2. 3

                    You will probably get really fast performance for read-only operations. The overhead of the client/server model and the network stack can be more than 10x that of function calls within the same address space. The only real limitation might be the single server, since you cannot really scale SQLite efficiently beyond a single system. But by the time you reach that scale, you usually need much more than SQLite.
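                    As a rough illustration (not a rigorous benchmark, and the numbers are machine-dependent), here is a sketch using Python’s stdlib sqlite3 module that times many point reads against an in-process database — each lookup stays in the same address space, with no network round trip:

                    ```python
                    import sqlite3
                    import time

                    # In-process SQLite: queries never leave the address space,
                    # so there is no client/server or network-stack overhead.
                    conn = sqlite3.connect(":memory:")
                    conn.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
                    conn.executemany("INSERT INTO kv VALUES (?, ?)",
                                     [(i, str(i)) for i in range(10_000)])

                    start = time.perf_counter()
                    for i in range(10_000):
                        conn.execute("SELECT v FROM kv WHERE k = ?", (i,)).fetchone()
                    elapsed = time.perf_counter() - start
                    print(f"{elapsed / 10_000 * 1e6:.2f} microseconds per read")
                    ```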

                    1. 2

                      The sqlite website claims to run entirely on sqlite.

                      They also have this page, though most of those aren’t websites: https://sqlite.com/mostdeployed.html

                    2. 2

                      I believe you still need to use the same connection for all 200 queries. Some web frameworks open and close a new connection for each query, making this technique less efficient.

                      1. 2

                        Yes. You should also keep the compiled query (sqlite3_stmt) object and reuse it, instead of compiling SQL every time. Do these things and SQLite is hella fast.
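                        A minimal sketch of both points in Python, assuming the stdlib sqlite3 module (which keeps a per-connection cache of compiled statements rather than exposing sqlite3_stmt directly):

                        ```python
                        import sqlite3

                        # One connection reused for every query; opening and closing
                        # a connection per query would repeat the setup cost 200 times.
                        conn = sqlite3.connect(":memory:")
                        conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
                        conn.executemany("INSERT INTO items (name) VALUES (?)",
                                         [(f"item-{i}",) for i in range(200)])

                        # Passing the identical SQL string each time lets the module's
                        # per-connection statement cache reuse the compiled statement --
                        # the Python-level analogue of holding on to a sqlite3_stmt.
                        query = "SELECT name FROM items WHERE id = ?"
                        names = [conn.execute(query, (i,)).fetchone()[0] for i in range(1, 201)]
                        print(len(names))  # 200
                        ```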

                      2. 2

                        200 SQL statements per webpage is excessive for client/server database engines like MySQL, PostgreSQL, or SQL Server.

                        Laughs in the ~5000 SQL statements executed on MySQL per user search at work.