1. 21
  1.  

  2. 4

    One question I have about this pattern is how to handle the data. If I keep it in SQLite, then it’s not in the right format to check into Git. If I keep it as CSVs or JSON, then SQLite just becomes a boring implementation detail that doesn’t add much versus other ways of searching and indexing.
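    One way to get both is to treat the CSV/JSON in Git as the source of truth and rebuild the SQLite file from it as a disposable build artifact. A minimal sketch, with a hypothetical two-column dataset:

    ```python
    # Build-time step: the CSV stays in Git; the SQLite file is a build
    # artifact. The schema and column names here are hypothetical.
    import csv
    import io
    import sqlite3

    CSV_TEXT = "id,name\n1,alpha\n2,beta\n"  # stands in for a file tracked in Git

    def build_db(csv_file, conn):
        conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
        rows = [(r["id"], r["name"]) for r in csv.DictReader(csv_file)]
        conn.executemany("INSERT INTO items VALUES (?, ?)", rows)
        conn.commit()

    conn = sqlite3.connect(":memory:")  # in practice: a file shipped with the app
    build_db(io.StringIO(CSV_TEXT), conn)
    ```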

    1. 4

      Yeah, I’m not sure what the point of the SQLite is if your data is read-only. You’ll need it in separate files for version control, so any indexes, denormalisation, or views you need could all be built at build time, instead of compiling all your data into SQLite and doing those things at runtime using SQLite.
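      A sketch of that build-time step (schema and names hypothetical): the indexes, a denormalised view, and even query-planner statistics can all be baked into the file before deployment, leaving the runtime with pure read-only queries.

      ```python
      # Build-time sketch: do the indexing and denormalisation once, before
      # shipping the file. Table and column names are hypothetical.
      import sqlite3

      conn = sqlite3.connect(":memory:")  # in practice: the baked file on disk
      conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, category TEXT)")
      conn.executemany(
          "INSERT INTO items VALUES (?, ?)",
          [(1, "museum"), (2, "museum"), (3, "library")],
      )
      # Build-time extras that would otherwise happen at runtime:
      conn.execute("CREATE INDEX idx_items_category ON items(category)")
      conn.execute(
          "CREATE VIEW category_counts AS "
          "SELECT category, COUNT(*) AS n FROM items GROUP BY category"
      )
      conn.execute("ANALYZE")  # bake query-planner statistics in as well
      conn.commit()
      ```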

      1. 5

        SQLite gives you the ability to easily run server-side SQL queries against it. That’s useful even against small amounts of data, and super-useful once your data grows over 100MB or so.

        I often use this for search (since SQLite has great FTS built in) - eg https://datasette.io/-/beta?q=fts which currently searches over 1500 items.
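        The FTS side can be sketched with SQLite’s built-in FTS5 module; this uses a toy in-memory table rather than Datasette’s actual schema:

        ```python
        # Sketch of SQLite full-text search via FTS5 (toy data, not the
        # real Datasette schema). MATCH is case-insensitive by default.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
        conn.executemany(
            "INSERT INTO docs VALUES (?, ?)",
            [
                ("Baked data", "Bundle a read-only SQLite file with your app"),
                ("Other post", "Nothing relevant here"),
            ],
        )
        hits = conn.execute(
            "SELECT title FROM docs WHERE docs MATCH ? ORDER BY rank", ("sqlite",)
        ).fetchall()
        ```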

        I also use it for things like the “Use location” button on https://www.niche-museums.com/ - only just over 100 items at the moment but I hope to continue growing that for years to come.

        1. 2

          I suppose if you still need or want to rely on SQL itself as the language for expressing queries, then you would want this. Otherwise, even if you have an in-memory data structure, you would need to implement the lookups and whatnot yourself.

          1. 1

            Yeah, I see the value of using Datasette: hey, someone built a whole database viewer website for me, so I don’t have to! I see less value in using SQLite for some arbitrary other baked data site, as opposed to “the data are a bunch of CSVs/JSON files that get loaded into memory as needed”. It would have to be something where, e.g., I want SQLite to be my full-text search engine specifically.

        2. 1

          So, I think I ended up doing something similar with https://bible.junglecoder.com (see https://github.com/yumaikas/rumination/tree/main/json_bible). I didn’t have a name for it at the time, but yeah.

          1. 1

            That’s a similar pattern but not quite the same because it looks like you don’t have any server-side code running against the data.

            1. 1

              I do have a tiny amount of logic in https://github.com/yumaikas/rumination/blob/main/serve.janet, but none of it is transforming the data as it stands.

              1. 1

                My apologies, yeah that’s totally the baked data pattern.

          2. 1

            I like keeping the data in git - most of my sites using this pattern have content stored in a git repo as YAML or as a bunch of Markdown files. I also often pull the data from CSV files stored in git (since a bunch of places publish data as CSV on GitHub these days - eg the data I pull into https://covid-19.datasettes.com )

          3. 2

            So… a server-side rendered web app, only using SQLite instead of Postgres or MySQL.

            The Matrix has been altered recently, it seems

            1. 2

              You might be able to use the same trick against MySQL and PostgreSQL too - the idea here is shipping a packaged copy of your data to a stateless deployment environment, which works for any database that can run off a read-only filesystem.

              Here’s a write-up from someone who got read-only Cloud Run working with a ClickHouse database: https://alexjreid.dev/posts/clickhouse-on-cloud-run/
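              For SQLite specifically, read-only operation is built in - you can open the baked file with a `mode=ro` URI, so a stateless deployment can never accidentally write to it. A small sketch (paths here are hypothetical stand-ins):

              ```python
              # Sketch: open a baked SQLite file read-only via a URI filename.
              # The file path is a hypothetical stand-in for the shipped DB.
              import os
              import sqlite3
              import tempfile

              db_path = os.path.join(tempfile.mkdtemp(), "baked.db")
              setup = sqlite3.connect(db_path)  # stand-in for the build step
              setup.execute("CREATE TABLE t (x INTEGER)")
              setup.close()

              conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
              try:
                  conn.execute("INSERT INTO t VALUES (1)")
                  writable = True
              except sqlite3.OperationalError:  # attempt to write a readonly database
                  writable = False
              ```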

            2. 1

              I used to distribute a huge set of files on CD. Along with the set, I provided an HTML index, generated before writing to the CD. You could click the column headers in the HTML tables to sort: when you did, you were directed to pages that had been pre-sorted in the order you wanted. No code, just lots of autogenerated HTML.

              I seem to remember that autorun also worked back then, so this was ‘slick’ by those days’ standards.
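              That pre-sorted-pages trick can be sketched as a build step that emits one HTML file per sort order, with the column headers linking between the variants (the data and filenames here are hypothetical):

              ```python
              # Sketch: generate one static HTML page per sort order at build
              # time; the headers link to the pre-sorted variants. Toy data.
              import html

              ROWS = [("b.txt", 200), ("a.txt", 100), ("c.txt", 50)]

              def render(rows):
                  cells = "".join(
                      f"<tr><td>{html.escape(name)}</td><td>{size}</td></tr>"
                      for name, size in rows
                  )
                  return (
                      '<table><tr><th><a href="by-name.html">Name</a></th>'
                      '<th><a href="by-size.html">Size</a></th></tr>'
                      f"{cells}</table>"
                  )

              pages = {
                  "by-name.html": render(sorted(ROWS)),
                  "by-size.html": render(sorted(ROWS, key=lambda r: r[1])),
              }
              ```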