1. 8
  1.  

  2. 2

    I think a faster way to do this is to use an LFSR generator to create a sequence, then select ‘rowid in (…)’. This will be constant time even for deep pages.

    1. 1

      Interesting. If I understand correctly, you’re saying I should generate a list of ids in memory on each request, then query the db for rows with those ids?

      1. 1

        Yeah. You can find lots of examples by searching for LFSR generator. It has a very small internal state that you can add to the request. Also used for the Doom fizzlefade effect. http://fabiensanglard.net/fizzlefade/index.php

        You can pick parameters based on your table size. One issue you might have is that if you have missing rows, you’ll sometimes only get 18 or 19 rows instead of 20, but that’s probably not a problem for infinite scroll. Or you can select a few extra, etc.
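
        A minimal sketch of the idea, assuming SQLite, a made-up items table whose rowids sit roughly in 1..65535, and the classic 16-bit taps from the fizzlefade article:

        import sqlite3

        # 16-bit Galois LFSR with taps 16,14,13,11 (mask 0xB400), the same one the
        # fizzlefade article uses: starting from any nonzero seed it visits every
        # value 1..65535 exactly once, in a fixed pseudo-random order.
        def next_page_ids(state, page_size=20, max_rowid=65535):
            ids = []
            while len(ids) < page_size:
                lsb = state & 1
                state >>= 1
                if lsb:
                    state ^= 0xB400
                if state <= max_rowid:      # skip values past the end of the table
                    ids.append(state)
            return ids, state               # `state` is the tiny cursor the client echoes back

        conn = sqlite3.connect("example.db")        # hypothetical database and table
        ids, next_state = next_page_ids(state=1)    # seed must be a nonzero 16-bit value
        placeholders = ",".join("?" * len(ids))
        rows = conn.execute(
            f"SELECT * FROM items WHERE rowid IN ({placeholders})", ids
        ).fetchall()
        # later requests pass `next_state` back in, so the cost doesn't grow with page depth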

        1. 1

          This is a very cool idea. The main downside I see is that it doesn’t play well with other WHERE criteria; if we’ve pre-selected the ids for page 3, but few or none of those records match a WHERE condition we want, we’re out of luck.

          But I’m definitely going to keep it in mind for future reference. It could be done even without an LFSR, by pulling SELECT id FROM table, chopping it into pages in app memory, and caching the result.
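
          For completeness, a rough sketch of that non-LFSR variant (the items table and id column are made up for illustration):

          import sqlite3

          PAGE_SIZE = 20
          conn = sqlite3.connect("example.db")

          # Pull every id once, chop the list into pages in application memory,
          # and cache `pages`; it has to be rebuilt when rows are added or removed,
          # or when the WHERE criteria baked into the id query change.
          all_ids = [row[0] for row in conn.execute("SELECT id FROM items ORDER BY id")]
          pages = [all_ids[i:i + PAGE_SIZE] for i in range(0, len(all_ids), PAGE_SIZE)]

          def fetch_page(page_number):
              ids = pages[page_number]
              placeholders = ",".join("?" * len(ids))
              return conn.execute(
                  f"SELECT * FROM items WHERE id IN ({placeholders})", ids
              ).fetchall()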

    2. 1

      I wonder whether the extra index would help noticeably if they were using edge pagination instead of limit+offset pagination?

      1. 1

        OP here. I’m not familiar with “edge pagination”, and my Googling fails me. Care to explain?

        (Is it the same as “keyset pagination”?)

        1. 2

          I think so. I’m not certain because different authors seem to like different terminology.

          I’ve got a table with an index on it, and I’m presenting the contents of that table to the end user in the same order as that index. With every request, the user sends me the indexed value from the last element of the previous page I sent them, and in my next query I select up to (page size) records where the index value is > the “edge” they sent me.

          e.g.

          CREATE TABLE dogs (name TEXT PRIMARY KEY);
          INSERT INTO dogs (name) VALUES ('Annabelle'), ('Bernard'), ('Clarence'), ('Daisy'), ('Edgar'), ('Francine'), ('Gerald');
          SELECT name FROM dogs ORDER BY name LIMIT 2; -- first page, returns ['Annabelle', 'Bernard']
          SELECT name FROM dogs WHERE name > 'Bernard' ORDER BY name LIMIT 2; -- second page, returns ['Clarence', 'Daisy']
          SELECT name FROM dogs WHERE name > 'Daisy' ORDER BY name LIMIT 2; -- third page, ['Edgar', 'Francine']
          SELECT name FROM dogs WHERE name > 'Francine' ORDER BY name LIMIT 2; -- last page, ['Gerald'], can tell this is the last page because I got fewer rows than the LIMIT
          

          The idea is that the DB just has to find the starting point I pass it, which is one walk down the b-tree from the root; then it can walk along the b-tree siblings until it hits the LIMIT.
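
          In application terms it’s just the last value being echoed back and forth; a rough sketch (function and file names are made up):

          import sqlite3

          PAGE_SIZE = 2
          conn = sqlite3.connect("example.db")    # assumes the dogs table above

          def page_after(edge=None):
              """Return (rows, next_edge); the client sends next_edge back to get the next page."""
              if edge is None:                    # first page: no edge yet
                  rows = conn.execute(
                      "SELECT name FROM dogs ORDER BY name LIMIT ?", (PAGE_SIZE,)
                  ).fetchall()
              else:
                  rows = conn.execute(
                      "SELECT name FROM dogs WHERE name > ? ORDER BY name LIMIT ?",
                      (edge, PAGE_SIZE),
                  ).fetchall()
              # a short page means we've run out of rows
              next_edge = rows[-1][0] if len(rows) == PAGE_SIZE else None
              return rows, next_edge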

      2. 1

        If you order by an expression like this, the database has to sequentially scan the whole table for OFFSET to work. Maybe add a functional index covering the expression; that will remove the need for the sequential scan.
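
        A hedged sketch of that in SQLite, with lower(name) standing in for whatever expression the post actually orders by (the items table is made up too):

        import sqlite3

        conn = sqlite3.connect("example.db")
        # Expression (functional) index on the same expression the ORDER BY uses.
        conn.execute("CREATE INDEX IF NOT EXISTS idx_items_lower_name ON items (lower(name))")

        # With the index in place, the ORDER BY can be answered by walking the index
        # in order rather than scanning and sorting the whole table, so OFFSET steps
        # past index entries instead of forcing a full sequential scan.
        rows = conn.execute(
            "SELECT * FROM items ORDER BY lower(name) LIMIT 20 OFFSET 200"
        ).fetchall()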