1. 39
    1. 7

      What I’d like to see most is support for efficient pagination. If you have a few billion rows, it should be as fast to scroll to the last page as the second page. Maybe a new table type is needed to enable this. FoxPro and Clipper were able to do this decades ago.

      1. 3

        How did pagination work for FoxPro and Clipper? I can’t imagine how a database table could support O(1) insertions and deletions + O(1) access to the nth element + concurrent transactional reads and writes. So they must have been compromising on one or more of these properties. Depending on what exactly they compromise, I’m wondering whether it would be possible to implement the same kind of access pattern in Postgres as a set of tables, indexes and triggers.

        1. 5

          The big challenge would be cache management, since you’d basically be materializing the results of every query that needed to support fast pagination, and you’d need to know when it was safe to discard those cached results. “When is the user done looking at this?” is always a tricky question in a stateless web app.

          If you were willing to pin user sessions to database connections, you could probably get pretty far with temporary tables or with keeping cursors open across user interactions. That’s how database-backed desktop apps worked back in the day when I was writing them.

          My hunch is that to the extent fast pagination is harder in PostgreSQL than it used to be in old-school database systems, it’s more because the client side is radically different than because the database is less capable.
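
          A minimal sketch of the held-cursor approach, assuming one dedicated connection per user session (table, column, and cursor names are illustrative):

          ```sql
          -- Open a cursor that survives transaction commit (WITH HOLD), so the
          -- same session can keep fetching pages across user interactions.
          BEGIN;
          DECLARE page_cur CURSOR WITH HOLD FOR
              SELECT id, name FROM customers ORDER BY name, id;
          COMMIT;  -- the cursor stays open; its result set is materialized server-side

          FETCH 50 FROM page_cur;  -- page 1
          FETCH 50 FROM page_cur;  -- page 2, resumes where the last fetch stopped

          CLOSE page_cur;  -- free the held result when the user is done
          ```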

          1. 3

            You’re not wrong, but unfortunately Postgres caps the number of simultaneous connections: max_connections defaults to 100 and is usually tuned to the hundreds or low thousands.

            The solution would be to materialize the results either to another store or to unlogged tables that are periodically cleaned up.
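
            As a hedged sketch of the unlogged-table variant (all names illustrative): materialize the result once, then every page, including the last, is a cheap index range scan.

            ```sql
            -- Unlogged tables skip WAL, so writes are fast; contents are lost
            -- on crash, which is acceptable for a result cache.
            CREATE UNLOGGED TABLE result_cache_abc123 AS
            SELECT row_number() OVER (ORDER BY t.created_at, t.id) AS rn, t.*
            FROM big_table t
            WHERE t.status = 'open';

            CREATE INDEX ON result_cache_abc123 (rn);

            -- Jumping to any page is now an index range scan on rn.
            SELECT * FROM result_cache_abc123 WHERE rn BETWEEN 999951 AND 1000000;

            -- A periodic job drops caches whose sessions have expired.
            DROP TABLE result_cache_abc123;
            ```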

            1. 1

              That is good news. For the type of CRUD applications I’m thinking of, thousands of users is more than enough. If more users are needed, fast scrolling of tables with billions of records will need to be rethought, or maybe eliminated from the design.

        2. 2

          Doesn’t Postgres have O(log N) insertion/deletion/access because its indexes are B-trees underneath? You could augment the B-tree with order statistics (à la an order-statistic tree), but that would come at a cost for a feature many users wouldn’t benefit from. I don’t know enough about Postgres internals to know whether there would be a way to enable it as an optional table feature at low cost.
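
          For a rough idea of what an opt-in version could look like from the outside, here is a hypothetical trigger-based sketch (all names invented) that keeps per-bucket row counts, trading exact order statistics for “skip whole buckets, then OFFSET within one”:

          ```sql
          CREATE TABLE items (id bigserial PRIMARY KEY, payload text);

          -- Row counts per fixed-size id bucket, kept current by trigger.
          CREATE TABLE item_buckets (
              bucket bigint PRIMARY KEY,  -- id / 10000
              cnt    bigint NOT NULL
          );

          CREATE FUNCTION item_buckets_maint() RETURNS trigger AS $$
          BEGIN
              IF TG_OP = 'INSERT' THEN
                  INSERT INTO item_buckets AS b VALUES (NEW.id / 10000, 1)
                  ON CONFLICT (bucket) DO UPDATE SET cnt = b.cnt + 1;
              ELSE  -- DELETE
                  UPDATE item_buckets SET cnt = cnt - 1
                  WHERE bucket = OLD.id / 10000;
              END IF;
              RETURN NULL;
          END;
          $$ LANGUAGE plpgsql;

          CREATE TRIGGER item_buckets_trg AFTER INSERT OR DELETE ON items
              FOR EACH ROW EXECUTE FUNCTION item_buckets_maint();

          -- "Row number N": scan the running sum over item_buckets to find the
          -- right bucket, then apply a small OFFSET within that bucket only.
          ```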

        3. 1

          Clipper and FoxPro are from the days before networking; there was no concurrency. Also, FoxPro wasn’t O(1), but it was very, very fast at what it did: a little language that made noSQL databases and tables super easy to work with for building CRUD applications.

          1. 3

            I used Clipper and then FoxPro as part of two technical support call center jobs, for case management. Maybe the application didn’t handle networking directly, but it was definitely used in a networked environment; in our case that was provided by Novell NetWare under MS-DOS. It was fast: when customers called in, we would always ask for a “customer number”, which probably gave us an O(1) lookup back then. When customers didn’t know it, we had ways to look them up by first and last name, which was also pretty quick. Now, we never had a billion customers, but I’m pretty sure we had thousands, and the same application could look up invoices as well. It’s funny to think about how fast this solution was back then compared to how slow Salesforce is now with their own case management solutions.

            1. 1

              Yeah. With the database backend secured, I’d like to make a Lisp package providing all the FoxPro convenience functions.

      2. 1

        Using cursors is an option that works pretty well. It’s also possible, with the right indices, and if you don’t need arbitrary page jumps, to query without OFFSET: keep the LIMIT, but add WHERE (order_col, id) > (prev_order_col, prev_id) using the last row of the previous page, as sketched below.
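
        A minimal sketch of that pattern (names illustrative; the :params stand for values bound by the client):

        ```sql
        -- First page: plain ORDER BY + LIMIT.
        SELECT id, created_at FROM events
        ORDER BY created_at, id
        LIMIT 50;

        -- Next page: seek past the last row seen instead of using OFFSET,
        -- so the index is entered directly at the right position.
        SELECT id, created_at FROM events
        WHERE (created_at, id) > (:last_created_at, :last_id)
        ORDER BY created_at, id
        LIMIT 50;

        -- A matching composite index makes the seek cheap.
        CREATE INDEX ON events (created_at, id);
        ```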

        1. 1

          Thank you. Is that O(1)?

          1. 1

            More like O(your WHERE clause); it’s difficult to separate that out. But yes, this technique (“keyset pagination”) has roughly constant cost per page, where offset pagination’s cost grows in proportion to the page distance.

            1. 1

              I’ll give it a whirl, thanks

    2. 3

      The number one feature I’d like is easier major version upgrades.

      1. 1

        What’s so bad about pg_upgrade? Or does MySQL/MariaDB do the same better somehow?

        1. 3

          > What’s so bad about pg_upgrade?

          The whole dance of needing the old and new binaries present at the same time, and having to do the switcheroo with the data directory (it won’t upgrade in place). And the fact that it refuses to start when there’s a version mismatch, forcing downtime.

          > Or does MySQL/MariaDB do the same better somehow?

          I am no fan of MySQL and run PostgreSQL personally, but yes, I think it does do this better: you install the updated version, in most cases the daemon will start and run against the old db, and then you run mysql_upgrade.

          Comparing the instructions at https://wiki.archlinux.org/title/MariaDB#Upgrade_databases_on_major_releases and https://wiki.archlinux.org/title/PostgreSQL#pg_upgrade gives a pretty good summary of the differences.

    3. 3

      For a non-CLI solution similar to MySQL’s --i-am-a-dummy mode, there is pg-safeupdate: https://github.com/eradman/pg-safeupdate
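
      A quick sketch of the effect, assuming the extension is built and installed on the server (table name illustrative):

      ```sql
      -- Load the library for the current session (it can also be preloaded
      -- server-wide; see the project README).
      LOAD 'safeupdate';

      -- UPDATE or DELETE without a WHERE clause now raises an error
      -- instead of touching every row.
      UPDATE accounts SET balance = 0;                -- rejected
      UPDATE accounts SET balance = 0 WHERE id = 42;  -- allowed
      ```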