1. 18
  2. 3

    As long as you don’t have hundreds of millions of jobs, yes. If you have that many jobs, things become interesting. It is very nice to be able to create jobs inside transactions.
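For illustration, transactional enqueue might look like this (a minimal sketch assuming a psycopg2-style connection; the `orders`/`jobs` tables and their columns are hypothetical):

```python
# A minimal sketch of enqueueing a job in the same transaction as the
# business write, assuming a psycopg2-style connection object. The
# `orders` and `jobs` tables and their columns are hypothetical.

def place_order_with_job(conn, total):
    """Insert an order and its follow-up job atomically."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO orders (total) VALUES (%s) RETURNING id", (total,)
        )
        order_id = cur.fetchone()[0]
        cur.execute(
            "INSERT INTO jobs (kind, payload) VALUES (%s, %s)",
            ("send_receipt", str(order_id)),
        )
    # One commit covers both rows: there is never a job without its order,
    # and never an order whose follow-up job was lost.
    conn.commit()
    return order_id
```

With an external broker (SQS, Redis, …) you would instead have to choose between enqueueing before commit (job may reference data that was rolled back) or after commit (crash in between loses the job).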

    Using a varchar() type instead of text in PostgreSQL in 2019, hmmm.

    1. 3

      Using a varchar() type instead of text in PostgreSQL in 2019, hmmm.

      Hah, it’s funny to see this comment, I was just today wondering which one of these I should be choosing for a column that contains a URL fragment, e.g. oregon in /usa/oregon/portland. Being a newbie to Postgres (my SQL background is in MySQL) I have been choosing text essentially because of the name. Would you please expand on why text is the better choice than varchar in 2019?

      1. 3

        why text is the better choice than varchar in 2019

        It isn’t; they’re equivalent. TEXT is considered idiomatic in the Postgres community, and that’s it.

    2. 1

      Didn’t know about “skip locked” yet. Interesting.

      1. 1

        We implemented something similar with “SKIP LOCKED”, based on this: https://www.holistics.io/blog/how-we-built-a-multi-tenant-job-queue-system-with-postgresql-ruby/.

        The difference is that we’re polling Postgres for new jobs instead of using LISTEN/NOTIFY. But reading the comments on HN, it seems that even with LISTEN/NOTIFY you still need to poll on startup to check for any unprocessed jobs (in case a worker crashed after a job was submitted). So I don’t see much value yet in replacing the polling.
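The dequeue step that pattern relies on can be sketched like this (psycopg2-style connection assumed; the `jobs` table with `id`, `payload`, `status` columns is hypothetical). Because of SKIP LOCKED, concurrent polling workers each lock a different row instead of blocking on or double-claiming the same job:

```python
# Hypothetical single-claim dequeue using FOR UPDATE SKIP LOCKED.
# Assumes a `jobs` table with (id, payload, status) columns.
DEQUEUE_SQL = """
UPDATE jobs
   SET status = 'running'
 WHERE id = (
       SELECT id FROM jobs
        WHERE status = 'queued'
        ORDER BY id
          FOR UPDATE SKIP LOCKED
        LIMIT 1
       )
RETURNING id, payload
"""

def claim_next_job(conn):
    """Atomically claim one queued job; returns (id, payload) or None."""
    with conn.cursor() as cur:
        cur.execute(DEQUEUE_SQL)
        row = cur.fetchone()
    conn.commit()
    return row
```

A worker loop would call `claim_next_job` on an interval and sleep when it returns `None`, which also covers the restart case above: any job left in `queued` is picked up on the next poll.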

        Our implementation is in Python, using multiprocessing (the pebble library) for the workers. The main motivation for us to implement it in PostgreSQL (we were using django-q with SQS before that) was to have a per-user rate limit, something I found lacking (or non-trivial) in many other job queues like Celery.
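One way such a per-user limit can live directly in the dequeue query (a hypothetical sketch, not necessarily how the Holistics system or the commenter’s implementation does it) is to skip queued jobs whose owner already has enough jobs in flight:

```python
# Hypothetical per-user concurrency cap baked into the SKIP LOCKED dequeue.
# Assumes a `jobs` table with (id, user_id, payload, status) columns: a job
# is only claimed if its user has fewer than %(max_running)s running jobs.
RATE_LIMITED_DEQUEUE_SQL = """
UPDATE jobs
   SET status = 'running'
 WHERE id = (
       SELECT j.id FROM jobs AS j
        WHERE j.status = 'queued'
          AND (SELECT count(*) FROM jobs AS r
                WHERE r.user_id = j.user_id
                  AND r.status = 'running') < %(max_running)s
        ORDER BY j.id
          FOR UPDATE SKIP LOCKED
        LIMIT 1
       )
RETURNING id, user_id, payload
"""
```

Workers would run it as e.g. `cur.execute(RATE_LIMITED_DEQUEUE_SQL, {"max_running": 2})`: a user at their cap simply stops matching the subquery, while other users’ jobs keep flowing, which is awkward to express in broker-based queues like Celery/SQS.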