1. 47
  1.  

    1. 12

      This is one of the best resources on modern Prolog. It is in fact the book we recommend on the Scryer Prolog homepage (also, some examples are already adapted to Scryer, since Markus Triska, the author, is also a developer of the system). The videos are very useful too, and sometimes go even deeper.

    2. 3

      I have always asked myself, ever since I was introduced to Prolog in the early stages of my university modules on theoretical computer science and abstract datatypes: what would I use Prolog for, and why would I use it for that?

      1. 6

        I know two use cases where prolog is used earnestly, both deprecated these days:

        1. Gerrit Code Review allowed creating new criteria a change must fulfill before it can be submitted. Examples

        2. SPARK, an Ada dialect, used Prolog up to SPARK2005 (paper). From the formal verification annotations and the Ada code it created Prolog facts and rules. With those, certain queries passed if (and only if) the requirements encoded in those annotations were satisfied. They have since moved to third-party SAT solvers, which allowed them to increase the subset of Ada that could be verified (at the cost of completeness, SAT being NP-complete: a true statement might not be verified successfully, but a false statement never passes as true), so Prolog is gone.

        1. 6

          Datalog, which is essentially a restricted Prolog, has made a bit of a resurgence in program verification. There are some new engines like Soufflé designed specifically by/for that community. Not an example of Prolog per se, but related to your point 2.

          1. 3

            Yes, Datalog is heavily used for building state-of-the-art static analyzers, see https://arxiv.org/abs/2012.10086.

        2. 1

          This is a great comment, I never knew about the Gerrit code review criteria.

      2. 5

        A way I’ve been thinking about it: what if my database were more powerful & less boilerplate-heavy?

        A big one for me is: why can’t I extend a table with a view in the same way prolog can share the same name between a fact & a predicate/rule? Querying prolog doesn’t care whether what I’m querying comes from a fact (table) or a predicate (view).

        In practice I think this would enable a lot of apps to move application logic into the database, which I think is a great thing.
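
        A rough sketch of that fact/rule symmetry, with made-up employee/contractor relations (any ISO Prolog):

        ```prolog
        contractor(carol, engineering).

        % employee/2 mixes facts ("rows") and a rule ("view") under one name.
        employee(alice, engineering).
        employee(bob, sales).
        employee(Name, Dept) :- contractor(Name, Dept).

        % Callers can't tell which is which:
        % ?- employee(Who, engineering).
        % Who = alice ;
        % Who = carol.
        ```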

        1. 2

          move application logic into the database, I think this is a great thing

          The industry as a whole disagrees with this vehemently. I’m not sure if you were around for the early days of RDBMS stored procedure hell, but there’s a reason they’re used fairly infrequently.

          1. 2

            What went wrong with them?

            1. 2

              The biggest one, IMO: it’s nearly impossible to add tests of any kind to a stored procedure.

              1. 3

                We actually do stored procedures at work & test them via RSpec, but it sucks. Versioning them also sucks to deal with. And the language is terrible from most perspectives; I think primarily it sucks going to an LSP-less experience.

                I think to me though the root suckiness is trying to put a procedural language side by side a declarative one.

                This wasn’t what I was saying with views.

                I do think databases could be more debuggable & prolog helps here because you can actually debug your queries with breakpoints and everything. Wish I could do that with SQL.

                EDIT: but we continue to use stored procedures (and expand on them) because it’s just so much faster performance-wise than doing it in Rails, and I don’t think any server language could compete with doing analysis right where the data lives.

                1. 3

                  Stored procedures can absolutely be the correct approach for performance critical things (network traversal is sometimes too much), but it also really depends. It’s harder to scale a database horizontally, and every stored procedure eats CPU cycles and RAM on your DB host.

                  I agree: prolog != SQL, and it can be really nice, which may address many of the issues with traditional RDBMS stored procedures.

                  I do think databases could be more debuggable & prolog helps here because you can actually debug your queries with breakpoints and everything. Wish i could do that with sql.

                  Yeah. DBs typically have pretty horrible debugging experiences, sadly.

          2. 2

            I feel that this is a very US-coastal point of view, one that is common at coastal start-ups and FAANG companies but not as common elsewhere. I agree with it for the most part, but I suspect there are lots of boring enterprise companies, hospitals, and universities running SQL Server (mostly on Windows) or Oracle stacks that use the stored-procedure-hell pattern. I would venture that most companies that have a job title called “DBA” use this to some extent. In any case, I think it’s far from the industry as a whole.

            1. 1

              Nah, I started my career out at a telco in the Midwest; this is not an SV-centric opinion, those companies just have shit practices. Stored procedures are fine in moderation and in the right place, but pushing more of your application into the DB is very widely considered an anti-pattern and has been for at least a decade.

              To be clear, I’m not saying using stored procedures at all is bad; the issue is that implementing stuff that’s really data-centric application logic in your database is not great. To be fair to GP, they were talking about addressing some of the things that make approaching things that way suck.

          3. 2

            The industry as a whole disagrees with this vehemently. I’m not sure if you were around for the early days of RDBMS stored procedure hell, but there’s a reason they’re used fairly infrequently.

            @ngp, I think you are interpreting

            … in the same way prolog can share the same name for a fact & a predicate/rule. Querying prolog doesn’t care about if what im querying comes from a fact (table) or a predicate (view).

            somewhat narrowly.

            Sure, we do not want stored procs, but moving query complexity into the database (whether it is an in-process embedded database or an external one) is a good thing.

            Queries should not be implemented manually using some form of ‘fluent’ API written by hand. That is like writing assembler by hand when optimizing compilers exist and work correctly.

            These by-hand query implementations within an app often lack global optimization opportunities (for both query and data storage). If they do include global optimizations for space and time, then they are complex and require maintenance by specialized engineers (which increases overall engineering costs and may make the existing system more brittle than needed).

            Also, we should be using in-process databases when the data is fairly static and does not need to be distributed to other processes (a case well served by embedding prolog).

            Finally, a prolog-based query can also define ‘fitment tests’ declaratively; the query then responds by finding the existing data items that ‘fit’ those tests. That’s a very valuable type of query for applications that need to check for the ‘existence’ of data satisfying a set of often complex criteria.

          4. 1

            Databases can also be more difficult to scale horizontally. It can also be more expensive if you’re paying to license the database software (which is relatively common). I once had the brilliant idea to implement an API as an in-process extension to the DB we were using. It was elegant, but the performance was “meh” under load, and scaling was more difficult since the whole DB had to be distributed.

      3. 4

        I have a slightly different question: does anybody use prolog for personal computing or scripts? I like learning languages which I can spin up to calculate something or do a 20 line script. Raku, J, and Frink are in this category for me, all as different kinds of “supercalculators”. Are there one-off things that are really easy in Prolog?

        1. 2

          I’d say anything that solves “problems” like Sudoku or these logic puzzles I don’t know the name of “Amy lives in the red house, Peter lives next to Grace, Grace is amy’s grandma, the green house is on the left, who killed the mayor?” (OK, I made the last one up).

          When I planned my wedding I briefly thought about writing some Prolog to give me a list of who should sit at which table (i.e. here’s a group of 3, a group of 5, a group of 7 and the table sizes are X,Y,Z), but in the end I did it with a piece of paper and bruteforcing by hand.

          I think it would work well for class schedules, I remember one teacher at my high school had a huge whiteboard with magnets and rumour was he locked himself in for a week before each school year and crafted the schedules alone :P

          The “classical” examples in my Prolog course at uni were mostly genealogy and word stems (this was in computational linguistics), but I’m not sure that would still make sense 20 years later (I had a feeling this particular course was a bit behind the times even in the early 00s).
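
          For what it’s worth, the genealogy staple from those courses still fits in a few lines; ancestor/2 is the classic recursive rule (names made up):

          ```prolog
          parent(tom, bob).
          parent(bob, ann).
          parent(bob, pat).

          ancestor(X, Y) :- parent(X, Y).
          ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

          % ?- ancestor(tom, Who).
          % Who = bob ;
          % Who = ann ;
          % Who = pat.
          ```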

          1. 2

            these logic puzzles I don’t know the name

            This class of problems has a few different names: https://en.wikipedia.org/wiki/Zebra_Puzzle

          2. 1

            Wonder if it would be good for small but intricate scheduling problems, like vacation planning. I’d compare with minizinc and z3.

            1. 1

              I’d be interested to see a comparison like this. I don’t really know z3, but my impression is that you typically call it as a library from a more general-purpose language like Python. So I imagine you have to be aware of how there are two separate languages: z3 values are different than Python native values, and some operations like if/and/or are inappropriate to use on z3 values because they’re not fully overloadable. (Maybe similar to this style of query builder.)

              By contrast, the CLP(Z) solver in Prolog feels very native. You can write some code thinking “this is a function on concrete numbers”, and use all the normal control-flow features like conditionals, or maplist. You’re thinking about numbers, not logic variables. But then it works seamlessly when you ask questions like “for which inputs is the output zero?”.
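
              A minimal sketch of that seamlessness, using library(clpfd) in SWI-Prolog (Scryer calls it library(clpz)); affine/2 is a made-up name:

              ```prolog
              :- use_module(library(clpfd)).

              % Reads like a function on concrete numbers...
              affine(X, Y) :- Y #= 2*X + 1.

              % ...but the same code answers questions in every direction:
              % ?- affine(3, Y).   % Y = 7
              % ?- affine(X, 7).   % X = 3
              % ?- affine(X, 1).   % X = 0 ("for which input is the output 1?")
              ```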

      4. 4

        It’s really good for parsing thanks to backtracking. When you have configuration and need to check constraints on it, logic programming is the right tool. Much of classical AI is searching state spaces, and Prolog is truly excellent for that. Plus Prolog’s predicates are symmetric as opposed to functions, which are one way, so you can run them backwards to generate examples (though SMT solvers are probably a better choice for that today).
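
        The “run them backwards” point in miniature, using the classic definition of the standard append/3 (most systems ship it built in):

        ```prolog
        append([], Ys, Ys).
        append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).

        % One relation, many modes:
        % ?- append([1], [2,3], Zs).     % Zs = [1,2,3]  (forwards)
        % ?- append(Xs, [2,3], [1,2,3]). % Xs = [1]      (backwards)
        % ?- append(Xs, Ys, [1,2]).      % generates all splits:
        % Xs = [], Ys = [1,2] ;
        % Xs = [1], Ys = [2] ;
        % Xs = [1,2], Ys = [].
        ```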

        1. 3

          Prolog is both awesome and terrible for parsing.

          Awesome: DCGs + backtracking are a killer combo

          Terrible: if it fails to parse, you get a “No”, and nothing more. No indication of the row, col, failed token. Nothing.
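
          The killer combo in its smallest form: a DCG for aⁿbⁿ that both parses and generates (phrase/2 is standard), which also shows the failure-reporting problem:

          ```prolog
          % a^n b^n, as parser and generator.
          ab --> [].
          ab --> [a], ab, [b].

          % Parse:    ?- phrase(ab, [a,a,b,b]).   % true
          % Reject:   ?- phrase(ab, [a,b,b]).     % false, and nothing more
          % Generate: ?- phrase(ab, Ls).
          % Ls = [] ;
          % Ls = [a,b] ;
          % Ls = [a,a,b,b] ; ...
          ```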

          1. 2

            That subjectively resembles parser combinator libraries. I guess if you parse with a general-purpose language, even if the structure of your program resembles the structure of your sentences, you give up on getting anything for free; it’s impossible for a machine to say “why” an arbitrary program failed to give the result you wanted.

          2. 1

            You can insert cuts to prevent backtracking past a certain point and keep a list of the longest successful parse to get some error information, but getting information about why the parse failed is hard.

            1. 4

              And then your cuts are in the way of using the parser as a generator, thus killing the DCG’s second use.

      5. 3

        I have used it to prototype solutions when writing code for things that don’t do a lot of I/O. I have a bunch of things and I want a bunch of other things but I’m unsure of how to go from one to the other.

        In those situations it’s sometimes surprisingly easy to write the intermediary transformations in Prolog and once that works figure out “how it did it” so it can be implemented in another language.

        Porting the solution to another language often takes several times longer than the initial Prolog implementation did, so it is really powerful.

      6. 2

        You could use it to define permissions. Imagine you have a web app with all kinds of rules like:

        • students can see their own grades in all courses
        • instructors and TAs can see all students’ grades in that course
        • people can’t grade each other in the same semester (or form grading cycles)

        You can write down each rule once as a Prolog rule, and then query it in different ways:

        • What grades can Alice see?
        • Who can see Bob’s grade for course 123, Fall 2020?

        Like a database, it will use a different execution strategy depending on the query. And also like a database, you can separately create indexes or provide hints, without changing the business logic.
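
        A sketch of the first two rules (grade/4, enrolled/4, and can_see/4 are a made-up schema):

        ```prolog
        % grade(Student, Course, Term, Grade).
        grade(alice, cs101, fall2020, a).
        grade(bob,   cs101, fall2020, b).

        % enrolled(Person, Role, Course, Term).
        enrolled(tina, ta,         cs101, fall2020).
        enrolled(ivan, instructor, cs101, fall2020).

        % Students can see their own grades.
        can_see(Student, Student, Course, Term) :-
            grade(Student, Course, Term, _).
        % Instructors and TAs can see all grades in their course.
        can_see(Viewer, Student, Course, Term) :-
            enrolled(Viewer, Role, Course, Term),
            ( Role = instructor ; Role = ta ),
            grade(Student, Course, Term, _).

        % Same rules, both query directions:
        % ?- can_see(alice, S, C, T).           % what can Alice see?
        % ?- can_see(V, bob, cs101, fall2020).  % who can see Bob's grade?
        ```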


        For a real-world example, the Yarn package manager uses Tau Prolog, I think to let package authors define which version combinations are allowed.

      7. 2

        When you have an appreciable level of strength with Prolog, you will find it to be a nice language for modeling problems and thinking about potential solutions. Because it lets you express ideas in a very high level, “I don’t really care how you make this happen but just do it” way, you can spend more of your time thinking about the nature of the model.

        There are probably other systems that are even better at this (Alloy, for instance), but Prolog has the benefit of being extremely simple. Most of the difficulty with Prolog is in coming to terms with that simplicity.

        1. 3

          That hasn’t been my experience (I have written a non-trivial amount of Prolog, but not for a long time). Everything I’ve written in Prolog beyond toy examples has required me to understand how SLD derivation works and structure my code (often with red cuts) to ensure that SLD derivation reaches my goal.

          This is part of the reason that Z3 is now my go-to tool for the kinds of problems where I used to use Prolog. It will use a bunch of heuristics to find a solution, and it has a tactics interface that lets me guide its exploration if that fails.
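
          For readers who haven’t met the term: a red cut is one that changes which solutions exist, not just how fast you find them. The textbook example:

          ```prolog
          % max/3 with a red cut: the second clause is only correct because the
          % cut commits to the first clause whenever X >= Y succeeds.
          max(X, Y, X) :- X >= Y, !.
          max(_, Y, Y).

          % ?- max(3, 1, M).   % M = 3, with no spurious M = 1 on backtracking.
          % The classic trap: ?- max(3, 1, 1). still succeeds, because that
          % query fails to unify with the first clause's head, skipping the cut.
          ```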

          1. 1

            I don’t want to denigrate you, but in my experience, the appearance of red cuts indicates deeper problems with the model.

            I’m glad you found a tool that works for you in Z3, and I am encouraged by your comment about it to check it out soon. Thank you!

            1. 1

              I don’t want to denigrate you, but in my experience, the appearance of red cuts indicates deeper problems with the model.

              I’m really curious if you can point me to a largish Prolog codebase that doesn’t use red cuts. I always considered them unavoidable (which is why they’re usually introduced so early in teaching Prolog). Anything that needs a breadth-first traversal, which (in my somewhat limited experience) tends to be most things that aren’t simple data models, requires red cuts.

              1. 2

                Unfortunately, I can’t point you to a largish Prolog codebase at all, let alone one that meets certain criteria. However, I would encourage you to follow up on this idea at https://swi-prolog.discourse.group/ since someone there may be able to present a more subtle and informed viewpoint than I can on this subject.

                I will point out that the tutorial under discussion, The Power of Prolog, has almost nothing to say about cuts; searching, I only found any mention of red cuts on this page: https://www.metalevel.at/prolog/fun, where Markus is basically arguing against using them.

        2. 2

          Because it lets you express ideas in a very high level, “I don’t really care how you make this happen but just do it” way, you can spend more of your time thinking about the nature of the model.

          So when does this happen? I’ve tried to learn Prolog a few times, and I guess I always managed to pick problems which Prolog’s solver sucks at solving. And figuring out how to trick Prolog’s backtracking into behaving like a better algorithm is beyond me. I think the last attempt involved some silly logic puzzle that was really easy to solve on paper; my Prolog solution took so long to run that I wrote and ran a brute-force search over the input space in Python in the meantime, and gave up on the Prolog. I can’t find my code or remember what the puzzle was, annoyingly.

          I am skeptical, generally, because in my view the set of search problems that are canonically solved with unguided backtracking is basically just the set of unsolved search problems. But I’d be very happy to see some satisfying examples of Prolog delivering on the “I don’t really care how you make this happen” thing.

          1. 3

            As an example, I believe Java’s class loader verifier is written in prolog (even the specification is written in a prolog-ish way).

            1. 1

              class loader verifier? What a strange thing to even exist… but thanks, I’ll have a look.

              1. 3

                How is that strange? It verifies that the bytecode in a function is safe to run and won’t underflow or overflow the stack or do other illegal things.

                This was very important for the first use case of Java, namely untrusted applets downloaded and run in a browser. It’s still pretty advanced compared to the way JavaScript is loaded today.

                1. 1

                  I mean I can’t know from the description that it’s definitely wrong, but it sure sounds weird. Taking it away would obviously be bad, but that just moves the weirdness: why is it necessary? “Give the attacker a bunch of dangerous primitives and then check to make sure they don’t abuse them” seems like a bad idea to me. Sort of the opposite of “parse, don’t verify”.

                  Presumably JVMs as originally conceived verified the bytecode coming in and then blindly executed it with a VM in C or C++. Do they still work that way? I can see why the verifier would make sense in that world, although I’m still not convinced it’s a good design.

                  1. 2

                    You can download a random class file from the internet, load it dynamically, and have it linked together with your existing code. You somehow have to make sure it is actually type safe, and there are also in-method requirements that have to be followed (those must be type safe too; plus you can’t just do pop pop pop on an empty stack). It is definitely a good design, because if you prove it beforehand, then you don’t have to add runtime checks for these things.

                    And, depending on what you mean by “do they still work that way”: yeah, there is still bytecode verification on class load, though it may be disabled for some parts of the standard library by default in an upcoming release, from what I heard. You can also manually disable it if you want, but that is not recommended. The most often-run code will execute as native machine code, though, so there the JIT compiler is responsible for outputting correct code.

                    As for the prolog part, I was wrong: it is only used in the specification, not for the actual implementation.

                    1. 1

                      You can download a random class file from the internet and load it dynamically and have it linked together with your existing code. You somehow have to make sure it is actually type safe, and there are also in-method requirements that have to be followed (that also be type safe, plus you can’t just do pop pop pop on an empty stack). It is definitely a good design because if you prove it beforehand, then you don’t have to add runtime checks for these things.

                      I think the design problem lies in the requirements you’re taking for granted. I’m not suggesting that just yeeting some untrusted IR into memory and executing it blindly would be a good idea. Rather I think that if that’s a thing you could do, you probably weren’t going to build a secure system. For example, why are we linking code from different trust domains?

                      Checking untrusted bytecode to see if it has anything nasty in it has the same vibe as checking form inputs to see if they have SQL injection attacks in them. This vibe, to be precise.

                      …Reading this reply back I feel like I’ve made it sound like a bigger deal than it is. I wouldn’t assume a thing was inherently terrible just because it had a bytecode verifier. I just think it’s a small sign that something may be wrong.

                      1. 1

                        Honestly, I can’t really think of a different way, especially regarding type checking across boundaries. You have a square-shaped hole and you want to be able to plug squares into it, but you may have gotten them from anywhere. There is no way around checking whether a random thing fits the square; parsing doesn’t apply here.

                        Also, plain Java bytecode can’t do any harm besides crashing itself, so it is not really the case you point at: a memory-safe JVM interpreter will be memory-safe. The security issue comes from all the capabilities that JVM code can access. If anything, this type checking across boundaries is important for allowing interoperability of code, and it is a thoroughly under-appreciated part of the JVM, I would say: there are not many platforms that allow linking binaries together type-safely and backwards-compatibly (you can extend one and it will still work fine).

                  2. 1

                    Well, how is this different from downloading and running JS? In both cases it’s untrusted code and you put measures in place to keep it from doing unsafe things. The JS parser checks for syntax errors; the JVM verifier checks for bytecode errors.

                    JVMs never “blindly executed” downloaded code. That’s what SecurityManagers are for. The verifier is to ensure the bytecode doesn’t break the interpreter; the security manager prevents the code from calling unsafe APIs. (Dang, I think SecurityManager might be the wrong name. It’s been soooo long since I worked on Apple’s JVM.)

                    I know there have been plenty of exploits from SecurityManager bugs; I don’t remember any being caused by the bytecode verifier, which is a pretty simple/straightforward theorem prover.

          2. 2

            In my experience, it happens when I have built up enough infrastructure around the model that I can express myself declaratively rather than procedurally. Jumping to solving the problem tends to lead to frustration; it’s better to think about different ways of representing the problem and what sorts of queries are enabled or frustrated by those approaches for a while.

            Let me stress that I think of it as a tool for thinking about a problem rather than for solving a problem. Once you have a concrete idea of how to solve a problem in mind—and if you are trying to trick it into being more efficient, you are already there—it is usually more convenient to express that in another language. It’s not a tool I use daily. I don’t have brand new problems every day, unfortunately.

            Some logic puzzles lend themselves to pure Prolog, but many benefit from CLP or CHR. With logic puzzles specifically, it’s good to look at some example solutions to get the spirit of how to solve them with Prolog. Knowing what to model and what to omit is a bit of an art there. I don’t usually find the best solutions to these things on my own. Also, it takes some time to find the right balance of declarative and procedural thinking when using Prolog.

            Separately, being frustrated at Prolog for being weird and gassy was part of the learning experience for me. I suppose there may have been a time and place when learning it was easier than the alternatives. But it is definitely easier to learn Python or any number of modern procedural languages, and the benefit seems greater due to wider applicability. I am glad I know Prolog and I am happy to see people learning it. It’s not the best tool for any job today, really, but it is an interesting and poorly-understood tool nonetheless.

      8. 1

        I have an unexplored idea somewhere of using it to drive the logic engine behind an always-on, “terraform-like” controller.

        Instead of defining only the state you want, it would allow you to define “what actions to do to get there”, rules about what is not allowed as intermediary or final states, and even ordering.

        All things that terraform makes hard right now.

      9. 1

        Datalog is used for querying some databases (Datomic, Logica, XTDB). I think the main advantages claimed over SQL are that it’s simple to learn and write, composable, and that joins are more efficient, though I’m skeptical about that last claim.

        https://docs.datomic.com/pro/query/query.html#why-datalog has some justifications of their choice of datalog.

        Datomic and XTDB see some real-world use as application databases for Clojure apps. Idk if anyone uses Logica.

    3. 2

      I’m still thinking of how to combine prolog and ui. I think it’d be really damn cool.

      “What UI components are rendered when the user state is XYZ?”

      1. 3

        The last time I used Prolog for GUI things (20ish years ago), it was quite a clean integration. It was basically a reactive programming model, long before that became fashionable. Events were modelled as predicates being asserted, which triggered a rederivation of everything that relied on that predicate.

        1. 1

          Do you have any examples? Because seriously, that sounds like we missed a major damn lineage of greatness.

      2. 1

        I’ve also been on this train.

        Combine this with some of the recent work on local-first (putting a persistent DB on the frontend), and it feels like significant parts of apps could be prolog in the not-too-distant future.

      3. 1

        Presumably @Bunny351 has thought about this given the UI support in FLENG?