1. 19
  1.  

  2. 6

    See also the linked Who Killed Prolog? article.

    1. 18

      I think they mean

      ?- killed(prolog, X).

      1. 6

        No

    2. 5

      The most confusing thing about Prolog is that whatever algorithm you implement must be built on top of the built-in ones, namely depth-first search and unification (and using only recursion rather than iteration).

      But really you are not supposed to implement algorithms in Prolog. You are supposed to state truths and ask questions about them.
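      To make "built on top of unification" concrete, here is a toy sketch (in Python rather than Prolog, with made-up conventions: variables are capitalized strings, compound terms are tuples). It illustrates the unification step Prolog performs on every clause match; it is not a real Prolog engine, and it omits the occurs check:

```python
def is_var(t):
    # Convention for this sketch: a variable is a string starting uppercase.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until we hit a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None on failure.
    No occurs check, so it is unsound on cyclic terms (like real Prolog's default)."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # functor/arity clash: unification fails
```

      Running `unify(('killed', 'prolog', 'X'), ('killed', 'prolog', 'fgcs'))` binds `X` to `'fgcs'`, which is essentially what answering `?- killed(prolog, X).` against a matching fact amounts to.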

      1. 6

        That’s a tidy philosophy, but (in my very, very limited experience) if you want your nontrivial Prolog program to terminate before your body does, you need to do a little more algorithm-shaping, generally in terms of telling the interpreter when to stop searching a particular subtree.
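        The "stop searching a particular subtree" knob being described is Prolog's cut (`!`) and friends. As a rough illustration (a Python sketch with hypothetical names, not Prolog itself), a hand-rolled depth-first search with a prune test shows why this matters for termination:

```python
def dfs(node, children, is_goal, prune=lambda n: False):
    """Depth-first search for a goal node. `prune` plays the role that cut/fail
    hints play in Prolog: refusing to descend into a hopeless subtree."""
    if prune(node):
        return None
    if is_goal(node):
        return node
    for child in children(node):
        found = dfs(child, children, is_goal, prune)
        if found is not None:
            return found  # commit to the first solution found
    return None

# Find a sequence of three digits (each 1-3) summing to exactly 6,
# pruning any prefix whose sum already exceeds 6.
children = lambda n: [n + (d,) for d in (1, 2, 3)] if len(n) < 3 else []
is_goal  = lambda n: len(n) == 3 and sum(n) == 6
prune    = lambda n: sum(n) > 6
dfs((), children, is_goal, prune)  # returns (1, 2, 3)
```

        Without the prune test the search still terminates here, but on a deeper or infinite tree the pure "state truths, ask questions" version would wander forever down doomed branches.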

        1. 3

          “A tidy philosophy” is a very nice way of saying that. You are right, but for what fits in the model, given enough smarts about constraints, you can do some impressive stuff that is not very algorithmic. At least that’s what I saw.

      2. 4

        The linked article on the demise of the Fifth Generation Project rang a bell. As a youngster I read with avid interest the book “The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World”. I don’t remember anything of the book except a newspaper clipping I had subsequently placed in it. Its title was “Finis for the Fifth”, and I don’t remember any of its details either.

        But the whole thing seemed to materialize this notion that Japan had failed. There was a point when Japan was what China is today - the next rising power. It was going to take everything over, and this was just one of the things it was going to take over. Then something happened. I don’t know what. Possibly Japanese kids decided not to work as hard as their parents. But Japan was no longer this lean mean fighting machine that stereotypes in stories like “Rising Sun” portrayed.

        Most folks here no doubt think about this story in the context of the AI winter. I think of it in terms of planned economies and how they always get their ass handed to them by the rapid dynamics of a contemporary economy. Some others possibly think of this in the greater context of Japan’s stagnation, both economic and societal.

        1. 8

          This is getting a bit off-topic, but: what happened to Japan is complicated, but kids not working as hard as their parents was not a contributor. Rather, it’s a combination of many things, including the collapse of financial liquidity due to a mixture of fear of insolvency and actual insolvency; the shrinking size of Japan’s workforce; a fear of failure, which kept failing companies propped up; and a conviction of success, which meant that many Japanese companies didn’t stay on their game enough to compete. The X68000 and the failure of Japan’s other indigenous computer tech were all part of that. (And that said, it’s worth keeping in mind that the culture that came out after that collapse produced things like Ruby, the Switch, etc.)

          1. 4

            What I saw in Japan was, aside from a lot of very smart, creative, hard-working people, a rigid business culture where smaller companies were required to act as captive contractors to the dominant companies - waiting for orders that were very elaborately specified. Also, people who didn’t conform, or who were too young or too poorly connected, seemed to have an exceptionally difficult time getting traction - even more so than in the USA. That was at least in the corporate corner I got to see.

            1. 3

              I found this write-up about doing business in Japan really interesting. Also, I’m pretty sure I would never want to work there if that would be my experience. Certain kinds of people would probably really like it, though, if set up with the right job and company.

              1. 5

                I had a friend who became fluent in Japanese and worked for a pretty traditional company there for a couple of years. I asked her how she avoided spending a lot of time serving tea at meetings, and she grinned and explained how much ceramic had “accidentally” broken the only time they asked her to do this job.

                1. 2

                  She sounds awesome haha.

          2. 7

            The US rise in computer technology was based in large part on government investment (i.e., a planned economy): DARPA, NSF, DOE, ONR, MCC, Bell Labs (essentially a government agency), … There was a lot of mythologizing about MITI, but the DOD alone has an enormous guiding effect on the US economy - and was the dominant market for semiconductors for a long time.

            The problem in Japan was, I think, related to its small size and to lack of any anti-trust regulation to open up some competition. On the other hand, while Japan has not conquered the world, it is a very prosperous country that still produces a lot of innovation.

            1. 2

              Especially the Strategic Computing Initiative. That funded all kinds of tech for a long time.

            2. 1

              Japan is doing very well. It didn’t live up to the hype, but then the hype was unreasonable and no one could have lived up to it.

            3. 2

              Datalog has a lot of really nice properties:

              • not much incidental ordering = easy to compile efficiently and parallelize
              • normalized data = no messing around with data-structures
              • static data-flow = easier to read / debug

              But it really struggles with abstracting many common programming patterns.

              Prolog/mercury/kanren et al try to be more expressive by adding data-structures and unification, but this breaks all of the nice properties above. The first tutorial for most of these ‘relational’ languages is writing a linear search over a linked list - not a relation in sight!

              I think there would be a lot of value to a relational language that manages to provide a reasonable level of abstraction while remaining disorderly and normalized.
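              As a minimal illustration of the "disorderly" property being praised here (a Python sketch, not any particular Datalog engine): naive bottom-up evaluation of transitive closure reaches the same fixed point regardless of the order in which facts or rules are considered, because everything is just sets of normalized tuples:

```python
# Naive bottom-up Datalog evaluation of:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- edge(X, Y), path(Y, Z).
# Facts are sets of tuples; we iterate to a fixed point. No data structures
# to manage, and iteration order never affects the answer.
def transitive_closure(edges):
    path = set(edges)
    while True:
        new = {(x, z) for (x, y) in edges for (y2, z) in path if y == y2}
        if new <= path:
            return path  # fixed point reached
        path |= new

edges = {(1, 2), (2, 3), (3, 4)}
transitive_closure(edges)  # adds the derived facts (1, 3), (2, 4), (1, 4)
```

              A real engine would use semi-naive evaluation and indexes rather than this quadratic join, but the order-insensitivity and flat, normalized data are exactly the properties listed above.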

              1. 2

                Killed in a combinatorial explosion.

                1. 2

                  Assisted proof search similar to Prolog powers Coq. It’s a bit of a stretch to claim that logic programming has failed.

                  Myself, I think that imperative programming and object-oriented programming have failed. You can spend decades on these things and still be incapable of writing the simplest programs that are correct or don’t deadlock.

                  1. 1

                    ATS is imperative and you can write extremely correct programs.

                    The issue is the applicability of some type systems. Subtyping in particular does not offer enough “constraints” to apply to a program to make it super correct. You must combine a few type theories together and then you can have extremely correct software.

                    Another issue with subtyping is that there are cases where composition is not trivial, or straight up can’t be done.

                    Composition in particular is extremely useful because you can build infinitely complex types. So if your type theory supports it 100%, then the sky is the limit. There’s probably a lambda-calculus proof out there that supports this.