1. 5
  1. 2

    It’s pretty presumptuous to refer to a vague unpublished blog post as “this paper”. The terms used are ill-defined and the “conclusions” are just vague speculation.

    1. 1

      unpublished blog post as “this paper”

      There’s also a PDF link at the top if you prefer.

      Or how about LaTeX? (https://github.com/treenotation/research/blob/master/papers/paper3/countingComplexity.tex)

      Source code is there too, along with the change history (split between there and the “jtree” repo).

      The terms used are ill-defined

      Specifically what is ill defined?

      There’s also reams of data and experimental tools available (https://jtree.treenotation.org/designer/) (https://jtree.treenotation.org/sandbox/).

      and the “conclusions” are just vague speculation.

      “The benefit of this system is that it is simple, practical, universal, and scale free”—these properties are being put to use every day. Not sure what is vague about that. Maybe you haven’t played around with it yet? I use TNC all. the. time. It is immensely simple, practical, universal, and scale free. What complexity measurements are you using throughout your day?

      I’m also told that having references pointing to SciHub is inappropriate. Frankly I disagree. I think it’s downright dishonorable to do things according to what is deemed “permitted”, if such things are plainly unjust (like restricting people from sharing science and information under the threat of violence). So I’ll put no effort into getting this “published”, unless it’s in a public domain journal. I did indeed “publish” all 3 papers to arxiv.org (the first is still up, but they went and removed the 2nd and 3rd ones for some reason).

      I checked out some of your work. Looks interesting! But wait, when I go to see the source code I get this:

      “(authentication required in all cases; reviewers will need to use the git.ligo.org or IceCube SVN links).”

      My advice: don’t do what the field is doing, those things deemed “appropriate”, and do the right thing.

    2. 2

      I get a very distinct Kolmogorov complexity vibe from this. Which is to say that it’s a neat concept but ultimately doomed to fail due to its own uncomputability.

      1. 1

        Gulp. Any more thoughts/pointers that you think might be relevant?

        I have the feeling that it will succeed where Kolmogorov and others haven’t caught on, because it is computable (in a practical sense, not in a way that violates Gödel). However, I have not yet proved this, after many attempts at working backwards from “print hello world”, traversing the hills of IRs and the gates of logic to the mythical lands of binaries, etc., trying to come up with a continuous beautiful molecule.

        1. 2

          Sadly, I’m pretty sure you’re out of luck.

          Take any “irreducible” base set for TNC. Then TNC is nothing other than a programming language in which we compare program complexities by some length measure, and Kolmogorov showed that the minimal such measure (the length of the shortest program) must be uncomputable.

          I’m not sure what you mean by “in a practical sense”. It is true that a concrete program has a well-defined complexity; however, what we are really interested in is the shortest description of a concept, for example the shortest program to produce “Hello, world!”. After all, a program that first calculates a million digits of pi and then prints “Hello, world!” would be much longer (and more complex) than one that skips calculating pi. Since we are really interested in printing “Hello, world!”, the shortest program seems like the natural choice.
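          To make that concrete, here’s a minimal Python sketch (the Leibniz series is just a stand-in for any pointless detour): two programs with identical output but very different description lengths, which is exactly the gap the shortest-description measure cares about.

```python
import contextlib
import io

# Two descriptions of the same output. The second takes a pointless
# detour (approximating pi with the Leibniz series) before printing.
short_prog = 'print("Hello, world!")'
long_prog = (
    'pi = 4 * sum((-1) ** k / (2 * k + 1) for k in range(10 ** 4))\n'
    'print("Hello, world!")'
)

def run(src):
    # Execute a program string and capture its stdout.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(src, {})
    return buf.getvalue()

# Same output, different lengths; the complexity of the *concept* is
# the length of the shortest description, not of any particular one.
assert run(short_prog) == run(long_prog) == "Hello, world!\n"
```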

      2. 2

        Under this model, which of these two strings is more complex?

        • 861222687548427914143377067669
        • 0123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899
        1. 1

          Given only this information, the latter has more relative complexity, simply because it is longer. They both have a node and word count of 1, but the latter has a bigger cell size (190 characters versus 30, where the cost per character is presumably the same for both trees here, given a 4 bit domain).
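          A minimal sketch of that tally in Python, assuming the measure here is just node count, word count, and total cell characters (a simplification of the full TNC metrics):

```python
# The two strings from the parent comment; the second is the
# digits 0 through 99 concatenated.
s1 = "861222687548427914143377067669"
s2 = "".join(str(n) for n in range(100))

# Each string parses as a one-node, one-word tree, so the only
# difference in this naive tally is total cell size.
for s in (s1, s2):
    nodes, words, cell_size = 1, 1, len(s)
    print(nodes, words, cell_size)
# prints: 1 1 30
#         1 1 190
```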

          Let’s snip the 2nd example to do another comparison:

          • 861222687548427914143377067669
          • 012345678910111213141516171819

          Now which is more complex? Given only this information, they have the same relative complexity, but the latter probably has more absolute complexity. The first one I can generate using just the definitions of binary, then generating a base 10 character set, and then picking an arbitrary 30 characters from that character set.

          For the second, though, while I could do the same as the first, more likely I’m adding some intermediate trees that create a concept of order, and then using those trees to generate this second sequence. Hence the total number of trees in the second example would be greater.
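          The two generation routes can be sketched in Python (hypothetical encodings, not the actual TNC grammar; `random.choice` stands in for picking arbitrary characters, and counting stands in for the intermediate order-creating trees):

```python
import random

digits = "0123456789"  # a base 10 character set

# Route 1: pick an arbitrary 30 characters -- no concept of order
# needed beyond the character set itself.
random.seed(0)  # seeded only so the sketch is reproducible
arbitrary = "".join(random.choice(digits) for _ in range(30))

# Route 2: first introduce an ordering (successorship), then generate
# the sequence by counting -- extra machinery, hence "more trees".
ordered = "".join(str(n) for n in range(20))

assert len(arbitrary) == len(ordered) == 30
assert ordered == "012345678910111213141516171819"
```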