1. 14

  2. 12

    I have no comments on the content, but I’m rolling my eyes at how the awful Twitter symptom of chopping up one’s text into little pieces is being brought over to Mastodon where it’s much less necessary.

    IIRC Mastodon still has a character limit, but it’s significantly higher (500 chars?). I guess the devs thought to themselves “Huh, for some reason Twitter has a really low character limit; they must have had a good reason for that, so let’s have one too, only let’s not make it quite as terrible.” Which I would call cargo-cult architecture design.

    (Aren’t you glad lobste.rs didn’t force me to post this as two comments?)

    1. 6

      It’s a per-instance setting. Default in Mastodon is 500 characters (and the lead developer refuses to make it a parameter), the instance I’m on does 5000 characters, other fediverse software has other limits.
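
      The per-instance limit amounts to a single configurable check. A minimal Python sketch (the function name and structure are invented; only the 500 and 5000 figures come from this thread):

```python
# Hypothetical sketch of a per-instance post-length check, loosely
# modeled on the idea of server-side length validation in Mastodon.
# Only the 500/5000 figures come from the thread; the rest is invented.
DEFAULT_LIMIT = 500  # the hard-coded default mentioned above

def within_limit(text: str, instance_limit: int = DEFAULT_LIMIT) -> bool:
    """Return True if the post fits this instance's character limit."""
    return len(text) <= instance_limit

print(within_limit("hello"))           # True under the 500-char default
print(within_limit("x" * 501))         # False on a stock instance
print(within_limit("x" * 501, 5000))   # True on a 5000-char instance
```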

      1. 3

        Counterpoint: what’s wrong with having a permalink attached to a logical subset of a longer document, i.e. a couple of paragraphs? Way back there was a brief fad for “purple links”, small hypertext anchors pointing to each specific paragraph in a blog post. This made it easier for commentators etc. to specifically target a paragraph if they wished to engage with the argument or praise a turn of phrase.
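
        The mechanics are simple enough: each paragraph gets a stable id plus a tiny self-link. A hypothetical Python sketch (the id scheme and markup are invented for illustration):

```python
# Hypothetical "purple link" generator: wrap each paragraph in <p> with a
# stable id and a small anchor pointing back at itself, so readers can
# deep-link to that paragraph. The id scheme ("p-1", "p-2", ...) and the
# pilcrow marker are invented for illustration.

def add_purple_links(paragraphs):
    out = []
    for i, text in enumerate(paragraphs, start=1):
        pid = f"p-{i}"
        out.append(
            f'<p id="{pid}">{text} '
            f'<a class="purple" href="#{pid}">&para;</a></p>'
        )
    return "\n".join(out)

print(add_purple_links(["First paragraph.", "Second paragraph."]))
```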

        Same thing with Twitter/fedi threads. You can comment on a specific part, or a specific part can get called out in a “viral” manner.

        1. 1

          Counterpoint: what’s wrong with having a permalink attached to a logical subset of a longer document, i.e. a couple of paragraphs?

          The webpage it’s on usually has massive gaps between each paragraph, and messed up scrolling (scrolls per ‘logical subset’) that jumps me to the bottom or back to the top. A normal blog-type post doesn’t have this problem.

          Abstractly, this problem isn’t fundamental. Practically, who cares?

      2. 8

        I was confused reading this until I realized the author is using an unusual and non-standard meaning of “rationalist”.

        A rationalist, in ordinary use, is someone who uses evidence and reasoning to make decisions that don’t have clearly defined right or wrong answers. Ask a rationalist to design a perfectly secure system and they’ll resign. Ask them to improve security and their decisions (while often unintuitive) will generally result in designs that are more resistant to malicious activity.

        The author is using “rationalist” to describe someone who applies an inflexible mechanical process, such as trying to design a secure system by enumerating every possible “source of insecurity” and then applying compensating controls. You’ll see a lot of this in auditor-driven security reports such as SOC 2.

        Not sure if the author is reading these comments (I don’t want to butt in on their thread directly), but if they are, it might be worth trying to rephrase their posts for clarity.

        1. 15

          I believe the author is using “rationalist” in the sense it has come to have in tech-y corners of the internet. Which is not someone who falls into the intellectual tradition of, e.g., Descartes, but rather a person enamored of “Bayesianism” and likely associated things like LessWrong, Effective Altruism/“longtermism”, Eliezer Yudkowsky, etc.

          It’s similar to how, say, “Java” in most contexts likely refers either to the island or to coffee, but in a tech-oriented forum almost certainly refers to the programming language and/or its associated ecosystem.

          1. 6

            I think it’s unreasonable to attempt a redefinition of a millennia-old philosophical tradition based on some obscure blog and the actions of someone known primarily for writing the world’s longest Harry Potter fanfiction.

            1. 9

              Whether you think it’s “unreasonable” isn’t really relevant – if you read a lot of tech and tech-adjacent forums, you’re going to find that in those forums, “rationalist” refers to the modern-day Bayesian people almost every time.

              1. 6

                if you read a lot of tech and tech-adjacent forums,

                I do.

                you’re going to find that in those forums, “rationalist” refers to the modern-day Bayesian people almost every time.

                This is not my experience.

                1. 5

                  This is not my experience.

                  This is simply the day you learn about it; no shame in that. There was a day in your life when you didn’t know about Coke, either.

                  But yes “rationalists” on hackernews or here often* refers to something different than the classical meaning you might learn in a logic or philosophy course.

                  *not always

                  1. 9

                    I’ve been an active poster on HN since 2009, and older communities (Slashdot, etc) before then.

                    I’m not trying to be rude here, but I think you’re significantly over-estimating how many people (even on “tech-y corners of the internet”) have even heard of LessWrong. That population is almost certainly orders of magnitude smaller than the people who took several quarters of logic and/or philosophy as part of their undergrad degree.

                    If you want to claim there’s a set of very young (or very parochial) folks in a Twitter/Tumblr bubble who think “rationalist” refers to fans of a particular blog, then I’ll accept that, but I would also say that’s all the more reason not to put any weight on their opinions.

                  2. 3

                    Fully agreed, I’ve never stumbled over this term as used in such a way, so I’m siding with unreasonable :P

                  1. 5

                    Oh my god.

                    I love that you have citations for this but … oh my god.

                    I knew HPMOR is longer than War and Peace, but I had never expected someone would write three and a half million words of fanfic about anything, much less Harry Potter.

                    A Second Chance by Breanie

                    Part 2 of The Kismet Trilogy

                    Three and a half million words and it’s part of a trilogy!

                    I can’t handle this right now, or maybe ever.

                    1. 2

                      I mean… it’s been 6,400 business days since HP1 was released, and the author would have had to average 560 words a day to reach that number. Hardly impossible, especially for a creative person.
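
                      For what it’s worth, the division checks out (figures taken from the parent comments):

```python
# Back-of-the-envelope check of the parent's arithmetic: roughly 3.5
# million words over roughly 6,400 business days since HP1's release.
total_words = 3_500_000
business_days = 6_400
per_day = total_words / business_days
print(round(per_day))  # 547 -- same ballpark as the parent's 560
```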

                    2. 3

                      Jeepers, you have to get to page 6 in the AO3 listing to hit 660k words. Fanfic is such an amazing thing (I don’t partake, I just find this kind of remixing culture interesting).

                  2. 6

                    Of course, the rationalists (LessWrong, EA, Yudkowsky) are against enumerating modelable threats and assuming you thus have the problem handled, certainly in the case of AI. A big point is that we have to account for the fact that we can be hideously limited in our understanding of a problem.

                    (Like, Eliezer is all about “navigating a sea of unknown unknowns”.)

                    That said, if the author is calling out LW specifically, unless this is about AI alignment, I have no idea what they are talking about. The LW “canon” and community has no particular opinion on infosec.

                    (I also don’t know what Chapman is talking about. He doesn’t seem like he’s directly responding to anything Eliezer or anyone on LW said. Though I do actually, as a LWite, think he’s wrong on his own merits.)

                    1. 3

                      It’s similar to how, say, “Java” in most contexts likely refers either to the island or to coffee, but in a tech-oriented forum almost certainly refers to the programming language and/or its associated ecosystem.

                      No, it’s not similar to that. “Java” has multiple meanings, but these meanings are not competing, e.g. “learning Java” or “drinking Java” clearly refer to different kinds of Java, and it is clear from context which one you mean. This is different from saying things like “I’m a rationalist”, “this is a rationalist view”, “rationalists are wrong” and the listener assuming a different kind of rationalist than what you meant.

                      1. 4

                        Perhaps a better analogy is when C.S. Peirce found the term “pragmatism” being co-opted and used by others – particularly William James – in ways he didn’t like. People “stealing” established terms like this is not exactly a new phenomenon and, like I pointed out, in tech and tech-adjacent forums it’s almost always safe to assume “rationalist” means the Bayesian/Yudkowsky people.

                        (Peirce worked around it by renaming his philosophy “pragmaticism”, claiming the word was too ugly for anyone else to want to take it from him.)

                      2. 2

                        Not sure what you’re about here. First, LessWrong doesn’t call its thing “rationalism”, it calls it “rationality”. I think the slight change in terminology is important, since it helps highlight that it’s an explicit departure from previous, similar-looking traditions.

                        Second, there’s a reason people become enamoured with Bayesianism: probability theory is correct. See the first 2-3 chapters of E. T. Jaynes’ Probability Theory: The Logic of Science for more details, but the crux is: probability theory requires surprisingly few axioms, axioms that would be unreasonable to deny.
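
                        For concreteness, the core Bayesian move is a one-line update; a toy example with invented numbers:

```python
# Minimal Bayes-rule update, the move "Bayesianism" is named after.
# All numbers here are invented for illustration.
p_h = 0.01        # prior: P(hypothesis)
p_e_h = 0.90      # likelihood: P(evidence | hypothesis)
p_e_nh = 0.05     # false-positive rate: P(evidence | not hypothesis)

p_e = p_e_h * p_h + p_e_nh * (1 - p_h)  # total probability of evidence
posterior = p_e_h * p_h / p_e           # Bayes' rule
print(f"{posterior:.3f}")  # 0.154 -- strong evidence, still unlikely
```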

                        Now the problem with probability theory (as anyone sufficiently familiar with it knows) is that it’s also intractable. Correctly reasoning about anything complex enough brings about unmanageable combinatorial explosion, and we end up dealing with worse-than-NP-complete problems all the time. In most cases we can only apply an approximation of probability theory, and that’s clearly incorrect. Our best hope is to get close enough, and to my knowledge we don’t yet have a theory of how to “correctly” approach correct reasoning.
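
                        The blow-up is easy to make concrete: a full joint distribution over n binary variables already needs 2^n − 1 numbers, which is hopeless long before n gets interesting:

```python
# Why exact probabilistic reasoning is intractable: a full joint
# distribution over n binary variables has 2**n - 1 free parameters.
for n in (10, 30, 100):
    print(n, 2**n - 1)
```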

                        Kind of frustrating, really: we have this beautiful Theory of Truth, that crushes our dreams of ever reaching absolute certainty. But at least we can get close enough, right? But then computational theory crushes our dreams again by pointing out we cannot apply our beautiful Theory of Truth in the first place.

                        I believe this is why LessWrong is called “LessWrong” instead of “OvercomingBias” (that was on purpose) or “MoreRight”. We cannot completely overcome biases, and forget about being right. The best we can hope for is making fewer mistakes than the other folks.

                    2. 3

                      I’ve long thought that modern technology consists of three areas: 1. do something for me (CPU cycles), 2. store something for me (memory), and 3. talk to somebody else (communications).

                      These three areas should be part of the metal and run on isolated systems. At that point we can talk about vendors, approaches, and strategies for each system. Until then, though, we seem to have stuck all of our eggs into one basket. To me it’s much too difficult to reason about them all at once, as this author does. It quickly runs to extremes like nihilism. There’s a structural reason why that happens (and doesn’t have to).

                      1. 2

                        This reasoning left me a little confused as to how sprinkling in some best practices (e.g. TLS) has anything to do with known unknowns. Sure, it’s a good idea, but I feel it’s misplaced in this argument. It doesn’t matter if I have an XSS on http or https. You could only make it worse (cf. CORS).