1.  

    Work: Reading materials on marketing and consulting to pitch to companies better. Doing a bunch of revisions of the 3 day TLA+ workshop to make it simpler and cover more. Rehearsing a beta of my CodeMesh talk.

    Fun: Trying to find some knitting classes in Chicago. Figuring out what kind of volunteering to do.

    1. 7

      Most of these no longer apply, of course, as the hacker test was published almost 30 years ago. But that makes it really interesting to me as a historical document of what was part of the “hacker” culture. So I think it would be fun to start documenting all of the references in the test. I started the project here, feel free to contribute!

      1. 2

        This is beautiful.

        I remember taking this test back in the early 90’s after getting it off netnews and it’s always been a fond memory of mine. I’ll see what I can do about documenting what I can.

        1. 1

          That’s a good idea. We could also do a modern one, or several for different niches. We could start it with the questions from this one that still apply.

        1. 2

          Every year I tell myself “this is the year I finally do my NaNoGenMo idea” and every year I decide against it. I’m not gonna let another year slip by like this, so:

          If I don’t have a complete NaNoGenMo done by November 30, I’m donating $100 to the EFF.

          Y’all hold me accountable plz

          1. 1

            Could backfire; now all fans of the EFF have a monetary incentive to sabotage your project!

          1. 63

            First of all, equating “points” with “value” is a big and common fallacy you should reject, but I think other people will cover that much more eloquently than I can. So, my “one thing”:

            Slow down.

            Plan ahead before diving in. Take the time to build infrastructure. Do research, do cost analysis, do risk assessment. Make decision tables and sketch out state charts. I’ve even started writing outlines of programs in a notebook before vimming out the code. As long as you’re thinking ahead.

            This helped me far more than any other advice I heard. I don’t write code as fast as I used to, but overall I’m a much more productive developer when I move slow.

            1. 29

              What was one thing that made you a faster/better developer?

              …working for a company (Amazon) that treated code like a liability and encouraged solving problems with the minimum amount of it - like performing surgery with the minimum number of incisions.

              1. 4

                treat[…] code like a liability

                Preach.

                @pab: Go sit down with whoever does operations for your team/org/company and watch how they work. You’ll learn why those tools written in the 70s are still on your laptop today, and how you can use them to solve some problems much faster and more reliably than other developers on your team can :)

              2. 8

                Velocity is a by-product of quality. And quality only comes when you think deeply about what you’re building, which requires that you escape the frenetic “move fast and break things” ideology.

                Aim to write code that’s good enough not to return to unless requirements change. Because then you can focus deeply on the next thing at hand.

                1. 3

                  First of all, equating “points” with “value” is a big and common fallacy you should reject

                  Don’t tell me—tell my manager! 😉

                  In all seriousness, the things you are suggesting sound like they’re outside the scope of my job role (the things you listed are normally dictated down to me). Perhaps taking on more responsibility or pushing back may unlock some knowledge?

                1. 1

                  I’d also ask if you had any certifications in either. I’ve seen more emphasis on that in Other than in software.

                  Also, http://thecodelesscode.com/case/154

                  1. 2

                    Also, http://thecodelesscode.com/case/154

                    That’s exactly what I’m talking about. The author is not a civil engineer and is speculating on how people build bridges. I’ve only met one person with the relevant expertise and they thought that koan was cringingly wrong about how bridge building actually works.

                    Hence why I want to survey people who’ve been in both worlds and find out what the actual differences are.

                  1. 34

                    I’m a chocolatier. I buy couverture in bulk and make chocolates. I usually bring them to events and conferences for funsies. I currently have cherry cordials, taro truffles, and candied orange peels in my cupboard.

                    I do a lot of cooking in general, too.

                    My other big hobby is juggling. I can do a five ball cascade one time in three, but these days I’ve been slacking off on toss juggling in favor of cigar boxes.

                    I want to volunteer more and learn knitting, but haven’t really started doing either of those.

                    1. 6

                      Did you grow up with someone who was a chocolatier? How’d you get into it?

                      1. 4

                        Nah, I didn’t even know how to cook until I got to college. Then I got obsessed. One Christmas I was alone on campus, got cabin fever, and decided I was gonna learn confectionary. Been doing it ever since!

                      2. 6

                        I’m a chocolatier. I buy couverture in bulk and make chocolates. I usually bring them to events and conferences for funsies. I currently have cherry cordials, taro truffles, and candied orange peels in my cupboard.

                        Oh that sounds like a lot of fun. I didn’t think of that as a hobby, any tips for anyone who wants to get started as an amateur? Recommended books or websites?

                        (I would just like to try this once, sounds like a nice activity to do with a kid as well!)

                        1. 9

                          I did a writeup here last time this question was asked!

                          1. 1

                            Thanks for the link and writeup!

                        2. 4

                          Ever since @JordiGH mentioned knitting, I’ve been wanting to join ravelry and learn to knit. I have yet to start as well.

                          1. 4

                            Do it! Yarn and needles are cheap to buy at your local craft mart, so it’s low cost to learn and find out if you like it. Ravelry has lots of patterns for basic coasters, which are small enough to learn on and finish quickly.

                          2. 3

                            Time to start making your own chocolate then. I use a Premier Chocolate Refiner.

                            1. 3

                              knitting

                              I love knitting, it keeps me sane. I have a pair of socks in my bag that have an easy pattern, so I have something to occupy my hands in meetings. And I work on more complex stuff in the evenings so my Youtube and Netflix downtime has something to show for itself.

                            1. 4

                              I think this is something we can solve through (very gentle) shaming. Whenever someone leaves a comment that shows they didn’t read the article, I try to leave a comment saying “that’s explicitly covered in the article.”

                              1. 1

                                That’s a really good approach. I’ve seen some of your comments already, but I must admit I was too thick to recognize the gentle hint.

                              1. 3

                                https://simple.wikipedia.org/wiki/Turing_complete

                                Actual computers have to operate on limited memory and are not Turing complete in the mathematical sense. If they can run any program they are equivalent to linear bounded automata, a weaker theoretical machine. Informally, however, calling a computer Turing complete means that it can execute any algorithm.

                                The article makes the statement that:

                                Of course, on a real computer, nothing is Turing-complete, but C doesn’t even manage it in theory.

                                implying that there are languages that do manage this.

                                I don’t understand this. At some point all languages end up with the same representation (machine code).

                                Is the author’s statement equivalent to saying that C deals with pointers directly while other languages (like say Python) don’t?

                                Could someone give an example of a language that is theoretically Turing complete in the sense the author wants to imply?

                                Thanks!

                                1. 3

                                  I think any almost Turing-complete language that doesn’t specify the representation of objects should count. If the language spec doesn’t guarantee that any object can have its address taken with some finite number of bits, then it avoids this particular problem.

                                  (To clarify, “almost Turing complete” is the informally specified class of things like C, that support all the goodies of Turing machines but don’t technically have unlimited memory. I do not know what kind of automaton that actually is.)

                                  1. 5

                                    I think C gives you a pushdown automaton, as the function call stack is unbounded.

                                    1. 3

                                      A two-stack pushdown automaton is Turing complete, so my suspicion is that a one-stack + the rest of C might still be TC.

                                  2. 4

                                    So

                                    1. the claim is false. Real computers are equivalent to finite state machines, not linear bounded automata. This is trivial to show: run the classic LBA problem of checking to see if parentheses match. There is some n so that after n (’s the program will be unable to record how many it has seen. QED

                                    Edit: I couldn’t help myself and went and fixed the Wikipedia page

                                    1. I think the meaning may be that some languages have, in theory, bignum types that are not finitely bound. Of course, C supports bignums from a library so it’s wrong on that too [edit for personal Coc].
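
                                    The counting argument in point 1 is easy to make concrete. A minimal sketch in C (the 8-bit counter width is an assumption chosen purely to make the wraparound visible; any fixed width fails the same way at some depth):

                                    ```c
                                    #include <assert.h>

                                    /* Check balanced parentheses with a fixed-width counter. Any real
                                       machine has some fixed word size; 8 bits just makes the failure
                                       easy to demonstrate. */
                                    int balanced(const char *s) {
                                        unsigned char depth = 0;              /* wraps around at 256 */
                                        for (; *s; s++) {
                                            if (*s == '(') depth++;
                                            else if (*s == ')') { if (depth == 0) return 0; depth--; }
                                        }
                                        return depth == 0;
                                    }

                                    int main(void) {
                                        assert(balanced("(())"));
                                        assert(!balanced("(()"));

                                        /* 256 opening parens: depth wraps back to 0, so this
                                           unbalanced string is wrongly accepted. */
                                        char s[257];
                                        for (int i = 0; i < 256; i++) s[i] = '(';
                                        s[256] = '\0';
                                        assert(balanced(s));
                                        return 0;
                                    }
                                    ```

                                    With 256 opening parens the counter wraps back to 0 and the checker wrongly accepts an unbalanced string, which is exactly the finite-state limitation in the proof sketch.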
                                    1. 3

                                      Would there also be some FSMs they can’t simulate, because those FSMs have too many states?

                                      1. 1

                                        Sure, you can have a machine with 256^(2^64) + 1 states, but that just means you need a bigger computer. :)

                                        1. 1

                                          definitely.

                                        2. 2

                                          The argument TFA is making excludes “external” bignum libraries, since those are not part of the core language. If the libraries were implemented using pure standard C as part of our exercise, the arguments TFA makes would still apply (there are a finite number of pointers in C, every object can have a pointer obtained for it, and thus the total number of objects, and hence of distinct tape locations, is finite).

                                          I believe I countered those arguments in my other comment, however, by using the stdio functions to emulate a Turing machine tape.
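
                                          I don’t have that exact construction to hand, but the stdio-tape idea can be sketched roughly like this; the helper names are hypothetical, not the code from the other comment. Note the offsets are still `long`, which is the kind of bound TFA’s argument leans on:

                                          ```c
                                          #include <stdio.h>

                                          /* Sketch: use a stdio stream as the tape, so no pointer to the
                                             tape's storage ever appears in the program. Unwritten cells
                                             read as blank (0). */
                                          static int tape_read(FILE *tape, long pos) {
                                              fseek(tape, pos, SEEK_SET);
                                              int c = fgetc(tape);
                                              return (c == EOF) ? 0 : c;
                                          }

                                          static void tape_write(FILE *tape, long pos, int sym) {
                                              fseek(tape, pos, SEEK_SET);   /* fseek also satisfies the
                                                                               read/write switching rule */
                                              fputc(sym, tape);
                                          }

                                          int main(void) {
                                              FILE *tape = tmpfile();
                                              tape_write(tape, 5, 'x');
                                              printf("%c\n", tape_read(tape, 5));   /* prints x */
                                              fclose(tape);
                                              return 0;
                                          }
                                          ```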

                                          1. 1

                                            I think the article is silly, but bignums don’t help. The claim as I understand it is that a Turing machine has a tape of infinite cells. It is an infinitely sized array, which C does not have. Even with a bignum implementation that can go to infinity (practically impossible), that’s the range of one value, not the number of different values. Turing machines have a finite range of values (symbols), too.

                                            1. 2

                                              If you have unbounded integers - bignums in some theory - then you have an infinite tape. Think of the tape as holding a string of bytes: if you can pull bytes off the bignum, you can read the tape at any position.

                                                  bignum_t data_position(bignum_t position, bignum_t tape) {
                                                      while (position > 0) { tape = tape / 256; position--; }
                                                      return tape % 256;
                                                  }

                                              1. 2

                                                I was thinking about that, wasn’t immediately sure if the conversion was legit.

                                                But in any case, the same rule applies. The bignum is declared by the standard to be of finite size. Or a linked list of finite elements.

                                                1. 1

                                                  does the C standard say anything about bignums?

                                                  Anyways, it’s a silly game. Turing machines are intended to represent a model of what is “effectively computable” in the mathematical sense, which is only loosely connected to what is computable by our computers.

                                                2. 1

                                                  C doesn’t have unbounded integers. A bignum library is still written in C. The C standard requires that implementations have bounded memory.

                                                  1. 1

                                                    Of course, all programming languages run on computers with bounded memory. So I guess your point is that the higher level of precision in the C specification means one cannot, incorrectly, interpret the standard as permitting “integers” that have infinite range?

                                                    1. 1

                                                      Of course, all programming languages run on computers with bounded memory.

                                                      No they do not. Languages don’t ‘run’. You can give them meaning with semantics, and there are many possible interpretations of C, some of which are implementable.

                                                      So I guess your point is that the higher level of precision in the C specification means one cannot, incorrectly, interpret the standard as permitting “integers” that have infinite range?

                                                      It’s not that C has a ‘higher level of precision’. In fact the C standard is not precise at all. It has lots of ambiguities, and it has a lot of behaviour that is simply left undefined.

                                                      The entire concept of turing completeness is that it’s about operations of potentially unbounded size. You can absolutely implement a turing machine. All you need to do is say ‘whenever an operation would go beyond the bounds of the tape, extend the tape’. A turing machine doesn’t have actually infinite memory; it has potentially infinite memory.

                                                      C in comparison is required to have a fixed amount of memory. Of course you cannot give it an actually infinite amount of memory, but you could implement it in a ‘if we take up too much space add more space’ kind of way as is possible with Brainfuck. But because of the specification, this isn’t even possible in theory.
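
                                                      For what it’s worth, the ‘add more space when you run out’ scheme is simple to write down; the dispute is only whether the standard permits an implementation where the allocation below could keep succeeding forever. A hypothetical sketch, not a claim about any real implementation:

                                                      ```c
                                                      #include <stdlib.h>
                                                      #include <string.h>

                                                      /* A tape that grows on demand: cells beyond the current end
                                                         read as blank (0); storage is allocated only when written. */
                                                      typedef struct { unsigned char *cells; size_t len; } Tape;

                                                      static unsigned char tape_get(const Tape *t, size_t i) {
                                                          return (i < t->len) ? t->cells[i] : 0;
                                                      }

                                                      static void tape_set(Tape *t, size_t i, unsigned char sym) {
                                                          if (i >= t->len) {
                                                              size_t newlen = (i + 1) * 2;    /* grow geometrically */
                                                              unsigned char *p = realloc(t->cells, newlen);
                                                              if (!p) abort();                /* the bound, in practice */
                                                              memset(p + t->len, 0, newlen - t->len);
                                                              t->cells = p;
                                                              t->len = newlen;
                                                          }
                                                          t->cells[i] = sym;
                                                      }
                                                      ```

                                                      In practice `size_t` and `realloc` still bound this tape; the point is that C’s specification bakes such a bound in, whereas (say) Brainfuck’s semantics does not.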

                                                      1. 1

                                                        The entire concept of turing completeness is that it’s about operations of potentially unbounded size. You can absolutely implement a turing machine. All you need to do is say ‘whenever an operation would go beyond the bounds of the tape, extend the tape’.

                                                        But, in fact, there is no computer or computer system in the world that can actually do that.

                                                        My bridge design is better than yours, because yours has a finite weight bearing specification and my specification is to add additional trusses whenever you need to.

                                                        1. 1

                                                          But, in fact, there is no computer or computer system in the world that can actually do that.

                                                          What does that have to do with anything? There’s no computer in the world that can actually represent an arbitrary real number either. Does that mean that the mathematics of real numbers is any less interesting or useful? Not at all. Turing-completeness is computer science. It’s theoretical.

                                                          My bridge design is better than yours, because yours has a finite weight bearing specification and my specification is to add additional trusses whenever you need to.

                                                          This isn’t about bridges, it’s about Turing-completeness. You don’t need to use analogies, the subject at hand is able to be discussed directly. Nobody is saying C11 is a bad specification. This is not a ‘Your language design is worse because it’s not Turing-complete’. It isn’t Turing-complete. That’s all there is to it. It’s not a value judgement.

                                                          Also, your analogy implies there’s something vague about ‘extend the tape whenever you have to’. There isn’t.

                                                          1. 1

                                                            Does that mean that the mathematics of real numbers is any less interesting or useful?

                                                            I’m utterly at a loss to see where you are going with this. If I state that Zk is a finite group that doesn’t mean I think there are no infinite groups or that the theory of infinite groups is not interesting. Programming languages describe programs for actual computers, they are very different from e.g. lambda calculus or PR functions. I guess it’s possible you are using “programming languages” in some sense that applies to mathematical notation for effective computation, but otherwise you appear to be arguing that the approximation of Z in, say, Haskell or Lisp is the same as Z because nobody bothers to tell you in the language specification the obvious fact that the approximation is not the same as Z because it rests on finite length bit patterns.

                                                            1. 1

                                                              I’m utterly at a loss to see where you are going with this. If I state that Zk is a finite group that doesn’t mean I think there are no infinite groups or that the theory of infinite groups is not interesting.

                                                              Then why are you talking about what real computers can actually do in a discussion about Turing-completeness? That’s like people talking about some property of infinite groups and you going ‘but that doesn’t hold in finite groups’. Yeah nobody is talking about finite groups. What computers can ‘actually do’ has literally nothing to do with whether programming languages are Turing-complete.

                                                              Programming languages describe programs for actual computers, they are very different from e.g. lambda calculus or PR functions. I guess it’s possible you are using “programming languages” in some sense that applies to mathematical notation for effective computation, but otherwise you appear to be arguing that the approximation of Z in, say, Haskell or Lisp is the same as Z because nobody bothers to tell you in the language specification the obvious fact that the approximation is not the same as Z because it rests on finite length bit patterns.

                                                              Scheme is basically untyped lambda calculus with some built-in functions and Haskell is basically typed lambda calculus with some built-in functions, some types and a bunch of syntactic sugar. Programming languages are formal languages. Whether people have implemented them on computers has nothing to do with this discussion. They’re formal languages that can be given formal semantics and about which questions like ‘Is there a limited number of algorithms that can be expressed in this language?’ can be asked.

                                                              Haskell and Lisp don’t have an ‘approximation of Z’. They have things called integers that are specified to behave like mathematical integers. For example, R4RS merely permits implementations to restrict the range of its types:

                                                              Implementations may also support only a limited range of numbers of any type, subject to the requirements of this section. …

                                                              Implementations are encouraged, but not required, to support exact integers and exact rationals of practically unlimited size and precision, and to implement the above procedures and the / procedure in such a way that they always return exact results when given exact arguments. If one of these procedures is unable to deliver an exact result when given exact arguments, then it may either report a violation of an implementation restriction or it may silently coerce its result to an inexact number. Such a coercion may cause an error later.

                                                              But nowhere does the standard require that integers have a maximum length like C does. Nowhere does the standard require that at most N objects may be represented in memory like C does. It permits these restrictions but it doesn’t require them.

                                                              This means you can consider a hypothetical implementation where any integer is valid, where memory is unbounded, etc. and consider whether it’s possible to implement a Turing machine. And of course, it is. That’s why Scheme is Turing-complete. C, on the other hand, makes such an implementation illegal.

                                                              1. 1

                                                                Then why are you talking about what real computers can actually do in a discussion about Turing-completeness?

                                                                Because you brought up programming languages where, as in your spec, “integers” may have “practically unlimited size” - a specification that is impressively imprecise, but sufficiently clear to make the point.

                                                                As for Scheme being untyped lambda calculus, you can twist yourself into a pretzel and find some justification for set! (and set-car, set-cdr) if you want, but I don’t see the utility or find it that interesting.

                                                                1. 1

                                                                  Because you brought up programming languages where, as in your spec, “integers” may have “practically unlimited size” - a specification that is impressively imprecise, but sufficiently clear to make the point.

                                                                  The specification does not state that integers may have practically unlimited size. That’s not a specification. It’s a non-normative suggestion to the implementer.

                                                                  The specification does not give a maximum size to integers. Why would it? It also doesn’t require that an implementation’s integers have a maximum size. Why would it?

                                                                  And programming languages are formal languages like any other.

                                                                  As for scheme being untyped lambda calculus , you can twist yourself into a pretzel and find some justification for set! (and set-car, set-cdr ) if you want, but I don’t see the utility or find it that interesting.

                                                                  I’m not talking about set!.

                                            2. 2

                                              I don’t understand this. At some point all languages end up with the same representation (machine code).

                                              The implementation of a language is not the same as the semantics of the language.

                                              Could some one given an example of a language that is theoretically Turing complete in the sense the author wants to imply?

                                              Brainfuck. It doesn’t bound the amount of memory you can access, which is how it achieves Turing completeness.

                                              Of course, an implementation on a real machine can’t have access to unbounded memory, but that’s irrelevant.

                                              1. 1

                                                I don’t understand this. At some point all languages end up with the same representation (machine code).

                                                This is not true. A language is a set of strings over some alphabet. In the case of most modern languages that alphabet is Unicode, but for some languages it’s ASCII and there are even some weirder examples. The syntactic rules of a language are used to determine which strings over that alphabet are valid. "main(){}" is I believe a valid C programme. "blah(){}" is possibly a valid freestanding C programme. Possibly the empty string is too.

                                                It’s possible of course for a language to actually have no syntactic constraints at all. Most have many. In very few cases is the formal grammar in a language specification actually the real grammar. Things like ‘is a variable actually defined everywhere it is used’ are not normally included in those. They give a context-free approximation of the grammar.

                                                The representation of a language is the language. It’s not machine code. A translation into machine code is a semantics for a language.

                                                The semantics of a programming language are ‘what those strings mean’. What does main(){} mean? What does 1 + 2 mean? The specification of a language in some way constrains the possible semantic interpretations of the language. For example the C specification requires that if you cast a pointer to char* then back you’ll always get the same pointer. But at the end it also has a rule that relaxes the requirements on compilers: they may interpret any programme in a way inconsistent with the specification that is not observably different to an interpretation consistent with the specification. In other words, it’s the “as if” rule: you can interpret a program differently from what the spec says as long as the programmer can’t actually tell. This is what allows optimisation.

                                                Semantics don’t have to be in terms of a translation into machine code or the operation of an actual computer. Real formal semantics usually are not given in terms of the semantics of machine code. They can be given in terms of some sort of ‘virtual machine’ or hypothetical unimplementable machine.

                                                The C specification has some rules that appear to require that any implementation has only finite addressable memory. This is not true for example of the Haskell standard, as far as I am aware.

                                              1. 3

                                                Your experience unfortunately matches mine with generic solvers of all kinds (well, except one): it’s too slow for any input that could interest me. I’m amazed at how clear the write-up is despite things not going that well. I might try some of the same things but would be mum if asked to explain it.

                                                What happens if, to break symmetry, you just said “there are at most # rooms (R) talks at any time”? And remove ROOMS entirely from the model.

                                                Like

                                                constraint forall(t in TIMES)(
                                                    sum(talk in talks)(talk_time[talk] == t) <= R);
                                                
                                                1. 2

                                                  I just tried that, and it’s pretty nice! You can rip out sched entirely. I couldn’t find a good way to add symmetry on times, but that plus a crude constraint talk_time[talks[1]] = TIMES[1]; gives you another 5x speedup.

                                                  This gave me an idea: what if we have a variable set of the talks for each time and link them via int_set_channel? In theory, this would give you another huge bonus.

                                                  array[TIMES] of var set of talks: talks_per_time;
                                                  
                                                  constraint int_set_channel(talk_time, talks_per_time);
                                                  
                                                  constraint forall(t in TIMES)(
                                                     card(talks_per_time[t]) = R);
                                                  

                                                  In practice, Chuffed can’t handle var sets :(

                                                  1. 2

                                                    Oh yeah, good idea. So you’re basically saying the talks are partitioned into TIMES sets, each of size at most R [1]. Looks like MiniZinc even has a builtin partition_set but Chuffed won’t handle that either.

                                                    I couldn’t find a good way to add symmetry on times, but that plus a crude constraint talk_time[talks[1]] = TIMES[1]; gives you another 5x speedup.

                                                    If instead of using partition_set, you wanted to do this by hand, you could continue along the same lines as the crude constraint and say “the lowest indexed talk scheduled at time i or later is always scheduled at time i” (for all i).

                                                    [1] I think you still need “at most” instead of “equal” unless you happen to have (# rooms) * (# time slots) talks (or add dummy talks nobody likes to get this to be true).

                                                  2. 1

                                                    solvers of all kinds (well, except one)

                                                    And that exception would be…?

                                                    1. 1

                                                      Solvers for problems with continuous valued variables (float values) with linear constraints and objective.

                                                    This might be a somewhat disappointing answer since the problems you can specify are much less generic. It usually needs some work just to translate the problem into the input format (if it can be done at all), which is the opposite of MiniZinc’s interface improvement.

                                                  1. 4

                                                    I think my breaking point on “more hardware beats more wetware” was when Gmail got too slow for me to use.

                                                    1. 4

                                                      As will have been evident, a guiding design principle for Inform was to imitate the conventions of natural language.

                                                      I think I would be a lot more sympathetic to this if “natural language” here didn’t in fact just mean “English”.

                                                      Basing a programming language off a natural language that is logical and consistent might actually be a good idea, but I don’t know of any actual attempts to try it. Basing a programming language on English is a terrible idea, because English is so terribly inconsistent, and you’re left guessing as to how the implementation is going to interpret a phrase where a human mind would have no trouble cutting thru the ambiguity. (Granted some of the ambiguity is inherent in natural language, but a good chunk of it is just legacy nonsense from our defective orthography. Tackling just the inherent ambiguities is hard enough without being stuck attacking both at the same time.)

                                                      The merits are: familiarity; […] conciseness, in that a lot of boring code becomes unnecessary; and perhaps a greater ease of expressiveness, because the lineaments of the language more closely follow our cognitive habits than would be true of, say, C++.

                                                      Using C++ as your measure of a non-natural programming language is quite the straw man; neither of these concerns have anything to do with the natural vs non-natural divide, just with good programming languages and bad programming languages. This passage seems to imply an unfamiliarity with non-natural programming languages which allow concise and expressive code.

                                                      Finally, it’s disappointing that this talk didn’t address the biggest problem in the Inform6->Inform7 shift: it changed from being free software to being another proprietary project.

                                                      1. 5

                                                        Basing a programming language on English is a terrible idea, because English is so terribly inconsistent, and you’re left guessing as to how the implementation is going to interpret a phrase where a human mind would have no trouble cutting thru the ambiguity.

                                                        In my experience, the natural language design in Inform does pose problems, but not exactly for ambiguity in interpretation. It’s more that in English, there are multiple ways to say the same thing, even if they differ by a few words. Of course, in a programming language there may also be multiple ways to “say” the same thing, but there are far fewer possibilities. Inform pretends to be a natural language, but it’s really a programming language, and you still have to be very particular about how you say things. It does work most of the time, because Graham Nelson did a great job of phrasing things in a way that makes sense, but it’s very easy to write a sentence and then spend the next 30 minutes wondering why it fails to compile, only to realize that you left off one word. (Or, my favorite: if you want to tell Inform to do something each time the turn counter increments, you say it by writing “Every turn, do XYZ”. If you type “Each turn, …” instead, it won’t have any idea what you’re talking about.) While the same problems, of course, abound in traditional programming languages – syntax is always the first thing to trip up people brand new to a language – the Inform IDE doesn’t do a good enough job, in my opinion, of highlighting potential errors in real time. I’m a bit disappointed that Graham didn’t mention any of this.

                                                        1. 4

                                                          From the talk:

                                                          One reason Inform hasn’t been open source in some years is that this infrastructure was such a mess. But not being open source is an existential threat right there.

                                                        1. 1

                                                          This looks like something I’m going to have to experiment with. Thanks for sharing!

                                                          1. 1

                                                            hillel, as our resident formal methods wonk, i’m very interested to hear if you think this idea has legs! when it occurred to me it seemed like something that would be obviously useful to have, which is not an impression i’ve managed to convey to anybody else. so i’d be very interested to hear if it resonates at all.

                                                          1. 13

                                                            You can’t call something an “objective argument” when your only evidence is “experience”. At the very least do a proper controlled study.

                                                            1. 7

                                                              I think I can if my definition is right. I go by popular usage of the words:

Subjective: Something that’s in one’s own mind whose form or reasoning outsiders can’t see. Maybe also something derived from that.

Objective: Something in the real world that’s measurable, where we know we’re talking about the same thing.

                                                              Empirical: Builds on objective claims adding things like experiments.

The linked statement would be objective because it was based on real-world measurements of language style and problems that we can all understand. It’s not scientific or empirical, since there were no controlled studies and replication to be sure there was a causal link. This objective claim does get people thinking, though. It can also be a starting point for empirical studies and claims.

                                                              That was my reasoning anyway.

                                                              1. 5

                                                                “Popular usage” is only an objective criterion if we accept your definition of “objective”. Which makes it a circular definition. “Objective” actually means “existing independent of or external to the mind” or “uninfluenced by emotions or personal prejudices”, as per Farlex. The argument you present sounds very much like the product of a particular mind, and very little effort has been made to examine it as part of a broader context.

                                                                Also, science isn’t rationalizing your claims with evidence. That’s just debate. Science is only unbiased when one starts from a clean slate and lets the evidence found speak for itself.

                                                                Finally, regardless of your intentions, these kinds of posts will always be seen as flamebait. This is something many people have a subjective view on, to little substantive end. But I appreciate that much of the response has been focused on the hard claims being made, rather than personal beliefs—even if it is ultimately personal beliefs motivating that response.

                                                              2. 1

                                                                I think it’s fine to say it’s an objective argument, but that doesn’t imply that it’s the most important argument. I think it’s a legitimate bullet point on a list of pros/cons, but a comprehensive list will have plenty of other bullet points too.

                                                              1. 9

                                                                I saw this a while ago and while helpful to some degree, I think it conveys a narrow view of what mathematical notation is about. Notation in mathematics isn’t fixed and it isn’t a programming language, and most of the time mathematicians are not even trying to approximate a programming language. A good mathematical text can be read out loud, in full sentences. Mathematical notation is just abbreviation for English or other natural languages. If you look carefully, you’ll observe that in papers that follow the American Mathematical Society style guide, a very influential style, they use punctuation like commas, periods and semicolons within mathematical notation, because it really is just abbreviated English.

                                                                The linked document on github covers a narrow subset of mathematical notation, which may nevertheless be sufficient for most things that computer people might be interested in, but may convey the wrong impression when going far afield from calculus, linear algebra, and statistics. An example from the document, the hat can mean a bunch of different things, and using it to represent a unit vector isn’t even all that standard from my experience. A unit vector may or may not be represented by a hat, or by an entirely different symbol, and a hat can also mean things as disparate as a Fourier transform in harmonic analysis or an estimator in statistics. It really depends what the author wants it to mean. A human reader should not be immediately thwarted by the presence or absence of a hat in an unfamiliar context.

I know this view of mathematical notation can be frustrating to computer people who are used to compiler errors and a very rigidly defined syntax, but mathematicians aren’t usually writing for computers: they are writing abbreviated natural language for other humans. Like all natural languages, mathematical writing is full of context dependence and ambiguity. Despite computer people’s aspirations, this fuzzy and human interpretation of mathematical writing isn’t going to go away soon, so it pays to be aware of it and try to get used to it. I say this because fighting it only results in isolation and balkanisation of mathematical traditions, with computer people going off on their own doing their own thing with mathematical people on the other side of this divide.

                                                                1. 2

                                                                  Heck, you don’t even have to go that far afield for things to shift in meaning. I’ve seen some linear algebra books use 1 to mean an identity matrix instead of a number.

                                                                  1. 1

It’s not so uncommon to use 1 for the multiplicative identity; a lot of German math books, for example, even call rings with unity “rings with 1”.

                                                                    Edit: removed extra books.

                                                                1. 13

                                                                  You might find this interesting, it’s an attempt to predict bugs by language features. Unsuccessful, but still interesting enough for me to finish.

                                                                  http://deliberate-software.com/safety-rank-part-2/

                                                                  1. 5

                                                                    edit: Hey that is actually really cool and interesting, (the point about clojure is interesting too). It is also a pretty smart way to gather data in a situation where it is normally extremely hard to do so.

                                                                    Something I just read today too - less about bugs, but more about robustness

                                                                    http://joeduffyblog.com/2016/02/07/the-error-model/

                                                                    1. 3

                                                                      Thanks! Good link too.

Speaking of which, I highly recommend learning Haskell. It’s a lot of work, but it’s really changed how I think about programming. I would absolutely go back and do it again. It really makes the easy things hard (tiny scripts) but the hard things easy. Very much worth learning in my mind.

                                                                    2. 1

                                                                      While Tail Call Optimization would certainly be nice to have in Go to improve performance, in practice it’s not a cause of defects because people just use iteration instead of recursion to accomplish the same thing. It doesn’t look as “nice” but you don’t get stack overflows.
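Not Go, but the general point can be sketched in Python, which likewise does not eliminate tail calls (the function names here are just for illustration):

```python
def sum_rec(n, acc=0):
    # Tail-recursive in form, but without tail call elimination
    # each call still consumes a stack frame.
    if n == 0:
        return acc
    return sum_rec(n - 1, acc + n)

def sum_iter(n):
    # The same computation as an explicit loop: constant stack space.
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

sum_iter(100_000)       # fine
try:
    sum_rec(100_000)    # blows past the default recursion limit
except RecursionError:
    pass
```

So in practice people reach for the loop, and the stack overflow never comes up.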

                                                                      1. 1

                                                                        Arguably that could be said of all the things on that list. Every programming language community has idioms to best use the available feature set.

                                                                        Specifically for recursion, I was assuming that the mental shuffle to convert something from recursion (often more elegant and simple) to iteration would cause issues. Since the whole model doesn’t work very well, I clearly was wrong in multiple places, and this very well could be one.

                                                                        1. 5

                                                                          Specifically for recursion, I was assuming that the mental shuffle to convert something from recursion (often more elegant and simple) to iteration would cause issues.

                                                                          I could be wrong, but I suspect most developers find iterative algorithms more straightforward to write iteratively, not recursively, and consider writing them recursively a mental shuffle.

                                                                          It wouldn’t surprise me if comfort with recursive algorithms is a predictor of developer proficiency, though.

                                                                          1. 2

                                                                            You are probably right, but I’d guess now that is more because most developers work in languages that don’t support recursion. Originally, I was also going for the idea that it offers a way for the developer to make a mistake without realizing it. In this case, they don’t realize that recursion isn’t tail optimized, since the language allows it without warning. But since I have yet to see anyone use recursion unless they are used to languages with immutability (and even then they probably just use fold), it probably doesn’t come up much.

                                                                            As such, it probably makes sense to remove that item, which doesn’t change much, just slightly raises the “c-style” languages and lowers the “lisp-style”.

                                                                            1. 2

                                                                              but I’d guess now that is more because most developers work in languages that don’t support recursion.

                                                                              Most people think about problem solving in an iterative way. They’ll do this, then this, maybe this conditionally, and so on. Imperative. Iterative. Few people think of their problems in a recursive way without being taught to do so. That’s probably why most prefer iterative algorithms in programming languages.

                                                                              1. 3

                                                                                To fully shave the yak, I’d argue this is entirely a product of how programmers are taught. Human thinking doesn’t map perfectly to either format. Recursion is just saying, “now do it again but with these new values”, and iteration requires mutations and “storing state”. Neither are intuitive - both need to be learned. No one starts off thinking in loops mutating state.

                                                                                Considering most programmers learn in languages without safe recursion, most programmers have written way more iterative loops so are the most skilled with them. That’s all, and this isn’t a bad thing.

                                                                                1. 3

Neither might be very intuitive. Yet, educational experience shows most students pick up iteration quickly but have a hard time with recursion. That’s people who are learning to program for the first time. That indicates the imperative, iterative style is closer to people’s normal way of thinking, or just more intuitive on average.

                                                                                  Glad we had this tangent, though, because I found an experimental study that took the analysis further than usual. I’ll submit it Saturday.

                                                                                  1. 1
                                                                                  2. 2

                                                                                    I agree. And I think there’s a lot that just isn’t possible with that mindset.

                                                                              2. 3

                                                                                Specifically for recursion, I was assuming that the mental shuffle to convert something from recursion (often more elegant and simple) to iteration would cause issues.

                                                                                I think it really depends on the algorithm. To my understanding, mapping and filtering is a lot easier recursively, but reducing and invariants tend to be easier iteratively.

                                                                              3. 1

                                                                                I think I remember reading Russ Cox doesn’t like tail recursion because you lose the debug information in the stack traces.

                                                                                1. 2

                                                                                  This is a big pet peeve of mine: because many languages use pointers in stack traces, you can’t see what the values were at that time. I think storing the value instead of just the pointers would be expensive, but it sure would be useful.
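For what it’s worth, Python’s standard traceback module can optionally do exactly this: snapshot each frame’s locals (as repr strings) when the exception is captured, at some extra cost. A small sketch:

```python
import traceback

def fail(x):
    y = x * 2
    raise ValueError("boom")

try:
    fail(21)
except ValueError as exc:
    # capture_locals=True stores the repr of each frame's local
    # variables at capture time, instead of keeping only references
    # whose values may have changed (or vanished) by the time you look.
    tb = traceback.TracebackException.from_exception(exc, capture_locals=True)
    report = "".join(tb.format())

# report now includes lines like "    x = 21" and "    y = 42"
# under the corresponding stack frames.
```

It being opt-in reflects the trade-off: snapshotting values on every exception would be expensive, so you pay for it only when you ask.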

                                                                                  1. 1

                                                                                    What information would you lose?

                                                                                    1. 2

                                                                                      I think that in this example, you’d think that initial directly called final:

def initial():
    intermediate()

def intermediate():
    final()  # with tail calls eliminated, this frame vanishes from the trace

def final():
    raise Exception("boom")
                                                                                      

                                                                                      This could make it extremely hard to debug if intermediate happened to modify state and it was the reason why final was failing.

                                                                                      1. 1

                                                                                        I think the call stack may be convenient for this purpose, but not necessary. I’m sure there are other (potentially better & more flexible) ways to trace program execution.

                                                                              1. 1

If you rotate figure 3 by 90 degrees, then the line passes through an even number of edges on one side, which falsely computes as “outside”. So irregular polygons can break the algorithm.

                                                                                1. 1

I didn’t look into it deeply, but I think the blue polygon near the end of the page has similar areas (e.g. the narrow “neck” part in the center), so the algorithm must resolve it somehow (it’s just not described clearly), seeing that it’s drawn correctly. That’s assuming I correctly understood that you’re alluding to the crossing lines as problematic.
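For reference, here’s a sketch of a typical even-odd (crossing number) test in Python. Treating each edge as half-open in y is the usual way implementations avoid miscounting when the ray passes exactly through a vertex; this is an illustration, not necessarily what the article’s code does:

```python
def point_in_polygon(px, py, poly):
    """Even-odd rule: cast a horizontal ray from (px, py) to the right
    and count edge crossings; an odd count means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # The (y1 > py) != (y2 > py) test treats edges as half-open,
        # so a vertex lying exactly on the ray is counted for exactly
        # one of the two edges that meet there.
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside
```

With that rule, a ray grazing a vertex toggles either zero or two times (net outside) or once (crossing), so rotations of the same polygon give consistent answers.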

                                                                                1. 6

                                                                                  Making as much chocolate as I can in prep for strangeloop.

                                                                                  1. 1

                                                                                    Early welcome to Saint Louis!

                                                                                  1. 39

                                                                                    I think this is a really important personal step for Linus. I believe this is a genuine self-realization for him; I hope he figures out how to deal with being more empathetic.

                                                                                    I also think it’s a good thing for open source software. He’s been at the helm of the biggest open source project in the world, as the original creator of it for over 25 years. Linus is one of the most important figures in the history of OSS. His success guarantees him the status of a role model for a new and current generation of hackers, whether he likes it or not. I would argue that his toxic behavior genuinely has encouraged the very same toxic behavior you can see in some OSS projects. His blow-ups and personal attacks are such an easy way for maintainers and devs to rationalize their own bad behavior. I really hope that he follows through with his personal behavioral goals, not just for himself or the people he interacts with - but for the attitude and personality of OSS overall in the long run.

                                                                                    1. 6

                                                                                      I would argue that his toxic behavior genuinely has encouraged the very same toxic behavior you can see in some OSS projects. His blow-ups and personal attacks are such an easy way for maintainers and devs to rationalize their own bad behavior.

                                                                                      His ‘blow ups and personal attacks’ are waaaay overblown. He’s said a few rude things on the internet in the tens of thousands of emails he’s sent over the years. Nobody ever quotes the 99.99% of emails he sends that are perfectly nice. They cherry-pick the absolutely worst things he’s ever said then act like they’re the typical Linus Torvalds email.

                                                                                      1. 13

                                                                                        Nobody ever quotes the 99.99% of emails he sends that are perfectly nice

                                                                                        I disagree with this statement.

                                                                                        I believe that part of the reason that both Linus and Linux are as successful as they are today is that Linus provides strong technical direction and is an encouraging, helpful person who has built a community around Linux. Part of the reason we’re able to talk about this is that people do want to work with Linus, despite his occasional rants - otherwise Linux would have been forked years ago to kick him off the project. We’ve seen it happen elsewhere.

                                                                                        I also believe that Linus in sweary rude mode has hurt feelings and put people off kernel development. And not just Linus’ words personally, but those of other people who see him doing it and believe that this is an acceptable way to express disagreement.

                                                                                        These two things aren’t incompatible! We don’t have to paint Linus as some terrible fire-breathing gatekeeper to admit that perhaps his manner has upset people.

                                                                                        The fact is - it’s very easy to write without realising how your words affect people (perhaps especially so online). And the fact that people have told Linus that his manner is not helpful and he is now listening, is encouraging.

                                                                                        1. 2

I don’t understand what you’re disagreeing with. Obviously people literally have quoted his polite emails in the past, but not in the context of these discussions. These discussions always come down to cherry-picked examples of rudeness. It’s like the point people make that you never see a newspaper headline saying ‘no planes crashed today’, not because it’s untrue but because it’s not newsworthy or interesting. ‘Muslim family completely normal, not terrorists’ isn’t news, and seemingly whenever Muslim families are in the news it’s about terrorism, so people start to assume every Muslim is a terrorist, which is absurd. There are people out there that think every email Linus sends is rude.

                                                                                          If Linus Torvalds has scared a few people off of Linux kernel development it’s not because he swore in an email, it’s because a big circlejerk convinced them that he only ever swears in emails. It’s not because he’s occasionally rude, it’s because they’ve been led to believe that he’s always rude.

                                                                                          1. 12

                                                                                            These discussions always come down to cherry-picked examples of rudeness.

                                                                                            This is the statement that I’m disagreeing with - I don’t think they do. I think we spend more time discussing examples of rudeness because they’re more newsworthy in this context, sure. But nobody that I’m aware of has ever claimed that Linus is always rude. In a similar vein, I’ve read newspaper articles about plane crashes which referenced statistics on the likelihood of a plane crashing, but even if they don’t, any sensible reader has enough context to know that planes not crashing is the baseline.

                                                                                            If Linus Torvalds has scared a few people off of Linux kernel development

                                                                                            Consider that it’s not just about Linus personally, but about the example he sets. He is the BDFL, and people will follow his model, consciously or otherwise. There is an amplification effect.

                                                                                            1. 0

                                                                                              His ‘model’ is being perfectly normal! What is it about that people don’t understand? He speaks exactly the same as anyone else. He just does so in very high volume over a medium that is viewable by people all over the world and archived for eternity.

                                                                                              1. 2

I do agree that it’s on the broad spectrum of ‘normal’. A person in his position (role model) will be held to a higher standard though, and I think that’s perfectly normal too. I hope you’re equally happy for people to express their disapproval of his comments as you’re happy for him to express himself in whatever way he pleases (and to accept the consequences of doing so). Or perhaps you think they should keep it to themselves? Double standards?

I just really can’t see the connection between being personally abusive and the ability to say no (and if it’s such a rare occurrence, then it shouldn’t make a significant difference in that respect anyway). I’m also not a fan of the implication that he speaks ‘the same as everyone else’ (that just comes off as an excuse). I mean I understand what you’re saying: that everyone will lose their cool sometimes, and those are the times we tend to remember (although there are plenty of famous people who aren’t known for this kind of behaviour, so I don’t think his reputation is entirely undeserved).

However, I can’t see how you can stretch it to disappointment that he has ‘given in’ to the masses. I get the impression that he has come to this conclusion more from personal interactions (which matter far more) rather than based on what Reddit thinks, or whatever. If that was the case, do you not think that would be perfectly reasonable?

Why are you so intent on seeing this as a weakness? Is losing your cool and becoming personally abusive not also a weakness, and is it not valid to see potential for improvement in the way you communicate? Would you be so upset about this if no one had ever been critical of his comments?

                                                                                        2. 13

                                                                                          Nobody ever talks about the 99.99% of people I didn’t murder.

                                                                                          1. 1

                                                                                            Well, except you. :-)

                                                                                      1. 7

                                                                                        I just finished my work on Practical TLA+! Since that (and a bunch of other stuff) meant I had no social life over the summer, I’m kicking that back up by hosting a dinner party tomorrow. Haven’t yet decided what I want to make, though.

                                                                                        Sunday is probably going to be quarterly taxes and business cards and stuff.

                                                                                        1. 3

                                                                                          Re TLA+. Congrats on getting through with it!

                                                                                          Re food. The dessert will be homemade chocolate.

                                                                                          1. 3

                                                                                            Dessert isn’t homemade chocolate; the chocolates come out after dessert :P Post-dessert chocolates are gonna be Earl Grey truffles and pumpkin-seed caramel bars. It’s the rest of the dinner I’m having trouble with.

                                                                                            I’ve mostly settled on jollof rice as the meat entree and mostly have my sides down, all that’s left is to figure out the vegetarian entree.

                                                                                        1. 5

                                                                                          In support of pruning non-determinism being less effective than avoiding it: if you have three concurrent agents with m, n, and p atomic steps respectively, the total number of execution orderings is (m+n+p)! / (m!·n!·p!). That gets really, really big really, really fast.
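
                                                                                          To make the growth concrete, here’s a small Python sketch (the `orderings` helper is mine, not from the thread) that computes that multinomial coefficient for three agents taking the same number of steps each:

```python
from math import factorial

def orderings(*steps):
    """Number of distinct interleavings of concurrent agents,
    where agent i takes steps[i] atomic steps: the multinomial
    coefficient (sum of steps)! / (steps[0]! * steps[1]! * ...)."""
    total = factorial(sum(steps))
    for s in steps:
        total //= factorial(s)  # exact integer division; factorials divide evenly
    return total

# Growth for three agents with n steps each:
for n in (1, 2, 5, 10):
    print(n, orderings(n, n, n))
# 1 -> 6
# 2 -> 90
# 5 -> 756756
# 10 -> 5550996791340
```

                                                                                          At only ten steps per agent you’re already past five trillion orderings, which is why pruning after the fact struggles to keep up.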

                                                                                          1. 2

                                                                                            Well, maybe, but you can also prune many of those orderings when the steps are either stateless or nonconflicting (e.g., agent0 with M steps that all operate on a local variable and result in a message send; agent1 with N steps that all operate on global state, but transactionally with a commit at the end; and agent2 with P steps that all operate on data that never escapes the agent during the lifetimes of agent0 and agent1).