1. 2

    Twas brillig and the slithy toves did gyre and gimble in the wabe \ All mimsy were the borogoves and the mome raths outgrabe.

    becomes

    Tw℁ briᄔig & 𐅫 sliЋy t㍵es d🆔 ㏉𐅏 & gimЫe ㏌ 𐅫 w🆎e \ Aᄔ mi㎳y we𐅏 𐅫 borog㍵es & 𐅫 ㏁𐅕 rѦhs oﬔgr🆎e.

    Saves 13 characters!

    (edit: to clarify, I’m having fun with this)

    1. 6

      Work: Yesterday I finished the first draft of Practical TLA+. I have another 10 days to add new chapters before the official deadline, so I’m working on stretch goals. I’m also trying to find an accountant and a contract lawyer in Chicago. Want to start consulting in formal methods but gotta work out the business aspects first.

      Fun: Collecting all the proofs of leftpad people sent me and putting them in one place. It’s not ready yet but here’s a sneak peek.

      1. 1

        The article on the XAC reminded me of how their last cheap, easily modifiable hardware unit had so many positive impacts in completely unexpected ways.

        1. 3

          Terminal within vim now?

          From the article:

          The main new feature of Vim 8.1 is support for running a terminal in a Vim window. This builds on top of the asynchronous features added in Vim 8.0.

          Pretty cool addition. :-)

          1. 17

            Neovim has had this for over a year now. Neovim has been pretty great for pushing vim forward.

            1. 5

              I wonder if the new Vim terminal used any code from the NeoVim terminal. I know NeoVim was created in part because Bram rejected their patches for adding async and other features.

              1. 7

                I have to say, I really don’t care to see this in a text editor. If anything it’d be nice to see vim modernize by trimming features rather than trying to compete with some everything-to-everybody upstart. We already had emacs for that role! I just hope 8.2 doesn’t come with a client library and a hard dependency on msgpack.

                Edit: seems this was interpreted as being somewhat aggressive. To counterbalance that, I think it’s great NeoVim breathed new life into Vim, just saying that life shouldn’t be wasted trying to clone what’s already been nailed by another project.

                1. 6

                  Neovim isn’t an upstart.

                  You can claim that Vim doesn’t need asynchronous features, but the droves of people running like hell to more modern editors that have things like syntax aware completion would disagree.

                  Things either evolve or they die. IMO Vim has taken steps to ensure that people like you can continue to have your pristine unsullied classic Vim experience (timers are an optional feature) but that the rest of us who appreciate these changes can have them.

                  Just my $.02.

                  1. 2

                    Things either evolve or they die.

                    Yeah, but adding features is only one way of evolving/improving. And a poor one imho, which results in an incoherent design. What dw is getting at is that one can improve by removing things, by finding ‘different foundations’ that enable more with less. One example of such a path to improvement is the vis editor.

                    1. 1

                      Thanks, I can definitely appreciate that perspective. However speaking for myself I have always loved Vim. The thing that caused me to have a 5 year or so dalliance with emacs and then visual studio code is the fact that before timers, you really COULDN’T easily augment Vim to do syntax aware completion and the like, because of its lack of asynchronous features.

                      I know I am not alone in this - One of the big stated reasons for the Neovim fork to exist has been the simplification and streamlining of the platform, in part to enable the addition of asynchronous behavior to the platform.

                      So while I very much agree with the idea that adding new features willy-nilly is a questionable choice, THIS feature in particular was very sorely needed by a huge swath of the Vim user base.

                      1. 6

                        It appears we were talking about two different things. I agree that async jobs are a useful feature. I thought the thread was about the Terminal feature, which is certainly ‘feature creep’ that violates VIM’s non-goals.

                        From Vim 7.4’s :help design-not:

                        VIM IS… NOT

                        • Vim is not a shell or an Operating System. You will not be able to run a shell inside Vim or use it to control a debugger. This should work the other way around: Use Vim as a component from a shell or in an IDE.
                        1. 1

                          I think you’re right, and honestly I don’t see much point in the terminal myself, other than perhaps being able to apply things like macros to your terminal buffer without having to cut&paste into your editor…

                  2. -1

                    Emacs is not as fast and streamlined as Neovim-QT, while, to my knowledge, not providing any features or plugins that don’t have an equivalent in the world of vim/nvim.

                    1. 7

                      Be careful about saying things like this. The emacs ecosystem is V-A-S-T.

                      Has anyone written a bug tracking system in Vim yet? How about a MUD client? IRC client? Jabber client? Wordpress client, LiveJournal client? All of these things exist in elisp.

                      1. 3

                        Org mode and magit come to mind. Working without magit would be a major bummer for me now.

                1. 23

                  Is there a

                  version that doesn’t

                  require me to scroll

                  a page for

                  every three words

                  of content?

                  1. 4

                    The heavy page is discrimination against poor people with crappy phones/plans and folks on dialup or pseudo broadband. They might leave the site due to its user experience. Then, mathwashing will remain a non-issue to them. Was this…

                    Accidental: When good intentions are combined with a lack of knowledge and naive expectations about people’s level of Internet access or economic class.

                    On Purpose: Because people don’t question site owners’ decisions about bloated web sites, the faith that this education attempt was sincere can be abused.

                    Two things the author should realize:

                    1. Technologists designed ways to deliver slim web pages to a wide audience.

                    2. Site owners should use them automatically.

                    1. 2

                      naive expectations about people’s level of Internet access or economic class

                      I suspect this is the case. People forget that WWW means world wide web.

                      To answer @hwayne’s question, the most readable version I saw is the one displayed by links. The whole content fills 3 screens on my monitor.

                  1. 2

                    I didn’t know the language supported adding type annotations to functions… that’s neat!

                    1. 2

                      They were added in Python 3.5, I think. There’s no way to enforce them at runtime, unfortunately; they’re essentially just “type hints” that can be read by tools like Pyre to point out type errors.

                      1. 1

                        You can enforce them at runtime with decorators from third party libraries, for example enforce. It’s similar to how you can’t enforce them at “compile” time either; instead, you call mypy.
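
                        For illustration, here’s a minimal hand-rolled sketch of that kind of runtime check (this is not the actual API of the enforce library, and it only handles plain, non-generic annotations on positional arguments):

                        import functools
                        from typing import get_type_hints

                        def checked(fn):
                            hints = get_type_hints(fn)

                            @functools.wraps(fn)
                            def wrapper(*args, **kwargs):
                                # Compare each positional argument against its annotation, if any.
                                for name, value in zip(fn.__code__.co_varnames, args):
                                    expected = hints.get(name)
                                    if expected is not None and not isinstance(value, expected):
                                        raise TypeError(f"{name} should be {expected.__name__}, got {type(value).__name__}")
                                result = fn(*args, **kwargs)
                                if "return" in hints and not isinstance(result, hints["return"]):
                                    raise TypeError("return value has the wrong type")
                                return result

                            return wrapper

                        @checked
                        def pad_width(total: int, text: str) -> int:
                            return max(0, total - len(text))

                        pad_width(10, "abc")  # fine
                        pad_width(10, 3)      # raises TypeError at runtime; mypy or Pyre would flag it statically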

                    1. 3

                      I’m sure I’ve mentioned this before, but I’ve bought several books because of your reviews. Keep up the good work :)

                      1. 1

                        Haven’t bought any yet, but really enjoying the reviews, yeah.

                      1. 3

                        Follow-up to this discussion, for those who missed it.

                        1. 3

                          My main takeaway from the challenge is “it’s really easy to goad formal methodists into writing tons of really good articles on formal methods”

                        1. 3

                          I’m just shocked to learn mysql finally has window functions.

                          1. 5

                            Rich Hickey: When a library breaks, it can break in many ways. Some of those may or may not be manifest in types, others would just be manifest in behavior, or missing information, or additional requirements - things that you can’t express in types, because most of what your program needs to do can’t be expressed in the type systems we have today. So yes, it still takes a string and still returns a map of information, but it stopped returning you some of that information, or it started returning other stuff, or it had additional requirements about the string… No, the types don’t capture that.

                            It seems like every talk or interview coming from Rich ever since Cognitect started hawking clojure.spec contains at least a handful of poorly supported, vague, dogmatic claims about static typing in opposition to…well, “how Clojure does it.”

                            Perhaps his style hasn’t bothered me up until recently simply because I’ve mostly agreed with his dogmatic statements about stuff like persistent data structures, but at this point I’ve lost interest in a lot of what he has to say because of how he talks about static typing.

                            There are a lot of other ways he could be talking about clojure.spec and why it works well in Clojure. A more nuanced appraisal of the trade-offs of clojure.spec vs. various static typing approaches would be a nice start, but I am skeptical that will happen any time soon.

                            I still think Clojure is a better language than many, and gets a lot of things right, but it’s not perfect and this is one area I think the creator is mistaken. While that’s fine and to be expected, I think he’s doing real damage by making statements about static typing that are either too vague to be useful, or misrepresent how static-typing advocates think about and use type systems.

                            1. 5

                              It seems like every talk or interview coming from Rich ever since Cognitect started hawking clojure.spec contains at least a handful of poorly supported, vague, dogmatic claims about static typing in opposition to…well, “how Clojure does it.”

                              I’m not too familiar with clojure.spec, but I know a bit about the overall idea of it. Let me see if I can explain the difference between static types vs clojure.spec from a formal methods perspective (which differs a little from the PLT perspective).

                               Clojure.spec is a contract system. A contract is a formally specifiable property of the code, usually as pre/postconditions on functions, that you expect the code to follow. Through the contracts alone, you’re able to specify the complete behavior of the function. For example, using Deadpixi Contracts in Python:

                               from typing import List, TypeVar
                               from dpcontracts import require, ensure  # Deadpixi Contracts decorators

                               T = TypeVar("T")

                               @require("l must not be empty", lambda args: len(args.l) > 0)
                               @ensure("result is head of list", lambda a, result: [result] + a.l[1:] == a.l)
                               def head(l: List[T]) -> T:
                                   return l[0]
                              

                               With this we have that head is only specified for nonempty lists, and also that its result will always be, in fact, the head of the list. This is not something we can do with the type system of Python, or even the type system of Haskell, as the type of head is indistinguishable from the type of last. We’re calling all sorts of other functions and can run arbitrary code. In fact, we can completely decouple the specification of contracts from the verification of them. This is both a strength and a weakness. The strength is that we get both expressive power and flexibility. The weakness is that expressive power is usually a bad thing. In the general case contracts are unverifiable due to the halting problem.

                               In practice, there are four approaches to verifying contracts:

                              1. Restrict yourself to a subset where you have simple, automatic static verification. For example, you might not be able to autoverify “this function will be called only with positive even numbers”, but we can autoverify “this function will be called with only integers”. This gives us static typing! I think you could reasonably argue that “type systems are special cases of contract systems”, but that’d get you stabbed to death in 99% of programming forums out there, so uh yeah
                               2. Limit verification to throwing runtime errors. Every time a contract comes up, check if it’s correct, and throw an error if it isn’t. Most contract-oriented languages combine this with static typing to get “conventional” contract systems. You can do a lot of cool stuff with this. Eiffel’s AutoTest can turn your runtime contracts into integration tests, Ada can place contracts on global mutations, most systems let you synthesize contracts into dynamic types, etc.
                              3. Formally prove the contracts correct. This is formal verification. A lot of people are doing this in different ways: Dafny uses pre/postconditions and loop invariants, Liquid Haskell uses refinement types, Idris uses dependent types, etc.
                              4. Informally prove the contracts correct. This is how we get Cleanroom, which is actually a lot more effective than you’d think. People write the contracts, attach english “proofs” of why they hold, and everybody verifies them through code review.

                               So, in summary: contracts generalize static types in a form that is good in some ways, bad in others. There are multiple different styles of contracts, just as there are multiple different type systems, but the unifying idea is that they can fully specify the program’s behavior. Verifying them is another matter, and clojure.spec’s approach is “runtime checks”, as opposed to most other languages, which reduce the specification power in return for representing contracts as static types.

                              1. 1

                                It’s worth noting that the halting problem also affects Turing complete type systems, such as one found in Scala.

                                1. 1

                                  Which is a reason you really don’t want your type system to be Turing complete!

                              2. 3

                                It seems like every talk or interview coming from Rich ever since Cognitect started hawking clojure.spec contains at least a handful of poorly supported, vague, dogmatic claims about static typing in opposition to…well, “how Clojure does it.”

                                I haven’t been paying all that much attention to the talks more recently, but this is also how I feel about it. There are trade-offs between static types and specs, and specs have some advantages over types, but because of all the straw-man arguments it’s hard to find useful analyses of the trade-offs.

                                1. 3

                                  The problem is that most of the “types vs ‘specs’” arguments are between people who have languages with types and no ‘specs’ and people who have languages with ‘specs’ and no types. If you want to see a more nuanced comparison, you have to look at languages that have both of them, because then you can see how people proficient in both context-switch between them.

                                  The other problem is that there are many more languages with a lot of thought to their type system than languages with a lot of thought to their contract system. If you need both, you’re pretty much limited to Ada.

                                  1. 1
                                    1. 6
                                2. 3

                                  I fail to see what’s vague or dogmatic about his statements. He’s basically saying that types primarily focus on checking internal self consistency, while what you really care about is semantic correctness. Expressing semantic correctness using types ranges from being difficult to impossible depending on the type system. At the same time static typing can introduce a lot of complexity that’s incidental to the problem being solved. You often end up writing code in a way that facilitates static verification as opposed to human readability.

                                1. 1

                                  Is this just a frontend for graphviz?

                                  1. 4

                                    No, it’s its own system. I used it for a while for flowcharts and sequence diagrams at work, and it does the job okay. The main benefit over graphviz is that it’s way easier to learn and can be embedded in markdown more easily. But if you want anything more complicated than a flow, or you want any control over the weights, you’re stuck with graphviz.

                                    1. 1

                                      Thanks. Might be useful for some but I’ll stick with orgmode where you can embed graphviz, ditaa and a whole bunch of others.

                                    2. 1

                                      It’s using d3 underneath for drawing, rather than Graphviz. Looks like someone liked the idea of PlantUML but wanted to build it in javascript (without Graphviz).

                                    1. 2

                                      Endless repetitions – we need to repeat constantly when writing it. It is error-prone as well as hard to maintain. YAML and JSON flavors do not support any fragments or smaller templating engine, so it is difficult to reuse and work in line with DRY (don’t repeat yourself) principle.

                                      Is this something you can get with node anchors?

                                      1. 1

                                         Great question! I believe you can’t; I tried a while ago.

                                         TL;DR: CloudFormation does not support that, see here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-formats.html - it does not support hash merges, so effectively you cannot use anchors.
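
                                         For reference, here’s a minimal sketch of what anchors and merge keys buy you in plain YAML (loaded with PyYAML; the keys are made up for illustration), i.e. exactly the hash-merge mechanism CloudFormation doesn’t accept:

                                         import yaml  # PyYAML

                                         doc = """
                                         defaults: &defaults          # anchor: name a reusable mapping
                                           InstanceType: t2.micro
                                           Monitoring: true

                                         WebServer:
                                           <<: *defaults              # merge key: pull the anchored mapping in
                                           InstanceType: t2.small     # local keys override merged ones

                                         Worker:
                                           <<: *defaults
                                         """

                                         data = yaml.safe_load(doc)
                                         print(data["WebServer"]["InstanceType"])  # t2.small (override wins)
                                         print(data["Worker"]["Monitoring"])       # True (inherited from defaults)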

                                      1. 5

                                        I guess I’m a little unclear on what this post was supposed to have shown? The original premise seems to have been that “People claim FP is more intuitive/more natural/better than IP, which is equivalent to saying it’s easier to make formal proofs in FP over IP.” Whether or not FP is better or worse than IP, I have a hard time seeing these two claims as related? Something that’s easy or difficult to prove doesn’t make it easy or difficult to write simply, to the standard of a normal software engineer.

                                        1. 7

                                          To me, this post provides pretty clear evidence that making a claim like: “Writing Haskell means that if it compiles, I know it’s correct,” is wrong. It’s shown that types alone aren’t enough to prove correctness and that proving correctness is laborious and hard, whether you have a heap to deal with, or not.

                                          I agree that making the jump from “intuition” to “provably correct” is not obviously related. But, by comparing the implementation of a formally verified proof in a FP language, vs. a formally verified proof in an IP language, it becomes pretty clear that formal verification is hard no matter what. The FP folks had just as much trouble as the IP folks. The FP folks wrote just as many pages of code as the IP folks did, i.e. FP didn’t provide more intuition, and wasn’t obviously better.

                                          1. 5

                                            One thing I completely forgot to say in the article was how hard it was for me to write these functions. The first two took about an hour each, the last one took about three. This was with no experience with writing code proofs, too.

                                            In this case what we’re measuring is “difficulty in writing a proof for an imperative algorithm” with “difficulty in writing a proof for a purely functional algorithm.”

                                            1. 1

                                               Congratulations on not only doing your first proven programs but making waves as you did so. I’m curious, though, what you read or watched to learn how to do the formal specs. Especially if you think other beginners would find it helpful.

                                            2. 4

                                              The FP folks had just as much trouble as the IP folks.

                                               I agree, I think that’s the part that violates a widely held belief: many people believe intuitively that FP code should be easier to formally verify than imperative code, because of referential transparency etc. I’m not sure this is a widespread view among people who do verification, but it’s a fairly common view among regular FP programmers.

                                          1. 3

                                            @hwayne you can call people bulldogs but I’m still super confused why you used leftpad as a “challenge” for functional programmers after seeing my functional version. Obviously it can be done. What’s the challenge?

                                            But the first paragraph of this blog post shows your main problem:

                                            Functional programming and immutability are hot right now. On one hand, this is pretty great as there’s lots of nice things about functional programming. On the other hand, people get a little overzealous and start claiming that imperative code is unnatural or that purity is always preferable to mutation.

                                            Functional programming allows:

                                            • Mutation AND purity
                                            • Imperative programming AND purity

                                            These aren’t antonyms, they’re complementary. That’s why I am always so confident that things can be done using FP with no downsides. In the worst case we can just take your code and embed it as pure functions. This fact exists irrelevant to me completing your challenge or not.

                                            I’m still reimplementing Sonic 2 btw.

                                            1. 6

                                              @hwayne you can call people bulldogs but I’m still super confused why you used leftpad as a “challenge” for functional programmers after seeing my functional version. Obviously it can be done. What’s the challenge?

                                              As I said in both the Twitter thread and the full blog post, the problem with your submission is that you assume all your stdlib functions conform to your spec. That means the prover is taking it as true without verifying it. It’s the equivalent of mocking everything out in a unit test.

                                               I told you this, and your response was basically “it would be easy to fix but I’m not going to do it.” The people who actually tried to fix it found it much harder than you thought.

                                              In the worst case we can just take your code and embed it as pure functions. This fact exists irrelevant to me completing your challenge or not.

                                               As everybody who completed Fulcrum admitted, “embedding it as pure functions” was incredibly difficult and took several days to prove. It took one of the core developers of Liquid Haskell almost four days to complete functionally. I hammered out my imperative proof in an afternoon.

                                              1. 1

                                                your submission

                                                I never submitted. I wrote some code to learn Liquid Haskell, you saw it and turned it into a challenge to functional programmers. But it obviously can be done, you know the assumptions aren’t hard to fix.

                                                It took one of the core developers of Liquid haskell almost four days to complete functionally.

                                                 I am unsurprised since a lot of Haskell’s standard library functions aren’t specified yet. Was the difficulty that part and not the actual verification?

                                                1. 6

                                                  But it obviously can be done, you know the assumptions aren’t hard to fix.

                                                  But it was hard to fix for people. Your claim that it’s easy is not one I can accept without good evidence.

                                                  For the record, as I also make clear in the post, these were supposed to be in ascending order of difficulty. It was supposed to be the easiest. I even admitted, in the post, that I overestimated how hard unique was!

                                                   I am unsurprised since a lot of Haskell’s standard library functions aren’t specified yet. Was the difficulty that part and not the actual verification?

                                                  No, the difficulty was the actual verification. Rhanjit said that, artnz said that, Dave - who was using Isabelle, which has full specifications - said that. It was, in their experiences, a fundamentally hard problem.

                                                  1. 3

                                                    No, the difficulty was the actual verification. Rhanjit said that, artnz said that, Dave - who was using Isabelle, which has full specifications - said that. It was, in their experiences, a fundamentally hard problem.

                                                    It’s good to know that Dafny makes this easier. It doesn’t demonstrate that this can’t be done in FP as easily, just that our current tools lack the additional tooling to make it easy.

                                            1. 3

                                              This was great to see unfold and pretty funny at times.

                                              Leftpad. Takes a padding character, a string, and a total length, returns the string padded with that length with that character. If length is less than string, does nothing.

                                               Shouldn’t this be “string padded to that length” and “length is less than the length of the string”? The second isn’t that important, but the first seems like it could have meant c*l + s (instead of c*max(0, l-len(s)) + s).
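
                                               For concreteness, here’s a plain, unverified Python sketch of the behavior under that corrected reading:

                                               def leftpad(c: str, s: str, l: int) -> str:
                                                   # Pad s on the left with c up to total length l; if l is less than
                                                   # the length of s, return the string unchanged.
                                                   return c * max(0, l - len(s)) + s

                                               assert leftpad("!", "foo", 5) == "!!foo"
                                               assert leftpad("!", "foo", 2) == "foo"  # target length shorter than s: no-op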

                                              1. 2

                                                Good point, fixed.

                                              1. 3

                                                Here’s the catch: I formally proved all three functions are correct. You have to do the same. And by “formally prove”, I mean “if there are any bugs it will not compile”. Informal arguments don’t count. Quickcheck doesn’t count. Partial proofs (“it typechecks”) don’t count.

                                                I’ve never really understood the distinction being drawn here. If I want to prove that e.g. the sum of two even naturals is always even, and I write something like:

                                                // definitions
                                                trait Nat
                                                trait Eq[A <: Nat, B <: Nat]
                                                trait Sum[A <: Nat, B <: Nat] {
                                                  type AB <: Nat
                                                }
                                                object Sum {
                                                  type Witness[A <: Nat, B <: Nat, AB0 <: Nat] = Sum[A, B] { type AB = AB0 }
                                                  def apply[A <: Nat, B <: Nat]: Sum[A, B] = ??? // could be implemented inductively if need be
                                                }
                                                trait Product[A <: Nat, B <: Nat] {
                                                  type AB <: Nat
                                                }
                                                object Product {
                                                  type Witness[A <: Nat, B <: Nat, AB0 <: Nat] = Product[A, B] { type AB = AB0 }
                                                }
                                                trait _2 extends Nat
                                                trait Even[N <: Nat]{
                                                  type M <: Nat
                                                  val witness: Product.Witness[_2, M, N]
                                                }
                                                // axioms
                                                 def additionDistributive[A <: Nat, B <: Nat, C <: Nat, AB <: Nat, AC <: Nat](witnessAB: Product.Witness[A, B, AB], witnessAC: Product.Witness[A, C, AC], sum: Sum[B, C], sumA: Sum[AB, AC]): Product.Witness[A, sum.AB, sumA.AB] = ??? // could be implemented inductively
                                                // proof
                                                def evenAPlusB[A <: Nat, B <: Nat, AB <: Nat](evenA: Even[A], evenB: Even[B], aPlusB: Sum.Witness[A, B, AB]): Even[AB] = new Even[AB] {
                                                     val sum = Sum[evenA.M, evenB.M]
                                                     type M = sum.AB
                                                     val witness = additionDistributive(evenA.witness, evenB.witness, sum, aPlusB)
                                                  }
                                                

                                                and say “it typechecks”, how is that different from a formal proof (indeed better, since if my proof is invalid in the sense of not following the rules for proofs, it won’t compile)? Of course there could be errors in my encoding of my axioms as function signatures, but that seems equally true for any form of formal proof.

                                                1. 5

                                                   Encoding proofs in type systems is totally valid. I added that because some people I was arguing with claimed that “it typechecks” always counts as a full specification of the behavior even though that’s clearly not true for most types of most functions. As I cover later in the article, several people did solve these problems via type systems.

                                                  Also, as covered by the article, all of the proofs, regardless of style, prevented compilation if incorrect.

                                                  1. 4

                                                    Also, as covered by the article, all of the proofs, regardless of style, prevented compilation if incorrect.

                                                    Well, they prevented compilation if the proof was invalid as a proof. They didn’t prevent compilation if the premises / claims were incorrect. You say in the article this became an actual issue in terms of the specification of unique.

                                                    Any typechecked program (in a language fragment where we disallow casts, unbounded recursion etc.) is a proof; the extent to which it is a proof that your proposition follows from your premises is the extent to which your types accurately encode that proposition and premises. So if I build (for example) a SortedList type whose constructors only allow it to be instantiated in ways that guarantee that it is sorted, then I’d argue that a function that returns a SortedList is formally proven to return a sorted list. Of course it’s completely possible for me to make a mistake and accidentally write a constructor that allows me to construct a SortedList that isn’t sorted - but this seems to be in the same category of errors as making a mistake in your formal specification of “sorted”.
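
                                                     As a rough illustration of that correct-by-construction idea, here’s a sketch in Python (the language the rest of this thread’s snippets use), where the invariant is established by the constructor at runtime rather than checked by a compiler; the names are made up:

                                                     from typing import Iterable, List

                                                     class SortedList:
                                                         # The only way to obtain a SortedList is through code that sorts.
                                                         def __init__(self, items: Iterable[int]):
                                                             self._items: List[int] = sorted(items)  # invariant established here

                                                         def insert(self, x: int) -> "SortedList":
                                                             return SortedList(self._items + [x])  # invariant preserved

                                                         def items(self) -> List[int]:
                                                             return list(self._items)  # copy, so callers cannot mutate the internals

                                                     def merge(a: SortedList, b: SortedList) -> SortedList:
                                                         # Whatever this does internally, its return type promises sortedness.
                                                         return SortedList(a.items() + b.items())

                                                     assert merge(SortedList([3, 1]), SortedList([2])).items() == [1, 2, 3]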

                                                    Given that the article is trying to generalise a measure of how expensive/effective different verification techniques are from small examples, it’s really important to define clear and consistent criteria for the “boundary conditions” of what constitutes verification, as any difference in those will swamp the effect you’re trying to measure. Concretely I’d say you need a defined line between the parts where mistakes could lead to a program that produces incorrect output in some cases and the parts where no such mistakes are possible, and then a rule about what kind of statements a given entrant may include in the former (the premises/TCB/…). For the imperative-analysis style where you draw a sharp distinction between a program and its proof that line is obvious, but for other approaches it isn’t necessarily so, and differing interpretations can easily lead to applying a double standard to those approaches.

                                                1. 9

                                                  Last week of EMT classes. Just a practical and written final left and I’ll be qualified to apply for a state license. Then immediately shunting off to Boston for the Future of Alloy conference. I just found out that I’m speaking there, which is… good? But a little surprising.

                                                  For personal projects, I started a big Theorem Prover Showdown on Twitter. I have a postmortem to write and about 50 versions of leftpad to curate.

                                                  1. 3

                                                     I definitely should get a Twitter given all the fun I missed out on here. Going through the whole discussion right now. Someone even brought up Milawa, which I’ve often posted. Although, I semi-counter that Kumar’s thesis is so brilliant that I didn’t trust it, because it seemed too brilliant, with circular forms of reasoning. Easy to trick oneself with things depending on themselves and such. Was I really seeing proof of his claims or just not smart enough to spot the flaws in them? I’d probably need to practice formal verification for years before I could be sure. ;)

                                                     Regardless, Ramana Kumar, Jared Davis, and Magnus Myreen are all among my favorite researchers, though, given they’re kicking ass on some of the hardest problems in verification, from theorem provers all the way to assembly code. Davis, the Milawa guy, also extended ACL2 to replace tools that cost five or six digits a seat at Centaur: the third x86 CPU vendor most don’t know about. Myreen was recently experimenting on something like Bluespec in HOL. I hope they all stay at it another decade cuz who knows what might come out of it. Exciting stuff.

                                                    Edit: I’m saving this since I think it nicely captures a larger trend in CompSci:

                                                    “I’m doing a writeup on this; all the opiners didn’t write proofs and all the people who wrote proofs didn’t opine.” (hwayne)

                                                  1. 9

                                                    For a teenager and an aspiring computer programmer, the 00s were a great time to learn.

                                                    You can replace ‘00s’ with ‘80s’ or ‘90s’ and this would still be true (I’ve heard this sentiment many times). Perhaps not the 70s or earlier, since home computers were not really a thing then.

                                                    I think the key point in this discussion/rant is that computers are mostly an appliance and a consumption device. Tools for creating things with them have gotten harder to work with over time. Part of that has to do with the complexity of the systems, but part of it is also the paucity of profits that come from providing such tools. The solution, such as it is, seems to be adding more options to our C compilers.

                                                    Also: it turns out Mastodon is just as bad for long threads as Twitter.

                                                    1. 2

                                                      I don’t think inherent complexity has much to do with why systems have gotten less flexible over time. I think we’re mostly looking at the result of changing norms.

                                                      Personal computers in the early 80s were marketed along the lines of “master for loops with this machine and you can take control over your life”. This wasn’t necessarily totally accurate (a lot of those machines had 8 bit integer arithmetic, so using them even for personal finances could be tricky), but it at least made clear that the point of the machine was that you’d set aside a half an hour with the manual and gain control over the machine in turn.

                                                      There was a concerted effort, spearheaded by Jobs, to turn general-purpose computers into single-function computing appliances, hide programming from non-technical users, and force the hobby community to turn itself into a much more professionalized “microcomputer software industry”.

                                                      I don’t think any of those things were really necessary – we used to have a distinction between workstations (big expensive machines used by professionals for important work and paid for by the company) and micros (small, buggy machines without memory managers, where commercial software was thin on the ground and you were really expected to write everything yourself even if you weren’t a programmer), and that division was really empowering for both sides (even as, by clocking in or out, you could cross the boundary between small computing and big computing).

                                                      1. 6

                                                        I think you are misrepresenting the state of advertising and computers in the 80s.

                                                        Look at this ad for a TRS-80.

                                                        It’s not selling programming, it’s not selling for loops–it’s selling the software that solves the types of problems the user has, writing stuff and balancing the checkbook.

                                                        The distinction between workstations and micros is similarly incorrect. Workstations–say, HP or Sun or SGI boxes–were outnumbered by cheap PC-clones or IBM-ATs or Apple boxes or whatever, running boring business applications.

                                                        There’s this desire to say “Ah, but in the golden age of computing, where every user was a programmer-philosopher-king!”, but that just isn’t borne out by history.

                                                        1. 6

                                                          There’s this desire to say “Ah, but in the golden age of computing, where every user was a programmer-philosopher-king!”, but that just isn’t borne out by history.

                                                          Absolutely. Don’t get me wrong: just because I argue that dev tools were easier to work with in the 80s doesn’t mean they were good. It’s a subtle difference that some fail to take into account.

                                                          1. 1

                                                            Thank you for clarifying that, I understand your point more clearly now. :)

                                                          2. 1

                                                            I’m slightly exaggerating for the sake of emphasis. There was never a golden age of programmer-centric home computers, but there was a silver age of home computers that expected that most non-programmers would do a little programming, and catered to the middle-ground between novice and developer in a way that didn’t require a mission statement and career goals. (If you had a computer in your home, you probably wrote a little bit of code, and nobody was forcing you to write more.)

                                                            There were business-specific ad campaigns that focused on existing shrink-wrapped software, and were essentially ads for the software in question. But, those same manufacturers would have programming related campaigns. And, if you weren’t in the market for spreadsheets, you’d quickly find that if you wanted your computer to be anything more than an expensive paperweight, you’d need to either play a lot of games or learn simple programming concepts.

                                                            Sure, workplaces could use micros to run dedicated business apps. But, if someone bought that same machine for their own home, programming would be presented to them, by the machine and the machine’s documentation, as the primary way of interacting with the machine unless they bought third party shrink-wrapped software.

                                                            We’ve moved to a programmer/user division as part and parcel of a privileging of workplace deployment needs over exploration (as the thread mentioned) – a situation where users aren’t expected to be in full control over their machines anyway because they have sold their time to Moloch and must use their machine in only Moloch-approved ways. It’s fine that such systems exist (we all sacrifice our 40 hours a week into Moloch’s gaping maw), but it’s pretty stupid to have even the machines in our bedrooms set up like they expect our house to have a professional sysadmin and an internal dev team (to write new applications or edit the code of licensed stuff to meet our needs).

                                                            The ability, as a non-programmer, to write hairy messy code in the comfort of your own home, and the understanding that it’s expected of you to write code for yourself and nobody else, is really important. The alternative is that everybody who masters for loops thinks that they’re ready to work for IBM, and they end up using plaintext databases for password storage at a fortune 500 company because they don’t understand the difference between big computing and small computing.

                                                            1. 5

                                                              The ability, as a non-programmer, to write hairy messy code in the comfort of your own home, and the understanding that it’s expected of you to write code for yourself and nobody else, is really important.

                                                              Why? Why does this matter to non-programmers?

                                                              I have an app that streams movies and porn. I have an app that lets me tweet at other people who feel that tweeting is important. I have a web browser and an app to file taxes. I have an app to go look at my bank transactions and remit rent. I have an app to collect e-books I never get around to reading.

                                                              What problems do I have that require programming or number crunching that are not solved, better and more easily, by just using something somebody else wrote and leases back to me for the convenience?

                                                              And if I need to do something really weird, why not just hire a coder to solve it for me?

                                                              ~

                                                               I’m not even being facetious here. The argument for everybody knowing how to program is, increasingly, like the argument for everybody knowing how to cook, how to debate properly, how to shoot, or any other thing we used to expect functioning adults to be able to do.

                                                              It doesn’t matter anymore. It’s not required. Federation of skills and services is inefficient.

                                                              It’s obsolete.

                                                              1. 5

                                                                First off, spreadsheets are the most popular programming environment ever created. Yes, spreadsheets. It’s a form of programming, only it’s not called programming, so people do it. [1]

                                                                Second, people not exposed to “programming” are often unaware of what can be done. A graphic designer is given 100 photos to resize. Most I fear, would, one at a time, select “File”, then “Open”, then select the file, then “Okay”, then select “Image”, then “Resize”, then type in a factor, hit “Okay”, then “Save” and then “Okay”. One Hundred Times.

                                                                I think there’s an option in Photoshop to do that as a batch operation but 1) that means Photoshop has to provide said functionality and 2) the user has to know about said option.

                                                                 As a “programmer”, I know there exist command-line tools to do that, and it’s (to me) a simple matter to do

                                                                 for image in src/*; do convert -resize 50% "$image" dest/"$(basename "$image")"; done
                                                                

                                                                 (I think I got the syntax right) and then I can go get a lovely beverage while the computer chugs away doing what the computer does best—repetitive tasks. It’s a powerful concept that many non-programmers don’t even realize.

                                                                [1] It’s scary how much of our economy depends upon huge spreadsheets passed around, but that’s a separate issue.

                                                                1. 2

                                                                   Oh, I’m quite aware of spreadsheets–but that’s not programming by some people’s definition, because they don’t come with a bunch of manuals that non-programmers can cavort through.

                                                                  As for your second point, I see what you’re getting at–but the majority of people will keep doing things the dumb, slow way because they don’t think that learning a new way (programming or not) is easy enough or because there is simply no incentive to be more efficient.

                                                                2. 2

                                                                  The argument for everybody knowing how to program is, increasingly, like the argument for everybody knowing how to cook

                                                                  Yes, it is. If you know how to cook, then you aren’t at the mercy of McDonalds.

                                                                  I’m not really arguing for everybody “knowing how to code” in the sense that some people use that phrase.

                                                                  Every UI is a programming language. We’re stuck with shitty UIs because we think our users are unable to cross some imagined chasm of complexity between single-use and general-purpose, but that chasm is mostly an artifact of tooling we have invented in order to reproduce the power structure we benefit from.

                                                                  There’s no technical reason that you need to learn how to program in order to program – only social reasons. And, I don’t consider that acceptable.

                                                                  It’s fine if people decide to remain ignorant of programming (even when it’s literally easier to learn enough to automate some problem than to solve it with shrink-wrapped software). It’s not fine that the road to proficiency is being hidden.

                                                                  Ultimately, if a non-programmer requests a programmer to write some code, it’s typically done for money. It’s done for money because there’s a gap between the professional class of programmers (who write professional code with professional tools for money) and the non-programmer (who must embark upon a quest to become a programmer, typically with shades of career-orientation, before writing a line of code). But, the ability for a non-programmer to say “I’m not willing to pay you to spend five minutes writing this code; I’m going to spend twenty minutes and do it myself” is missing, because it’s not possible with current tooling to go from zero to novel-yet-useful code in 20 minutes. So, programmers (as a class) get to overvalue their services by using tools that require more initial study to use.

                                                                  Everybody knows how to cook (delta some tiny epsilon – very rich people who can eat out every night, small children, the institutionalized). Most cooks are not chefs (or even line cooks) – their success in cooking hinges on whether or not they are willing to eat the food they make, and so they don’t need to live up to the standards of paying customers. This provides a steady stream of people who already know how to cook enough to know that they like it, who can graduate on to professional cooking jobs, but it also provides a built-in competition for those professionals. And, it’s something that is only so widespread because there is an expectation that everyone can learn, an understanding that everyone benefits from doing so, and a wide variety of tools and learning materials covering the entire landscape from absolute novice to world-famous expert. Nobody mistakes being able to boil an egg for being able to stuff a deboned whole chicken.

                                                                  When there is no place for absolute amateurs, everybody with a minimum competence gets shunted into the professional category. This is a problem when there’s no licensing system. It’s a huge problem with the tech industry. We need to stop it. The easiest way to stop it is to make it easy for non-programmers to compete on relatively even ground with professionals – which isn’t as hard as it looks, because users have many needs that are too rare to be met by a capitalist system.

                                                                  (To give a cooking example: I like to put cinnamon and nutmeg in my omlettes. No professional cook would ever do that. So, if I want that wonderful flavor combination, I need to do it myself. Every user has stuff like that, where they would like their software to work in a particular way that no professional programmer will ever implement.)

                                                                  1. 2

                                                                    ooh ooh pick me pick me

                                                                     Most of the stuff we use is created by companies, which are trying to make it maximally useful for given effort, so they end up covering, like, 90% of use cases. That could mean every person can get 90% of their stuff done with it, but it could also mean that it’s perfect for five people and only 80% useful for the other five. Programming can help (not fix, but help) patch up that 20%.

                                                                    In practice, though, most programming languages aren’t suited for duct-taping consumer apps. When I say “everybody would benefit from learning to program”, I’m thinking things like spreadsheets, or autohotkey, or maybe even javascriptlets.

                                                                    1. 1

                                                                      Yeah. There’s a tooling issue, in that most programming languages these days are made for programmers, and the ones that aren’t don’t play nice with the ones that are. This is a huge gap, and one that benefits capital exclusively.

                                                            2. 0

                                                              Tools for creating things with them have gotten harder to work with over time.

                                                              Really? Every modern browser has a built-in development environment!

                                                              1. 4

                                                                Even on mobile browsers? I think not.

                                                                1. 1

                                                                  This is a very good point. I was looking at this issue in the light of my own experience, which has been with personal computers of various vintages. But most new users come in to contact with computers through phones and tablets now!

                                                                  (I have copied program listings from magazines into my ZX Spectrum, to date myself).

                                                                  If we confine ourselves to MacOS/Windows, even these have good scripting environments that can be capable programming environments - PowerShell beats bash in this regard, I think.

                                                                   As an aside, in last year’s Advent of Code, a post was made on the subreddit complaining that an assignment built on a previously solved assignment (i.e. the code was to be reused). It turns out that this person solved the assignments on their mobile device and discarded the code after submitting a correct answer.

                                                                2. 4

                                                                  Every modern browser has a scripting language sandbox with a giant, awkward, broken, poorly-documented API, which you need internet access and a guide to even start on. No editor either. And then, to share your work, you need to buy an account on somebody else’s web server and learn to use SFTP. Most users don’t even know that writing javascript is something a regular person can do.

                                                                  In comparison, early home computers (including the IBM PC) ran BASIC by default at boot-up. You would need to go out of your way to override this by putting in a cartridge or floppy before you started the machine, and in many cases the machine didn’t come with any software other than BASIC. And, your machine would ship with a beginner’s guide intended to teach BASIC to people who could barely read, an “advanced BASIC” guide for people who couldn’t code but had read the beginner’s guide already, full API documentation, schematics for the machine, the source code for the BASIC interpreter (sometimes), and BASIC code for a handful of demo programs. An effort was made to ensure that every machine showed a clear, easy to follow path from end user to mastery over one programming language (and, typically, you got an only-slightly-muddier path in the documentation itself for progressing to a basic grasp of assembly language or machine code).

                                                                  For most people who have a web browser, programming is still “something somebody else does”. For anybody with an Apple II, Vic-20, C-64, TRS-80, Sinclair, BBC Micro, PC-8300, or really any home computer manufactured between 1977 and 1983 save the Lisa, programming is “something I could do if I spent a couple hours with these manuals”.

                                                                  1. 4

                                                                    This is incorrect. Firefox gives you an editor in the form of the scratchpad. MDN documents almost all of the web APIs currently supported by browsers, and if that doesn’t float your boat the W3C spec + caniuse works as well. There are issues with the web, yes. Ease-of-entry is not one of them.

                                                                    Also, while new and experimental features are buggy, by and large browsers are not buggy or awkward from a web developer or consumer’s POV.

                                                                    1. 3

                                                                      MDN documents almost all of the web APIs currently supported by browsers, […]

                                                                      That’s exactly what enkiv2 is saying:

                                                                      […] which you need internet access and a guide to even start on.

                                                                      There are issues with the web, yes. Ease-of-entry is not one of them.

                                                                      I disagree: you need to know what you’re doing in order to start making things. Most people don’t know how to open the devtools. (EDIT: and then, there’s the “ecosystem”, a huge pile of overengineered abstraction layers causing nothing but bloat.)

                                                                      by and large browsers are not buggy or awkward from a web developer or consumer’s POV.

                                                                      Iceweasel (from the Parabola repositories) has a bunch of bugs (search doesn’t work in the address bar, …), and is awfully bloated (takes a while to launch, uses half a GiB of RAM for 2 tabs, …), in my opinion.

                                                                      1. 2

                                                                        Iceweasel (from the Parabola repositories) has a bunch of bugs (search doesn’t work in the address bar, …), and is awfully bloated (takes a while to launch, uses half a GiB of RAM for 2 tabs, …), in my opinion.

                                                                        I should clarify. If you run a “normal” browser on a “normal” OS you won’t run into many issues. Also, compared to the vintage computers the OP is referring to (especially the Apple II, which had its startup sound specifically engineered to sound more pleasant since it crashed so often), the web is solid as a rock.

                                                                        1. 1

                                                                          Facebook & Twitter are slow as molasses & glitchy on stock Chrome on a stock Windows 10 install on a brand new machine.

                                                                          1. 1

                                                                            I would argue that’s the developer’s fault. On vintage machines (and calculators), it’s just as easy or easier to produce a badly optimized solution that runs horribly. The current trend in web development is to force the client to do all the work, which causes issues on less powerful machines.

                                                                            Also, that’s anecdotal evidence. My experience with the Facebook and Twitter web apps, using Chrome on Windows 10 on a ThinkPad T540p, has been pretty good. Unless you have solid evidence that the web in general is slow and glitchy, that statement has no backing.

                                                                            1. 1

                                                                              Man, if you’re going to consider a systemic problem (like “almost every major web app is slow and glitchy, and most of the minor ones too”) as though it’s a cluster of unrelated particulars and ask for proof of every one, I don’t know what to tell you. Using the web at all is pretty good evidence that the web is slow and glitchy, and the experience of writing web apps explains why they would be expected to be slow and glitchy in a pretty convincing way.

                                                                              I mean, maybe you just have really low standards? But, I don’t think it’s OK to cater to low standards in a systematic way, even if you can get away with it.

                                                                              1. 1

                                                                                Do you consider lobsters slow and glitchy? What about most blogs? Stack overflow? I can name tons of sites that get it right. The ones that don’t in my experience are few and far between. Facebook is the only popular site I can think of at the moment, but I really don’t think that counts since their native mobile app sucks just as much or more. Which would imply it’s facebook’s fault, not the web’s.

                                                                                News sites are generally bad but that’s an issue with ads, not the web itself. There are cultural problems in web development but from a purely technical pov I don’t think the web is a bad platform.

                                                                                1. 0

                                                                                  Do you consider lobsters slow and glitchy?

                                                                                  It took in excess of 20 seconds to load this comment, on a broadband connection. What do you think?

                                                                                  What about most blogs?

                                                                                  The only blogs that have what I would consider acceptable overhead are non-CMS-based minimally-formatted static HTML sites like prog21. The average Blogger or Medium blog takes tens of seconds to load. Depending on the platform, sometimes a blog page becomes a problem in the middle of reading an article, causing the tab to crash. (This isn’t necessarily an ad thing – it’ll happen on Medium, which has no ads and no third-party or user-supplied scripts.)

                                                                                  Stack overflow?

                                                                                  Stack overflow has, on occasion, taken more than 10 minutes to load a single page on my machines.

                                                                                  So, from my perspective, most web sites do not have acceptable performance. Even fast sites are slower than they could be, given absolutely minimal effort. (And, this is not even considering the embarrassing level of bloat introduced by web standards – just using HTML and HTTP expands the number of bytes that need to be transferred across the network to render a static page by a factor of eight or more over markdown+gopher.) In other words, even if performance was acceptable from a user perspective (and I’m a professional developer with a newish machine that’s been tuned to improve performance – anything that’s slow for me is a hundred times slower for the proverbial grandmother), there’s a lot of low-hanging fruit in terms of improvement.

                                                                      2. 1

                                                                        Firefox give you an editor in the form of the scratchpad.

                                                                        Hidden deep enough in menus that, unless you knew it existed and were looking for it, you would never find it.

                                                                        MDN documents almost all of the web APIs currently supported by browsers, and if that doesn’t float your boat the W3C spec + caniuse works as well.

                                                                        Doesn’t ship with every offline browser. Isn’t linked to from the default home page.

                                                                        Ease-of-entry is not one of them.

                                                                        I think I’ve made my case that the web doesn’t do a fraction as much work to ensure that every end user finds it easy to get on the road to being a programmer as every mom-and-pop computer shop did in 1981.

                                                                        by and large browsers are not buggy or awkward from a web developer or consumer’s POV

                                                                        I disagree completely. Web developers are constantly complaining about things being awkward, inconsistent, or buggy – and front-end and back-end developers who switch to working with web standards for a project or two have every reason to sympathize.

                                                                        Just because web development has become marginally easier since 2006 doesn’t mean it was ever acceptable, in terms of effort/reward ratio.

                                                                        1. 6

                                                                          I think I’ve made my case that the web doesn’t do a fraction as much work to ensure that every end user finds it easy to get on the road to being a programmer as every mom-and-pop computer shop did in 1981.

                                                                          This is flagrantly false, between Stack Overflow, MDN, MSDN, W3Schools, and others.

                                                                          There is so much more information out there, better presented and better organized and better indexed and at lower cost, than there ever was in 1981.

                                                                          1. 1

                                                                            If you need to be told that it exists, then it isn’t accessible to people who identify as non-programmers.

                                                                            I’m not talking about the ease with which someone who has already determined that they would like to become a professional programmer can find documentation. That, obviously, has improved.

                                                                            I’m talking about the ease with which a completely novice user can wander into programming without any particular desire to learn to program, and learn to program despite themselves.

                                                                            (Some people in very particular fields still do learn to program despite themselves. Those people are mostly research scientists. I don’t consider that an improvement.)

                                                                            1. 6

                                                                              I’m talking about the ease with which a completely novice user can wander into programming without any particular desire to learn to program, and learn to program despite themselves.

                                                                              They only have to Google “How do I build a website”, “How do I write a website”, like this.

                                                                              Just because they aren’t rifling through thick manuals they bought with their micro doesn’t mean that non-programmers don’t have equivalent (or better!) resources.

                                                                              1. 3

                                                                                Just because they aren’t rifling through thick manuals they bought with their micro doesn’t mean that non-programmers don’t have equivalent (or better!) resources.

                                                                              Exactly. I get nostalgic about my Casio fx9750’s programming manual, but you won’t find me claiming it was a better resource than anything you could have found online. For a beginner, there really isn’t a good alternative to solid documentation and question-and-answer sites.

                                                                                1. 1

                                                                                  If you find it online, it’s not a piece of documentation you have – it’s a piece of documentation you seek, that happens to be free and delivered quickly. You need to know that it exists, and you need to know how to find it, and both of those things are barriers.

                                                                                For somebody to google “how do I write a website”, they need to believe that a website is the appropriate way to solve whatever half-understood problem they have. Their problem may be something more like “how do I sort paid invoices by attachment type in paypal” – in other words, a useful feature missing from a popular service, which is best implemented by a shell script. Searching for this will not teach them how to solve the problem, because they didn’t put anything about programming in the query, because they don’t know that the best way to solve this is by writing some code. They will instead get zero relevant results, and instead of thinking “I should write code to do this”, they will think “I guess it can’t be done”.

                                                                                  1. 3

                                                                                    “how do I sort paid invoices by attachment type in paypal” – in other words, a useful feature missing from a popular service, which is best implemented by a shell script.

                                                                                    What?!

                                                                                    1. 1

                                                                                    Paypal will let you export a CSV of invoice summaries containing information about attachment names. So, the sensible way is to export that CSV and use shell tools to sort by attachment extension – in other words, write a couple lines of code (sketched below) to handle a corner case that the original developers of the site couldn’t foresee.

                                                                                      (This particular example is taken from my life. I’ve commissioned a bunch of artworks, and I want to separate those records from other unrelated invoices, so that I know which works have been finished and paid for even though it’s taken the better part of a year for them to be made & they’re not in any particular order.)
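
                                                                                    To make that concrete, here is a rough sketch of what the shell version might look like. The file names, the header assumption, and the column number are invented for illustration – PayPal’s actual export will have different columns – and the naive comma split breaks if a field contains a quoted comma.

                                                                                        # Keep the header row, then sort the remaining rows by the extension
                                                                                        # of the attachment filename (assumed here to be column 5).
                                                                                        head -n 1 invoices.csv > invoices-by-attachment.csv
                                                                                        tail -n +2 invoices.csv \
                                                                                          | awk -F',' '{ n = split($5, p, "."); print p[n] "," $0 }' \
                                                                                          | sort -t',' -k1,1 \
                                                                                          | cut -d',' -f2- >> invoices-by-attachment.csv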

                                                                                      1. 1

                                                                                        Why not write a couple lines of js in a greasemonkey userscript so you don’t have to go to the trouble of exporting as CSV, opening a terminal, and running a shell script?

                                                                                        1. 2

                                                                                        Because attachments are never listed in the summary page (which also has a very small maximum page size). Web services are intended for display, and not made accessible for further user-driven hacking – particularly financial systems like paypal – so doing this kind of work in a browser is made even more awkward than it otherwise might be.

                                                                                        Even had we a reasonable page size (say, ten thousand, instead of twenty) and the necessary information, javascript would be a much more awkward solution – we would need to navigate arbitrarily-labeled tag soup in order to handle what is essentially tabular data. Using shell tools (which are optimized for tabular data) is easier.

                                                                                          Even so, this whole discussion is about what we, as hackers, would do. What hackers would do is basically irrelevant. The problem is that what a non-hacker will do to solve such a one-off problem is see if someone has already solved the problem, find that nobody has, and give up – when the ideal solution is for the non-hacker to be able to hack enough to solve the problem on their own.

                                                                                          1. 1

                                                                                            Fair enough.

                                                                1. 2

                                                                  For extreme examples, look up golfing languages, like cjam, golfscript, and pyth.

                                                                  1. 2

                                                                      Am I the only one who thinks that all these Netflix things are extremely over-engineered? The bulk of their content is not even served from AWS, but from boxes that are close to the eyeballs.

                                                                      I am not saying I could build one in a weekend or anything like it, but what do all these servers do? There is hardly any user interaction, except search and maybe giving a rating. The search problem is also not that big, given the size of the catalog they serve per country. The traffic comes from local caches. What is all this for, except keeping engineers in the Bay Area busy?

                                                                    1. 19

                                                                        Just a PSA: I don’t work for Netflix and never have; all of this is mostly conjecture from experience.

                                                                        Sure, I think that microservice bloat is probably a problem that they have, and many of the FANG companies suffer from NIH (not-invented-here syndrome), in some cases because of (IMO) broken promotion processes that require engineers to ship “impactful” work at all costs, and in others just because they have an unlimited amount of money to spend on engineering time.

                                                                        That being said, even the most trivial problems become quite difficult at the scale that they’re working at – they have 125 million subscribers worldwide, which means peak time is almost all of the time. In addition, maybe you only use search and ratings, but what about admin UIs? What do customer service teams use? What tooling do content creators use to get materials onto their platform, and what do they use to monitor metrics for content once it’s uploaded? What about ML and BI concerns, SOC2 concerns, GDPR concerns? I could go on forever, perhaps. It’s very difficult to reconstruct all of the reasons any platform evolved the way it did without getting a historical architecture overview. But! Their service is very reliable and their business is profitable, so they must be doing something right. (not that there isn’t always room for improvement)

                                                                      1. 15

                                                                        There was a good presentation at StrangeLoop last year: Antics, Drift, and Chaos. The short version is “Netflix is a monitoring company that, as an interesting and unexpected byproduct, also streams movies.”

                                                                        1. 1

                                                                          this is great! thanks for the link – I’ve got to get to strangeloop next year.

                                                                          1. 1

                                                                            What kind of monitoring do they do, do you know?

                                                                            1. 3

                                                                              We use Atlas for monitoring.

                                                                          2. 1

                                                                            The result and the press are not as important as the journey. Being able to fail over such a huge infrastructure that quickly is impressive, but the most important part is how they managed to achieve this and improve their workflow, resiliency, and many other things along the way!

                                                                            1. 1

                                                                              I assume these other boxes are Very Important™ for authorization and provide the search/indexing functionality of their service. The CDN boxes they ship out do nothing but host the videos, and not all videos exist on each box, so something would have to handle directing you to the correct node.

                                                                              You can’t stream the videos if you can’t get authorization, so…

                                                                              1. 1

                                                                                Those boxes they ship to ISPs only hold a subset of content. They still have to deal with routing a request to the closest node with the content they want, and update the ISP cache box with that content when there’s a spike in demand for something that isn’t cached locally. If your AWS nodes are down and nobody on the ISP requested Star Trek in the last N hours, you’re up shit creek with the customer requesting it unless you have a good failover strategy.

                                                                                I doubt those ISP cache nodes do local authentication or billing, either.

                                                                                1. 1

                                                                                  Do you know where the movie content lives though? I’d be surprised if any of it was served from AWS hosts, instead I’d expect it on a CDN somewhere. I don’t think @fs111 is saying that Netflix doesn’t do anything, but rather does their architecture actually make sense given what they do?

                                                                                  My two cents is that it is probably overengineered, and that is probably because it grew organically, with nobody really knowing what they were doing. With hindsight we could probably say which things are needed or could be done more simply.

                                                                                  1. 2

                                                                                    The video content, at least as of a couple of years ago, is encoded by EC2 instances into a bunch of qualities/formats (some on demand, I believe?), which live in S3 and are shuttled around to various ISP cache nodes as needed.

                                                                                    Netflix doesn’t use a CDN, they are a CDN.

                                                                                    1. -1

                                                                                      Netflix doesn’t run S3, though, which, for my point, is no different from outsourcing to some CDN.

                                                                                      1. 2

                                                                                        S3 isn’t geographically distributed at all. It’s RAID with a REST API. It’s nothing like a CDN – Netflix does all the CDN things (replication, dynamic routing-by-proximity, distributing content to multiple edges close to customers) at their own layer above the storage layer.