1. 16

    This article starts off with an easily defensible definition of cargo culting: it is fine to understand the what, but not the how, of a component in a system. That makes a lot of sense, I know that I want to secure communication between a user’s browser and my bank’s website, so I use SSL because I understand that it is the mechanism through which connections are secured even though I might not understand the cryptographic processes. I don’t ritualistically copy and paste code from old projects until I get a lock icon to show up in an address bar because my team observed that transactions aren’t modified when a lock icon is in the address bar, which is closer to the practice and danger of cargo-cult programming.

    This article misses the superstitious nature of the actual danger of cargo-culting. The HTTP header example of things that even senior developers might cargo cult is actually a good example of how badly it can go wrong. It’s a big problem to copy the HTTP headers in the responses from your last project to your new one because, “well, I think this is how I got a page to load in a browser last time.” Cargo culting is not understanding what caused the page to load last time, but reusing the same code anyway in hopes that it will happen again. Maybe Cache-Control: public made sense on my restaurant menu page, but now I’m serving one user’s bank statements to a different user and I can’t even start to figure out why.
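
    To make the Cache-Control point concrete, here is a quick way to see what a response is actually telling caches to do (the URL is made up):

    curl -sI https://bank.example.com/statements | grep -i '^cache-control'
    # A per-user page wants something like "Cache-Control: private, no-store";
    # the "Cache-Control: public" that was fine on the menu page lets a shared
    # cache hand one user's response to the next.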

    1. 6

      I think it’s pretty reasonable to have a pony tag if we’re going to expect it to keep coming up.

      I also would support adding an esolang tag for all the articles on things like brainfuck, APL, INTERCAL, forth, Chef, and so on.

      1. 11

        I get what you’re trying to say, but I’m concerned to see APL and forth listed as esoteric, because it can lead people who don’t know them to lump them in with languages that are intentionally obtuse, jokes, and satires, instead of with useful software development systems.

        1. 3

          Well, the way I figure it, there are basically only a few sorts of languages that we’d encounter here:

          • Widely-used and/or widely-written-about languages (C, Python, Ruby, Javascript, Clojure, Erlang, Haskell, Lisp, SQL, Go, Java)
          • Flavor-of-the-month new hotness languages (Elm, most Javascript derivatives, maybe Pony here, Nix, whatever the fuck the urbit stack is, Ethereum)
          • Exotic platform specific or application-specific languages (APL, kdb/q, J, Prolog, COBOL, Fortran, MUMPS, Forth, Chef, Postscript, Smalltalk, Ada, Eiffel, Modula-x)
          • Joke languages and Turing tarpits (Malbolge, INTERCAL, brainfuck and friends, Piet, x86 assembly)

          So, what we don’t want to do is add a tag for every programming language, not least because we don’t want to leave out new stuff or waste a tag on some weird variant of left-handed object-coded Smalltalk.

          We also don’t want to leave submissions having primarily to do with a single language unmarked as such–an article on a Forth interpreter doesn’t really fall neatly into compilers (interpreters aren’t), compsci (no longer academic enough), hardware (closest but not really), software (unless it’s a new release). The best we get is probably api (which is wrong because it’s the wrong problem domain) or programming (because it involves computers somehow).

          (Incidentally, this sort of reasoning is why I am super bitchy when people just slather random tags on things and fail to use the correct ones–e.g., not marking submissions about releases using the software tag!)

          We don’t want to make a new tag for every language. We also don’t want to leave things untagged until a tag becomes available if they could at least fit in a broader category (say, languages). It’s tricky.

          Also note that the “new hotness” languages can either wane in popularity or become widely used, so their path to tagdom is pretty straightforward.

          ~

          I see your point about calling something like Forth esoteric, but…it is. So is Prolog, and Ada and Smalltalk and all sorts of other excellent languages that just aren’t used commonly. Some languages will never be popular, or will never be popular again.

          If it helps you sleep better at night, consider that all languages in production–as they are used for things they weren’t initially designed for–become satires of their own practices.

        2. 2

          Yes, but will the trend last? I recognize that Pony is being discussed a lot. However, unless it starts getting used, we may have to remove that tag in the future. Not really a big deal, I’m just pointing out that we haven’t yet seen any indication that it’ll stick around.

        1. 8

          Depends on the way you expect the program to be most commonly interacted with. Comparable programs like bc, awk, and sed each act differently when invoked without arguments. If you expect rolls to usually be provided as arguments, be like awk and use an -f FILE option to accept rolls from a file instead (with -f - being stdin). Go with option 2 by printing usage if no files or expressions were provided. If you expect rolls to usually come from a file, be like bc and interpret each positional argument as a filename, defaulting to stdin if no positional filenames are provided.
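
          A sketch of what the two conventions would look like, assuming the program is called roll and takes the same expressions as in your examples:

          roll 3d4+1 2d10-1         # awk-style: expressions as arguments
          roll -f rolls.txt         # awk-style: read expressions from a file
          echo 1d6 | roll -f -      # ...or from stdin
          roll rolls.txt extra.txt  # bc-style: positional arguments are files
          roll < rolls.txt          # bc-style: stdin when no files are given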

          As far as output goes, your current format is not easy for other tools to parse. Tools usually expect record-and-field output, typically with newlines separating records and tabs or spaces separating fields. I would expect your output to look more like this:

          1d6 1 6 3.5
          3d4+1 4 13 8.5
          ev: INVALID: invalid format
          2d10-1 1 19 10
          

          I could see how some people would recoil from a dense format like that, so you could make it align more nicely, or add column headers on stderr, or whatever. Or add an option for --pretty-print/--no-pretty-print. Just don’t make it impossible to apply awk, cut, sed, etc. to those results.
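
          With that format (expression, min, max, mean as whitespace-separated fields), downstream filters stay one-liners; roll is again a stand-in for whatever the command ends up being called:

          roll -f rolls.txt | awk '$4 > 5 { print $1 }'   # expressions whose mean exceeds 5
          roll -f rolls.txt | cut -d' ' -f1,4             # just the expression and the mean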

          1. 3

            Seconding this. My lazy heuristic for the unixiness of a program is: can I feed it from sed/awk and consume it from sed/awk?

            So: stdin for inputting lines, stdout for uglified values as above, maybe -e to eval immediately and return, and -f for slurping a file. No args should behave like cat, probably.

            1. 2

              There’s a few things you can do to deal with the “dense presentation” issue.

              • Always print a header line first
              • Document that people can pass the output through column -t to get a pretty table
              • Separate the fields on each line with a tab (\t) rather than a space. Since each field is very likely less than 8 columns wide, it will very likely always line up automatically

              Also, you probably already do this, but just to be sure: the “invalid format” error message should be written to stderr rather than stdout.

              (there’s a case to be made for always writing the header to stderr, too, but (a) that makes it more difficult to pipe through column -t, (b) it’s easy enough to strip from stdout with tail -n +2, and (c) some programs interpret “anything written to stderr” as “something bad has happened”)
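
              Putting that together, a session might look like this (the roll command and its flags are hypothetical, per the thread above):

              roll -f rolls.txt | column -t              # pretty table for humans
              roll -f rolls.txt | tail -n +2 | cut -f3   # drop the header, keep the third column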

            1. 25

              I feel uneasy about this talk. Sure, it contains true statements that I can fully endorse. But there is this undertone in it. It feels a bit like they would advise cooks to do away with knives in the kitchen, because you can cut herbs with scissors and onions with a grater and switch over to buying ground meat, because knives can be dangerous and you need experience to handle them. Rich Hickey’s talk Simple Made Easy is quite a contrast in comparison: http://www.infoq.com/presentations/Simple-Made-Easy

              It seems like they are trying to solve a social / educational problem with code quality by leaving out features from the language. My feeling is that, in the end, the lack of such features will be compensated for with boilerplate code, leaving users with complex and large code bases.

              The only way to reduce complexity imho is careful design, design review and culture that values thought over commit frequency.

              1. 11

                I am not a Go fan by any stretch, so watching this video was an odd experience. I found myself agreeing with many of Pike’s statements but then not understanding how he wound up at Go. For example, I didn’t understand how the Go team got to interfaces and run-time subtyping. Objects are complicated: they combine many concepts (data, compute, identity, time, dispatching). Parametric polymorphism + structs seem to get you many of the benefits without the cost, are more type safe (none of that interface{} stuff), and are simpler. The talk of the great simplicity of Go is not borne out, IMO.

                1. 7

                  but then not understanding how he wound up at Go

                  exactly my feeling

                  1. 3

                    run-time subtyping

                    That’s wrong. Interface sub-typing is structural and is checked by the compiler.

                    Parametric polymorphism + structs seem to get you many of the benefits without the cost

                    Except there are costs depending on how you implement it. Just because the costs aren’t important to you doesn’t mean they don’t exist.

                    The talk of the great simplicity of Go is not borne out, IMO.

                    It is borne out in my experience of using the language and watching others pick up, learn and use the language effectively.

                    1. 6

                      That’s wrong. Interface sub-typing is structural and is checked by the compiler.

                      Yes, the types are checked, but the dispatching happens at run-time and it is the only form of polymorphism in the language, which is what I was trying to express.

                      My point is not if Go is simple or not, but that in listening to the talk I, personally, felt a disconnection between what the speaker said and the language he got out of it. YMMV, I’ve been on lobste.rs long enough to know that you have a different opinion and experience than me.

                      Except there are costs depending on how you implement it. Just because the costs aren’t important to you doesn’t mean they don’t exist.

                      I’m not entirely sure what you mean here: since the types exist only at compile time, the implementation is a type checker (which the OCaml type checker can do on par with Go, given that it has a much more sophisticated type system), and it compiles down to more or less the same code that would exist now. So your counter here doesn’t really have any merit given the technical aspects of implementing what I said. It could have non-technical costs, but you seem to be referring to implementation here, if I read you correctly.

                      1. 1

                        I’m not entirely sure what you mean here: since the types exist only at compile time, the implementation is a type checker (which the OCaml type checker can do on par with Go, given that it has a much more sophisticated type system), and it compiles down to more or less the same code that would exist now. So your counter here doesn’t really have any merit given the technical aspects of implementing what I said. It could have non-technical costs, but you seem to be referring to implementation here, if I read you correctly.

                        If your implementation strategy is monomorphization, then you’re almost certainly going to pay for it with compile times. If Go had parametric polymorphism, and folks started using that in lieu of interfaces in some cases, then you’ll end up with more work for the compiler to do.

                        Aside from the compile time cost, you’ve now also got to deal with language complexity, implementation complexity and code complexity.

                        My point is not that one is better than the other, my point is that various approaches to polymorphism come with trade offs, and waving them away is without merit.

                        1. 2

                          If your implementation strategy is monomorphization, then you’re almost certainly going to pay for it with compile times. If Go had parametric polymorphism, and folks started using that in lieu of interfaces in some cases, then you’ll end up with more work for the compiler to do.

                          It doesn’t have to be, it can be a pure compile-time verification and compile down to interface{}.

                          Aside from the compile time cost, you’ve now also got to deal with language complexity, implementation complexity and code complexity.

                          I believe just parametric polymorphism would be simpler in both implementation and usage than interfaces that are structurally typed. The dispatch table alone requires a fair amount of work relative to this. But YMMV.

                          My point is not that one is better than the other, my point is that various approaches to polymorphism come with trade offs, and waving them away is without merit.

                          I’m not, though. My claim was that parametric polymorphism + structs lets you do the existing structural typing, and you get type safety. I’m claiming that you get more for less.

                          1. 1

                            It doesn’t have to be, it can be a pure compile-time verification and compile down to interface{}.

                            … and now you pay for it with slower run time performance because everything is boxed. Adding a convenient way to write slow code will dramatically change how code in the language is written and its performance characteristics. It’s a huge trade off.

                            I believe just parametric polymorphism would be simpler in both implementation and usage than interfaces that are structurally typed. The dispatch table alone requires a fair amount of work relative to this. But YMMV.

                            Just parametric polymorphism is quite limiting. Are you really just thinking of ML style polymorphism with its eqtype kludge? Or do you also want the ability to specify more refined constraints like Haskell’s type classes? Or perhaps Go should grow ML’s module system to make up for the limitations of pure parametric polymorphism?

                            Parametric polymorphism is already present in Go in a limited form. It is blessed, but it is pretty amazing how far just a little bit will get you.

                            1. 2

                              … and now you pay for it with slower run time performance because everything is boxed.

                              No, polymorphism in Go already requires objects so the situation would be no different than now when it comes to polymorphism. Java has similar restrictions for parametric polymorphism.

                              Just parametric polymorphism is quite limiting.

                              It’s really not that limiting, with structs you can get what Go gives you now + extra type safety, unless I’m missing something.

                              Parametric polymorphism is already present in Go in a limited form. It is blessed, but it is pretty amazing how far just a little bit will get you.

                              Cool! Can you give me an example or point me in the right direction? As far as I am aware, Go only has subtyping polymorphism. Or do you mean the builtins like map?

                              1. 1

                                No, polymorphism in Go already requires objects so the situation would be no different than now when it comes to polymorphism. Java has similar restrictions for parametric polymorphism.

                                I’m well aware of this. My point stands. I was pretty clear: adding a convenient way to write slow code will dramatically change how code in the language is written and its performance characteristics. Parametric polymorphism would increase the amount of generic Go code, and thus increase the amount of slow Go code. (Given your implementation strategy of “box everything.”)

                                It’s really not that limiting, with structs you can get what Go gives you now + extra type safety, unless I’m missing something.

                                I don’t see any reason why we should believe that parametric polymorphism covers all use cases of structural subtyping.

                                Cool! Can you give me an example or point me in the right direction? As far as I am aware, Go only has subtyping polymorphism. Or do you mean the builtins like map?

                                That’s what I meant by “blessed.” Maps and slices and chans and pointers are type constructors. append, make, len and a few others have special built-in polymorphism.

                        2. 1

                          Yes, the types are checked, but the dispatching happens at run-time

                          Yes, but this is also the case in Scala or OCaml, isn’t it? (unlike Rust, C++ or Haskell which mostly use code specialization/monomorphization)

                          1. 1

                            I cannot speak for Scala; however, that is true of the object part of OCaml (which is very infrequently used). But OCaml also has parametric polymorphism, which does not implement subtyping; it just lets you express the relationship between types in an expression, and that is how one implements functions like map. In OCaml the type is ('a -> 'b) -> 'a list -> 'b list, which is not possible to express in Go in a type-safe way.

                            So this isn’t about code specialization or code monomorphism, it’s about what the type system lets one express (or in this case, not express).

                    2. 5

                      One of the end results of this extreme focus on simplicity is that any Go code is relatively easy to understand. I cannot say the same for other languages.

                      Google wanted to build a language that wasn’t intimidating, that was easy to hire new people for, where you can shuffle devs from project A to B with little overhead, and Go is the result. For language nerds like me, this is boring. But I can see the business advantage. Hiring a dev to be productive within days (or hours) is a big win.

                      I do agree though, there are other ways to achieve the same end result. Go just takes the shortcut and forces you into this approach. Brutal, but effective.

                      1. 10

                        Brutal, ambitious, time will tell whether this effectively works

                        1. 5

                          For language nerds like me, this is boring

                          I consider myself a moderate language nerd. After six months of Go, two things have worked well for me.

                          One is generating the most valuable repetitive code. I’ve used emacs lisp rather than Go specific tools, due to familiarity.

                          The other is implementing little languages more expressive than Go where that expressiveness is more valuable than hand written Go. Those have been logic / query languages over graphs.

                          1. 5

                            Go is like the Simple English version of Wikipedia. It’s obviously simple to anyone who is already comfortable with English, like Go is obviously simple to anyone who has experience with C-descended curly-brace languages. The assumption is that the simplified version is easier for new speakers to pick up, which hasn’t really been studied. I’d like to see Google’s results here in teaching people to program with Go.

                            Rob Pike is right in that programming is moving toward multi-core and distributed processing, and Go is right to focus on it. The problem is that the barrier to getting good at programming in this environment isn’t overcoming notation; we use notation everywhere. People who aren’t strictly software engineers have been using APL and derivatives like K, and things like Excel formulas with great efficacy. Forcing them to use something like Go because it’s “simpler” would probably make their jobs harder, just like forcing you to write in simple English would make it more difficult for you to communicate. Just try writing something in Toki Pona. The barrier to becoming a good programmer in the world that Google is probably correctly predicting is more conceptual: understanding how distributed processes interact with each other. How channels work, how the stuff in the sync package works. Go isn’t doing much to simplify that, even adding its own notation for channels. Notation and syntax probably aren’t the problem.

                            1. 3

                              Your last statement appears to be at odds with the previous one. If notation and syntax aren’t the problem, then adding custom notation for channels doesn’t complicate things.

                              I don’t think Go is using syntax to create simplicity. It’s the type system, the tooling, and the standard library that are doing it. Since you can’t overengineer or overcomplicate things with hyper-elegant abstractions, like you could with Haskell or Scala, you’re restricted to a simple, albeit verbose, toolset.

                              Hence, your comparison with Simple English is on the nose. Limiting yourself to a simple vocabulary may cause things to become needlessly verbose, but that’s a tradeoff if you don’t want the cognitive overhead caused by succinctness.

                              Time will tell if such simplicity is better for the future of software engineering than the opposite, whatever that is.

                              1. 2

                                 Interesting comparison to the Simple English Wikipedia. Even as a native speaker, I find it much easier to understand than Pynchon. Maybe it helps people learn English, maybe not. I do know I would feel much more comfortable editing Wikipedia than Pynchon. They each have their place, but if the role of software in a company is to share information (more or less), one option is more appropriate.

                                1. 2

                                  I’m not convinced that comparing Go to Simple English is relevant. I agree that both languages share a goal of simplicity, but the former is a programming language, and the latter is a natural language, which makes the comparison hazardous.

                                  Simple English provides a simplified grammar, and Go is similar on this topic. But Simple English also provides a limited vocabulary, unlike Go which doesn’t try to limit your vocabulary in any way (libraries can be as large and rich as you need).

                                  The real question is not about grammar and vocabulary, it is about abstraction. Most complaints are about Go lacking generics and sum types. I’m not sure this level of abstraction finds an equivalent in the English grammar and vocabulary. It’s probably expressed using the existing grammar and vocabulary. It’s a “layer” above English.

                                  In other words:

                                  • a natural language like English will express an abstract concept like generics using its standard grammar and vocabulary;
                                  • a programming language like Rust or OCaml will integrate generics directly into its grammar.
                              2. 3

                                The only way to reduce complexity imho is careful design, design review and culture that values thought over commit frequency.

                                But there are many ways to increase complexity, and designing a language that doesn’t bring too much incidental complexity into your project is not a trivial task.

                                My feeling is that, in the end, the lack of such features will be compensated for with boilerplate code, leaving users with complex and large code bases.

                                “Are you quite sure that all those bells and whistles, all those wonderful facilities of your so called powerful programming languages, belong to the solution set rather than the problem set?” – Dijkstra

                                It feels a bit like they would advise cooks to do away with knives in the kitchen, because you can cut herbs with scissors and onions with a grater and switch over to buying ground meat, because knives can be dangerous and you need experience to handle them.

                                To me it feels like other languages tend to give you a swiss army knife containing a grater, a chainsaw and a scalpel, just to cut a tomato. Except C, which gives you a sword that makes it too easy to accidentally cut the planet underneath you in half, but has no handle.

                                1. 2

                                  “Are you quite sure that all those bells and whistles, all those wonderful facilities of your so called powerful programming languages, belong to the solution set rather than the problem set?” – Dijkstra

                                  It is pretty hard to channel Dijkstra on this matter: http://www.cs.utexas.edu/users/EWD/OtherDocs/To%20the%20Budget%20Council%20concerning%20Haskell.pdf

                                  It definitely is a brilliant question.

                                  1. 3

                                    I don’t fully understand your criticism of the talk. What (I think) Pike says is that, when designing a PL, features should be made (a) few, (b) orthogonal and (c) carefully designed, so that their use would be simple; and that this may require some complexity in implementation, or hard thinking about design. Elsewhere he says that a reasonable amount of boilerplate is an acceptable price for conceptual simplicity. So I don’t think I agree with your feeling that it’s about taking essential tools away from programmers.

                                    Sure, lack of features will coerce programmers into a certain programming style. But so does every programming language; they’re all “opinionated” by their nature.

                                    1. 4

                                      It probably all boils down to which side of that boilerplate<->abstraction trade-off we emphasize. I have a tendency to opt for a more functional approach with more static guarantees and I have a bias against boilerplate, because I have seen it as a source of bugs.

                                      So choosing between Rust and Go, I would probably pick Rust (if my decision was based on language features only).

                                      1. 1

                                        I have a tendency to opt for a more functional approach with more static guarantees

                                        Well, this makes sense.

                                        and I have a bias against boilerplate, because I have seen it as a source of bugs.

                                        Badly designed language features and unreadability caused by complexity are sources of bugs that Pike has seen, I guess.

                                      2. 2

                                        My criticism of the talk is probably not completely independent from my impression of Go. I can understand language designers opting for exceptions, or opting for algebraic datatypes as Rust did, for error path handling.

                                        Sure, lack of features will coerce programmers into a certain programming style. But so does every programming language; they’re all “opinionated” by their nature.

                                        I don’t believe that cutting back features will coerce programmers into a programming style. If a language has fewer features, more code gets built around the missing features, with arbitrary choices in “style”.

                                        I admire Go for one feature: gofmt was a terrifically good idea.

                                        1. 3

                                          My criticism of the talk is probably not completely independent from my impression of Go.

                                          Which is fair enough, even though Go’s designers admit that it has warts and lacks certain features that would be really nice to have (e.g., generics).

                                          I don’t believe that cutting back features will coerce programmers into a programming style.

                                          I think languages nudge (or sometimes force) programmers in certain directions, e.g., by making certain constructs convenient to use (that’s why everything in Perl (4) is a regexp substitution), or by somehow forming a culture among the users (that’s why everything in Java is a final abstract factory factory factory).

                                          If a language has fewer features, more code gets built around the missing features, with arbitrary choices in “style”.

                                          I’m not sure about that: if a certain language feature is lacking, one can just use another construct, or write code in a completely different way that will be natural for this language. In C++ (the language that caused Go to be designed), it seems, quite a lot of code is written for working around broken features.

                                      3. 1

                                        Also, Haskell was a far, far simpler language in 2001 than it (well, ghc) is today.

                                    2. 2

                                      I guess the core of my discomfort is that some tasks are easy if you have the knowledge about the code that the compiler already has.

                                      Thus in a minimal language, all such tasks not covered by the language’s design become hard, sometimes impossible.

                                      One of the things I love about D is that it has provided a sane and sensible interface to access, act on and use much of the information the compiler has… and then feed that back into the compiler.

                                      i.e., many features in D which in other languages would require compiler support have been implemented by allowing an adult conversation between the code and the compiler at compile time.

                                      1. 1

                                        seems like they are trying to solve a social educational problem

                                        Except the earliest writings say the core team set out to design a language for themselves.

                                        The only way to reduce complexity is careful design

                                        Agreed, and judicious use of tools to support those designs.

                                      1. 1

                                        I made a similar utility called jarg a while ago, as a way to be able to use the shorthand value syntax of HTTPie in other commands. It differs slightly in that it uses HTML JSON form semantics, and can also output YAML and form-encoding. I think jo has the better name though

                                        1. 1

                                           Self-documenting make to get similar functionality to rake --tasks, fab --list, etc. is pretty useful, but I don’t think it should be the default behavior. The general structure of Makefiles is mostly pretty great, but the lack of introspection can be a problem. I’ve always wanted to be able to run make --targets to get a list of targets; having their descriptions would be great too. Being able to feed something like make --targets --no-phony instead of ls or find into entr would be amazing; it would save me the trouble of listing my targets twice.
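
                                           There’s no built-in make --targets, but a rough approximation is to scrape rule names out of the Makefile; it misses included files, pattern rules, and anything generated at runtime, and you’d still want to filter out phony targets before handing the list to entr:

                                           grep -E '^[A-Za-z0-9_.-]+:' Makefile | cut -d: -f1 | sort -u
                                           # and, roughly, the entr workflow described above:
                                           grep -E '^[A-Za-z0-9_.-]+:' Makefile | cut -d: -f1 | sort -u | entr -c make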

                                          1. 17

                                             This this this! I couldn’t agree more. Even in the PC world, the secure boot “virus” is creeping in to scare people away from installing any operating system that hasn’t been signed. The same “security” arguments are used by banks, which may tell you that they won’t let you do a transaction with an “insecure” rooted device.

                                             I don’t own a smartphone, and tell anyone who wants me to get one to go to hell. I can’t stand some of my friends, and now even family, using their devices while we spend time together in a restaurant, pub or somewhere outside in the city. And no, it’s not because there was no dialogue or something.

                                             The point about “consumption” is a very strong one, and it really makes you think: I bet most programmers became programmers by playing around with problems they observed on a PC. It is as simple as installing an IDE and learning a programming language. I don’t see an equivalent for smartphones, and more and more kids don’t even own PCs these days. If you want to deploy an app on iOS, you even need to sign your Xcode installation and obviously have a Mac. I bet most iPhone users are on Windows. For Android, developing on Windows is painful, so you are left with Linux as a beginner, and even there it’s hard to dig around in this unstable mess!

                                             Apps teaching you how to program most of the time look more like toys than real tools. But what do you expect? You can’t do much with a smartphone anyway.

                                             As sad as it is, I have little hope left for the future of open source software development as we know it. It seems as if more and more young people are drawn away by the big companies to be turned into willing, paying consumers. If I had a company and saw this opportunity, I’d probably do the same. The smartphone though is just a symptom of this development; the disease already started on the PC. Only those kids left with a PC really are able to join as new developers. And the longer I observe the “market”, the more I notice that the number of kids with a real computer at home who join open source projects is getting smaller and smaller.

                                            In the end, I’m sure that kids would rather play Minecraft than listen to a talk by the highly charismatic and sympathetic Richard Stallman, or play a puzzle on their iPad rather than thinking about solving programming problems. There’s always an app for that already.

                                            1. 19

                                              I’m not surprised or even worried that kids are more interested in playing Minecraft than listening to anybody talk. I felt the same way as a kid, except I had LEGO. That doesn’t mean they won’t grow up to discover and appreciate his ideas, it probably even starts them on a path toward it. I think it also has some deeper consequences: it nurtures the desire to build and explore. Maybe it even gets a few of them to start to question why it’s so fucking hard to tinker and explore in other situations, like, why is it so difficult to make a stupid little app for my phone? Every time I watch Inventing on Principle or The Future of Programming I get a little hopeful that these ideas will start to materialize on this or another new computing frontier.

                                              1. 8

                                                This is why I always say it’s no good to gauge the quality of our society by looking merely at how young people lead their lives: it’s better to give them time to grow up first.

                                              2. 5

                                                 One thing that really makes a difference between PC software and smartphones:

                                                My PC software really feels like it was built by the author–for the author, while the stuff I have seen on smartphones feels like niggling, ad-infested shareware that subtly tries to influence you to do things that make money for the author.

                                              1. 13

                                                TLDR: I don’t like smartphones because they are not PCs.

                                                This comment was brought to you using a smartphone.

                                                1. 5

                                                   But I’m sure you also own a PC for all the real work, right? The problem the author points out is that more and more people don’t even own PCs anymore, given there are fewer and fewer primary incentives for that. Even kids nowadays mostly spend their money on expensive smartphones; there’s often no money left for a computer.

                                                   Later on, when a kid might come up with an app idea, which can be a secondary incentive, there then won’t be a PC to work on those things. It may sound funny, but this is a real problem, and it will be devastating.

                                                  1. [Comment removed by author]

                                                    1. 2

                                                       I totally agree with you - people who use an iPad today are unlikely to have been the type who used the PC as a creation device.

                                                       What we are slowly losing is the malleability of a PC in the house. I would bet that there are many adults in comp-sci that started off tinkering with the home PC that was probably bought to help the family do taxes or write school reports. It is becoming harder and harder to come across that kind of opportunity today.

                                                    2. 10

                                                     The implicit assertion here is that PCs will remain the only viable way to make things. Something like TouchDevelop is still toy-ish, but I’ve been able to make little games and apps while sitting at bars. I think what we have so far is primitive compared to the possibilities; there’s still so much to explore.

                                                      1. 9

                                                        But I’m sure you also own a PC for all the real work, right?

                                                       More and more people are shifting entirely to tablets and phones for “real work” – or more specifically – all their work. I have seen this first hand: a friend of mine has been living without a “classic computer” since the iPad Pro release. He does 100% of his work on his iPhone and an iPad Pro and claims he is more productive than ever. Companies are doing this as well – as iPads are easier to maintain.

                                                        I suspect (for better or worse), the general purpose programmable computer will be a specialized tool used by engineers and will fall out of the consumer space in the next couple decades. Developers will rage about it – and it won’t matter. Just like when people raged about the inability to repair their own cars due to growing computer control and complexity – and it didn’t matter.

                                                        1. 15

                                                          General purpose computers have not been an unqualified success for non-technical users; is it any surprise that people drowning in a foetid sea of viruses, malware, Windows, MDI, the OS X Finder, et al would grab hold of the first lifeline that allowed them to simply get on with their technologically mediated lives?

                                                          1. 2

                                                            More and more people are shifting entirely to tablets and phones for “real work” – or more specifically – all their work.

                                                           I don’t really buy it, because the tablet market is stagnating - sales are shrinking [1]. Admittedly, the PC market also hasn’t been great, but in contrast to tablets, many six year old Windows 7 PCs can still run current software fine. So, there is less incentive to buy a new PC every three years.

                                                           He does 100% of his work on his iPhone and an iPad Pro and claims he is more productive than ever.

                                                            As long as we don’t have statistics over a large population, this is just an anecdote. Of course, there will be some people who use just tablets.

                                                           I think general purpose computers aren’t dying yet, because (1) people keep around and use their old PCs; and (2) cheap Windows laptops are approximately in the same price bracket as usable tablets or Chromebooks. I do agree that usage patterns have changed a lot, moving from local applications to cloud applications. So, there could be a rapid change from general purpose computers with a keyboard to computers that only have a browser (and a keyboard).

                                                           I am of two minds about this. For the general population, computing will be safer. Family incidents like malware, lost files, viruses, etc. will be fewer. But it’s indeed also harder for someone who would like to hack on their system to do so.

                                                            [1] http://www.dailytech.com/Its+Official+the+Tablet+Market+is+Stagnant/article37123.htm

                                                      1. 1

                                                        If you have ever used djbdns, you’ve also used ucspi-tcp, which is djb’s take on the super-server idea.

                                                        Almost every time I start writing a Go or node.js program, I start off by writing a tiny server. It’s not a lot of boilerplate, but it’s always there. The self-containment is nice, but I wonder how many times I want to write and tune the same server variations. When it comes to node, the minimal server snippets are nice, but you usually end up with some mixture involving the cluster module, or instrumenting for PM2 or naught. It would be nice to skip all that and focus on writing the fun part of the app, and let some other program worry about the networking.
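
                                                             That’s essentially what ucspi-tcp gives you: the program just reads stdin and writes stdout, and tcpserver owns the sockets (the handler name here is made up):

                                                             # accept connections on 0.0.0.0:8000 and run ./handler once per connection,
                                                             # with the socket wired to the handler's stdin and stdout
                                                             tcpserver -v 0 8000 ./handler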

                                                        1. 4

                                                               I’m going to be tightening up some small utilities I’ve been working on, urp and jarg, mostly to scratch my own itches doing some HTTP plumbing in the shell. urp lets you work with URLs in a structured way, and jarg provides a shorthand for writing JSON and form-encoded values with a syntax lifted from httpie.

                                                          1. 2

                                                            Cool! I assume you’ve seen jq?

                                                            1. 3

                                                              jq is great. I also use pup when I need to do the same for HTML. Another thing I use is pygmentize, for highlighting and formatting. I have a bunch of little glue functions and aliases around them and cURL. Most of my work is developing HTTP services but I interact with them almost entirely through terminals, I want to put together a quick writeup about what that setup looks like.
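
                                                                   The glue mostly ends up as tiny functions and aliases like these (the names are just illustrative, not the actual setup):

                                                                   jsonget() { curl -s "$1" | jq .; }              # fetch JSON and pretty-print it
                                                                   htmlget() { curl -s "$1" | pup "${2:-body}"; }  # fetch HTML and filter it by CSS selector
                                                                   alias hl='pygmentize -g'                        # guess the lexer and highlight whatever is piped in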

                                                              1. 2

                                                                jq and pygmentize are awesome. I hadn’t seen pup before, thanks!

                                                          1. 7

                                                                     Check out column-oriented databases. One of the strengths of column stores is their ability to efficiently perform these types of aggregations. C-Store and the related papers are a good place to start. The Design and Implementation of Modern Column-Oriented Database Systems is good too. kdb+ is a very highly regarded in-memory column store used in the financial sector; there is a free version you can play around with.

                                                            1. 4

                                                              I use HTTPie when interacting with HTTP. Even in the simple case, http example.com, it adds syntax highlighting to the command-line output. And for API interaction, it requires memorizing fewer command-line flags: compare

                                                              curl --user "smparkes" \
                                                                   --request POST \
                                                                   --data '{"issue": "15", "head": "smparkes:synchrony", "base": "master"}' \
                                                                   https://api.github.com/repos/technoweenie/faraday/pulls
                                                              

                                                              to

                                                              http --auth "smparkes" \
                                                                   POST \
                                                                   https://api.github.com/repos/technoweenie/faraday/pulls \
                                                                   issue=15 head=smparkes:synchrony base=master
                                                              

                                                              HTTPie supports downloading a file with http --download example.com too, though it’s not especially better than curl -O for that use-case.

                                                              1. 3

                                                                HTTPie’s shorthand for JSON is real nice, but I love cURL. So I just hacked together jarg so that I could have that in my own scripts that have to shovel around JSON. You can tighten up the curl invoke a little more now:

                                                                curl -H "Content-Type: application/json" \
                                                                    https://smparkes:@api.github.com/repos/technoweenie/faraday/pulls \
                                                                    -d "$(jarg issue=15 head=smparkes:synchrony base=master)"
                                                                

                                                                         HTTPie is also nice in that it sets the content type to JSON for you; that’s a pain in cURL. You can fix it with a quick alias though:

                                                                alias curl-json="curl -H'Content-Type: application/json'"
                                                                
                                                                1. 3

                                                                  If you don’t mind Python or another high-level language as a dependency, there are a lot of nice curl alternatives. Requiring only libc and being included in many base systems is one of many nice things about curl.

                                                                  Amusingly, the whole article could’ve been shortened to, “It’s cat for URLs.”

                                                                1. 7

                                                                  Absolutely fantastic advice, particularly with regard to “perks” - that free case of beer every week in the office, the Xbox in the lounge, the Aeron chair at your desk, they all sound really great until you realize that you’re talking ~$1500/year and you just gave up $15,000/year in potential salary requirements over them. You can buy all the beer you want with more money.

                                                                  1. 6

                                                                    Money: The Unit of Caring is an interesting essay along those lines.

                                                                    1. 3

                                                                                 Look at it from the opposite point of view. Suppose you’re working for a company that could have made your work life substantially more pleasant by buying a (tax-deductible) Aeron chair, Kinesis keyboard, large monitor, ping-pong table, and beer (if that’s what you’re into; I’m not). But the company decided not to do that, because it would save them $1500 a year. After all, if you really want those things you can spend some of your post-tax salary on them! What does that tell you about the company? It tells you it’s run by people who don’t see themselves as being on the same team you’re on. Rather than considering the company’s employees and investors as a group of people making different, necessary contributions to produce something valuable, and distributing shares of the value thoughtfully and fairly, that management is more interested in playing the zero-sum game of making sure that as much as possible of whatever value gets produced gets allocated to somebody other than you, even if that’s at the cost of producing less value. [See note at bottom.]

                                                                      It tells you the same thing, of course, if they expect you to drop your salary requirements by $15K in exchange for $1.5K of perks. But my experience is that the companies with the “perks” are usually the companies that pay more, too, because the same mentality that leads companies to skimp on chairs also leads them to skimp on salaries.

                                                                      There may be other reasons a company with that kind of management is a fantastic place to work. The people you work with directly matter a lot more! But you have to keep in the back of your mind that it’s a company where the management is trying to rip you off. It may well be the kind of place where management will consider drinking beer at work that you bought yourself to be a sort of offense against the company.

                                                                      What do I mean by “even at the cost of producing less value”? Think about it: it makes no sense to buy a US$1500 Aeron chair by first paying US$133 in federal payroll tax, then giving US$2007 nominally to the employee, with another US$133 in payroll tax deducted, plus another US$374 for income tax withholding, FICA, and Social Security, leaving the employee with US$1500 to spend on the chair, rather than simply spending the US$1500 directly, at which point you don’t owe any tax at all on it. And it also makes no sense to buy a US$300 office chair instead of a US$1500 Aeron chair for a programmer you’re paying US$120 000 per year. If you depreciate that US$1200 over the GAAP-compliant 7 years, that’s US$171 per year: 0.14% of the employee’s salary. Let’s say that employee produces US$400 000 per year in value for the company, or US$200 per hour. If that chair enables the employee to produce an extra hour’s worth of work per year, because they’re less tired and hurt less, the company is getting more value from the chair than it’s spending.

                                                                      1. 2

                                                                        This is a scam used by almost every cool startup on their young employees.

                                                                        Having been burned by this before, it is a tough lesson to internalize. The perks do not add up to the lost income. It is just a way for an employer to cut costs.

                                                                        1. 7

                                                                                   I think just comparing the dollar value of the perks against lost income isn’t really fair. Perks are a signal that the company culture values and encourages the use of those perks. We could look at a company cafeteria as a sinister ploy to keep you in the office longer, but the fact that the place is nice enough that people will stick around says a lot.

                                                                          It’s possible to make an informed trade of income for culture without being ripped off.

                                                                          1. 2

                                                                            It’s possible to make an informed trade of income for culture without being ripped off.

                                                                            Absolutely agreed. I just think that, for particularly very young engineers (say, right out of college or graduate school), they aren’t able to make an informed decision yet due to lack of experience. Culture is valuable; but even when beer and Xbox etc are symptoms of that culture, they’re just symptoms. It’s tricky for a novice to the world of professional employment to make that evaluation - and many of them are fooled.

                                                                        2. 2

                                                                                     It might still sound great even after considering the pay difference. The benefits for the employer are obvious; it’s up to the employee to decide if what they get out of the deal is worth the pay cut. Would you be comfortable with a lower salary to be able to take an hour off in the middle of the day to sit down with a beer and play some Towerfall, or would you take the extra 15k to sit in a cube all year? Maybe if negotiation skills are as important as the article states, you can have your cake and eat it too.

                                                                        1. 3

                                                                                       Shorter is not always better. Giving a random Ruby developer the final code will just confuse him/her. Given the original code, without comments, (s)he would understand it just fine.

                                                                                       Using these default names and a bunch of flags most people don’t use does nothing but make the result harder to read and hide the program behaviour, just to save a couple of bytes on the hard disk.

                                                                          1. 4

                                                                                         It is super useful for people who are working within the larger ecosystem. Exposing the “random Ruby developer” to these idioms, like the -F switch and the idea of field and record separators, will get them more comfortable with record processing in their shell. Most famously, awk also uses -F to specify the input field separator, and perl’s -F switch accomplishes the same thing.
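
                                                                                         For instance, the same field split reads almost identically across the three tools (printing the first field of /etc/passwd):

                                                                                         awk -F: '{ print $1 }' /etc/passwd
                                                                                         perl -F: -lane 'print $F[0]' /etc/passwd
                                                                                         ruby -F: -ane 'puts $F[0]' /etc/passwd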

                                                                            1. 1

                                                                              It’s not a question of whether shorter is better.

                                                                              The article starts with transparent, portable, but verbose code and ends up with opaque, somewhat (though not completely, as @jdp pointed out) ruby-specific, but compact code. Both versions have their advantages, and showing the gradual progression from one to the other is a good way to teach some valuable command-line techniques.

                                                                            1. 5

                                                                              AGH! The site is up now! http://atom.io/

                                                                              1. 3

                                                                                Looks cool and all, but that config.cson file stood out to me, so I looked it up. It’s CoffeeScript object notation. What is the point of that? JSON is already a poor configuration language, dressing it up in CoffeeScript syntax doesn’t do much to improve the situation. Why not just use YAML?

                                                                                1. 1

                                                                                  Yeah, all the language plugins are in that too. I think they’re really just going for consistency–the whole thing is coded in CoffeeScript.

                                                                                  1. 1

                                                                                    Maybe. In that case, it’s weird that it uses Less for theming, I would have thought that Sass made more sense given the rest of the project’s coding style.

                                                                                2. 2

                                                                                  And the blog entry http://blog.atom.io/

                                                                                1. 7

                                                                                  If you want to limit the discussion to CSP, you should check out libtask. It’s a pretty simple drop-in library for coroutines by Russ Cox and released around 2005. There’s a lot of similarities between the prime sieve example (coros communicate over buffered channels!) and how you might write it in Go.

                                                                                  1. 3

                                                                                     I just signed up, and am using it in parallel with GA for now for my personal site. I really like the presentation; it’s leagues better than the tons of links that GA requires clicking through before getting to anything interesting. How often are the reports updated?

                                                                                    1. 2

                                                                                      Reports are updated whenever you load the page! I’m shooting for the lowest latency possible between tracking a visit and that visit’s data being available in the system.

                                                                                    1. 1

                                                                                      I rarely remember what our latest release was. I keep this around under git-latest-tag so I can quickly check. It relies on GNU sort’s version sort, so for Mac users gsort is available on Homebrew in the coreutils package.

                                                                                      #!/bin/sh
                                                                                      PREFIX="$1"
                                                                                      git tag | grep "^$PREFIX[0-9]" | gsort -V | tail -n1
                                                                                      
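                                                                                       Usage ends up looking like this (the prefix and output are made up):

                                                                                       git-latest-tag v    # prints the newest "v"-prefixed tag, e.g. v2.3.1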

                                                                                      I also spend a lot of time managing Redis instances, so I keep this in redis-delkeys. It removes all the keys in a database matching a pattern. It takes the same switches as redis-cli for host, port, and db number. I have a couple variants of it for different tasks (moving keys between two different instances, etc.), but they all have this general structure.

                                                                                      #!/bin/sh
                                                                                      set -e
                                                                                      HOST="localhost"
                                                                                      PORT="6379"
                                                                                      DB="0"
                                                                                      while getopts "h:p:n:" opt; do
                                                                                          case $opt in
                                                                                              h)  HOST=$OPTARG;;
                                                                                              p)  PORT=$OPTARG;;
                                                                                              n)  DB=$OPTARG;;
                                                                                              \?) echo "invalid option: -$OPTARG" >&2; exit 1;;
                                                                                          esac
                                                                                      done
                                                                                      shift $(( $OPTIND -1 ))
                                                                                      PATTERN="$@"
                                                                                      if [ -z "$PATTERN" ]; then
                                                                                          echo "pattern required" >&2
                                                                                          exit 2
                                                                                      fi
                                                                                       redis-cli -h "$HOST" -p "$PORT" -n "$DB" --raw keys "$PATTERN" |
                                                                                           xargs redis-cli -h "$HOST" -p "$PORT" -n "$DB" del
                                                                                      
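                                                                                       For example, deleting all the session keys in db 2 on a non-default port (the pattern is quoted so the shell doesn’t expand it):

                                                                                       redis-delkeys -p 6380 -n 2 'session:*'
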
                                                                                      1. 5

                                                                                        Implement a regular expression matcher. Here’s a good treatment of it: http://www.cs.princeton.edu/courses/archive/spr09/cos333/beautiful.html

                                                                                        Write a Brainfuck interpreter. Then write a Brainfuck compiler and have it emit the language du jour. Here’s my try at a compiler in Python that emits C: http://justinpoliey.com/articles/2012-03-30-source-to-source-compilation.html

                                                                                        1. 2

                                                                                           For anyone interested, this is a chapter from a book called Beautiful Code, which I really recommend reading. It’s full of some great examples like this one and some deep insight.