1. 23

I completely disagree with the author’s implication that it’s unwise to bet $100,000,000 on a less mainstream language, but, as strongly as I disagree, I found this worth discussing.

  2. 16

    I always find this author’s advice to be profoundly limited and limiting. It irritates me so badly that I haven’t read his writings for years. He’s not wrong in this particular engineering discussion, but he doesn’t encourage moving out of the box.

    I’m not really in this industry to ship cookie-cutter product efficiently with proven tech: I’m in it to use the best technology to build (hopefully) cutting-edge stuff. Hacking, if you will. :)

    1. 10

      i have the opposite feeling - i’m personally a huge fan of most of his writings, but i think this one missed the mark badly. it’s analogous to the difference between the one-shot and iterated prisoner’s dilemma - sure, if i had one high-stakes project to deliver in a short time frame, it would make more sense to use a language with a strong userbase, for all the ecosystemic stuff that brings you. but if you have to use a language for many projects, over several years, it definitely makes sense to pick a good language and invest in helping to develop the ecosystem yourself. if nothing else, it prevents you getting stuck in a local maximum.

      this quote from shaw also comes to mind:

      The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

    2. 11

      The fallacy here is that you have to allocate all $100,000,000 toward one technology. Any project of that scale is inevitably a combination of many technologies, some offering higher “returns” and others, less risky, that you can hedge with. Good software architecture finds the efficient frontier in this space, and there are many optimal technology allocations.

      This article taken to its logical conclusion is absurd. The argument to invest development time only in extremely low-risk technologies (e.g., C) is simply not optimal. It’s also an argument against all progress in our profession.

      1. 7

        Any project of that scale is inevitably a combination of many technologies, some offering higher “returns” and others, less risky, that you can hedge with. Good software architecture finds the efficient frontier in this space, and there are many optimal technology allocations.

        A good (but small, admittedly) example here is people using Coq to prove stuff correct for a Haskell library. Or using TLA+ to validate a distributed system written in Java. etc. etc.

        1. 4

          I agree, and the “bet $100,000,000 on $LANGUAGE” premise is also absurd, because it ignores the question of what the problem is. That has a major effect on (a) which languages are suitable, (b) which languages are best, and (c) the degree (if any) to which the project demands commitment to one or a small number of languages.

          Would I bet $100M that Haskell can replace C for ultra-low-latency trading? No, I wouldn’t. I’d use C and hire C programmers and stay out of their way rather than trying to convince them to use a garbage-collected language (even if it’s a very good one).

          Would I bet $100M that Haskell can outperform the mainstream languages (Java, Ruby) in terms of output/effort ratio when all costs (including maintenance and technical debt) are included? Absolutely. And if I were starting the company from scratch, I’d seriously consider Haskell (although I wouldn’t try to turn a productive Ruby shop into a Haskell shop). So, in that sense, I would bet a company on it.

          If it were $100, my bets would be the same; where OP is right is in saying that we shouldn’t typically use different tools for small bets than we’d use for big bets.

          1. 4

            Would I bet $100M that Haskell can replace C for ultra-low-latency trading?

            C maybe has a tighter absolute limit on what you can do here, but in practical terms, Haskell has served people well in latency-sensitive trading and ad tech. There’s a bit of hackery involved if you start really nailing things down (explicit thread yields, forced GCs, etc.).

            Anthony Cowley programs robots with Haskell. The only special treatment required, when I talked to him, was that he forces a GC when he knows he has some spare time in a read/respond loop. This stuff works better for those use cases than people realize, I think.
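
            To make that concrete, here’s a minimal sketch of the pattern in OCaml (the other language under discussion in this thread; Haskell’s equivalent call is System.Mem.performGC). Gc.full_major is a real stdlib function, but the loop and handler are made-up stand-ins, not Cowley’s actual code:

              (* Echo loop that forces a major collection at a known idle
                 point, so GC pauses never land mid-request. *)
              let handle line = String.uppercase_ascii line  (* stand-in for real work *)

              let rec serve () =
                match input_line stdin with
                | exception End_of_file -> ()
                | line ->
                    print_endline (handle line);
                    (* There is slack here before the next request arrives,
                       so pay the collection cost now, on our own terms. *)
                    Gc.full_major ();
                    serve ()

              let () = serve ()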

            1. 1

              What’s latency sensitive about ad tech?

              1. 6

                Quite a lot. It’s not very low latency next to HFT, but the goal of an ad network is, of course, to have a large network, do realtime bidding/calculations to determine the best ad to show at that particular moment, and do it quickly so the page loads faster.

                1. 1

                  I forgot about bidding; I was thinking more of just needing fast-loading ads. Thanks.

                  Thanks to bitemyapp and shanemhansen as well.

                2. 1

                  What’s latency sensitive about ad tech?

                  Compared to HFT, nothing at all, but there’s a push towards more responsive ad loads that aren’t as disruptive. Ads that pop in noticeably later than the content are obnoxious, and you get better conversion rates if you’re prompt.

                  Also if you’re participating in a bidding system, being fast pays.

                  1. 1

                    When you are bidding for a spot you have to bid and win an auction in less than 100ms and your answers have to be good enough to be profitable. 100ms is not that much time if you want to make a decision based on the history of all the user’s interactions (products viewed, orders placed).

                3. 4

                  Ultra-low-latency trading has been using FPGAs for years now, so even they wouldn’t bet on C. But for normal low-latency stuff, C++ is fine (I’ve never heard of anyone using plain C), and so are OCaml and friends. Even so, this is incredibly niche. Just because there’s a lot of money there doesn’t make it generally applicable.

                  Haskell is used in real trading systems. So I’d make that bet too. It’s already been proven, multiple times.

                  1. 2

                    Would I bet $100M that Haskell can replace C for ultra-low-latency trading? No, I wouldn’t.

                    Jane Street seems to be doing just fine with OCaml.

                    1. 3

                      I do not work at Jane Street, so I do not know what their latency numbers actually are, but the limited information I’m aware of suggests that they are not doing “ultra-low-latency trading”; they are probably about an order of magnitude away from those groups, which is an eternity in HFT.

                    2. 1

                      True, I wouldn’t build the whole trading system on top of a managed runtime. But every production trading system I’ve worked on is a combination of something like six technologies: Java for the OMS integration and message bus, C++ for the execution models, some high-level orchestration in Python/Scala/Haskell, and probably some Excel for trade analytics.

                      Again, that’s why the article doesn’t make sense: it presents the premise of an “all or nothing” bet, which never happens.

                  2. 6

                    Seeing as it was written in 2007, I’m not sure whether the author’s ideas have changed, but a few things to consider:

                    Python and Erlang get immediate boosts for having been used in large commercial projects

                    Erlang is interesting here because it (mostly) literally is a $100m bet on a pet programming language. The legend is that it was developed in Ericsson’s CSLab and ended up beating out the C++ software. A physical product that governments buy and that has to run for decades was built with it.

                    I’d become very open to writing key parts of an application in C, because that puts the final say on overall data sizes back in my control, instead of finding out much later that the language system designer made choices about tagging and alignment and garbage collection that are at odds with my end goals.

                    It’d be great to know whether the author still agrees with this and what “key parts” means. I believe it’s important to pick a language that interops with C easily, but writing in C is unsafe and offers marginal benefits at this point. It depends on the problem, but there are really very few problems that are solved better in C these days.

                    The problem is that “float” in OCaml means “double.” In C it would be a snap to switch from the 64-bit double type to single precision 32-bit floats, instantly saving hundreds of megabytes.

                    This scenario is presented as though it were free. C comes with a whole host of other problems that will just be solved for you in something like OCaml. On top of that, OCaml has pretty good FFI into C, so you can always make an opaque type to do this bit of work, protected in a safe OCaml shell. So one doesn’t really have to change the OCaml compiler; they can just use C for the portion that needs C and be safe otherwise. I honestly don’t remember what programming in 2007 was like (it all blurs together), so I don’t know where this attitude was at that point.
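
                    In fact, in today’s OCaml you don’t even need C for this particular example: a Bigarray with kind float32 stores unboxed single-precision floats, halving memory versus OCaml’s native 64-bit float, and the same buffer can be shared across the C FFI boundary if needed. A minimal sketch (my own illustration, not from the article):

                      let () =
                        let n = 1_000_000 in
                        (* A million single-precision floats: ~4 MB, versus
                           the ~8 MB an array of native OCaml floats needs. *)
                        let a = Bigarray.Array1.create Bigarray.float32 Bigarray.c_layout n in
                        for i = 0 to n - 1 do
                          a.{i} <- float_of_int i  (* stored as a 32-bit float *)
                        done;
                        Printf.printf "a.{42} = %f\n" a.{42}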

                    To defend OCaml, my language of choice, it has a great property: the translation between the OCaml code and the machine code is fairly understandable, meaning you can understand what your program is doing at runtime. Haskell, for all its strengths, is much less easy to understand this way, IMO. This gives you a lot of the value people want out of C, namely understanding what happens at runtime.

                    Libraries are much more important than core language features.

                    This hasn’t completely matched my experience. I have worked on large projects in Erlang, Python, and Java, and my personal work is in OCaml. Java, by far, has the most libraries and tooling around it, and yet it has been the most difficult, for me, to complete work in: I often get stuck working out why a library doesn’t work rather than solving the problem I actually have.

                    Despite having to write a lot of libraries and tooling in Erlang, the features of the language make it incredibly productive to work in. It’s also a very safe language, both in terms of memory safety and because process isolation at the language level makes it easy to isolate crappy libraries from the rest of the program. Most of the libraries are very light and non-intrusive.

                    OCaml, I have found, I am very productive in. It does not have the isolation that Erlang gives, but if a library doesn’t exist in OCaml, writing it is really not that difficult. It depends: I mostly do backend things, which don’t involve interacting with the OS in a sophisticated way (like a GUI toolkit would), so that helps.

                    1. 3

                      To defend OCaml, my language of choice, it has a great property: the translation between the OCaml code and the machine code is fairly understandable, meaning you can understand what your program is doing at runtime.

                      This might change a bit as the compiler begins to pick up more optimizations, as in the flambda branch. Overall, though, I think OCaml will continue to exhibit easier-to-understand runtime behaviour than Haskell.

                    2. 5

                      The idea that the decision procedure you use when presented with an opportunity to make $100,000,000 should be the same as that you use every day is farcical.

                      1. 2

                        Isn’t that the point though? The huge payout is to highlight the cost of failure. It’s reasonable to determine that failure is an acceptable outcome, but it’s a lot less reasonable (imo) to disregard the possibility entirely. That’s how a lot of junk gets built.

                        Your decision process should be the same. The decision made can vary based on varying inputs, but the process should not.

                      2. 3

                        As I understand it, Servo is basically a $100,000,000 bet on Rust.

                        1. 1

                          I don’t think Mozilla has that kind of folding money.

                          1. 5

                            In the OP, the money refers to the payout, not the investment. Mozilla isn’t betting $100,000,000; it is betting on Rust, where the potential payout is $100,000,000.

                            1. 1

                              Let me rephrase: I don’t think we live in a world where any decision Mozilla takes has anything like a $100,000,000 impact on anything. They are a spent force.

                        2. 2

                          With $100,000,000 on the line, I absolutely wouldn’t be choosing my programming language on the basis that it might let me save a factor of 2 of memory usage. That wouldn’t even be in my top ten criteria. With $100,000,000 I can, worst case, buy twice as much hardware. The author is focusing on one particular failure scenario because it’s easy to imagine and/or favours his own pet language, but the proportion of programming projects that fail that would have succeeded if they could only halve the memory requirement is microscopic.

                          The risk/reward for $100,000,000 is different, so of course it makes sense to use different approaches. Suppose that doing a project in language A will have an 80% chance of success and cost $10,000, and doing the project in language B will have a 99% chance of success and cost $100,000. If the payoff is $20,000, of course you choose language A. If the payoff is $100,000,000, of course you choose language B.
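
                          Working through those (made-up) numbers as expected values, assuming failure pays nothing and the cost is sunk either way, a quick OCaml sketch:

                            let ev p payoff cost = p *. payoff -. cost

                            let () =
                              (* $20,000 payoff: the cheap, riskier language A wins. *)
                              Printf.printf "A, small payoff: %9.0f\n" (ev 0.80 20_000. 10_000.);     (*    6000 *)
                              Printf.printf "B, small payoff: %9.0f\n" (ev 0.99 20_000. 100_000.);    (*  -80200 *)
                              (* $100,000,000 payoff: the expensive, safer language B wins. *)
                              Printf.printf "A, big payoff:   %9.0f\n" (ev 0.80 100_000_000. 10_000.);  (* 79990000 *)
                              Printf.printf "B, big payoff:   %9.0f\n" (ev 0.99 100_000_000. 100_000.)  (* 98900000 *)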

                          (FWIW: 2-4 years ago I would’ve picked Scala for a $100,000,000 project, and I would have been right. Today I would probably still pick Scala, but I would seriously consider Idris).

                          1. 3

                            Yeah, on the risk/reward side, if anything I would be more open to “unusual” proposed solutions with $100m than with $10k. With a small-budget project, non-recurring engineering expenses eat you alive, even fairly modest ones, so it’s a huge risk to use any infrastructure that isn’t tried and true, ideally in something very close to exactly what you want to do with it. If you run into brokenness or missing functionality, you can’t really afford to fix it. With a $100m budget, things look a lot different: little bits of brokenness or missing functionality that require $10k or $20k of engineering effort to fix are no big deal if the infrastructure is otherwise a better overall choice.

                          2. 1

                            A lot of this depends on what your budget is for the possibility of getting that $100,000,000. Assume the projected cost of the project is nearly $100,000,000, mostly eaten by programming and QA costs; that is, this is a lowish-margin consulting shop where most of your “margin” goes to paying people, but you are not also responsible for hardware, deployment, sales, etc. Then spending ten to twenty million on staff to do compiler development and to build libraries, tooling, and infrastructure is a no-brainer, one that solves the problem not just for this job but for many future jobs. At that point you just have to ask whether the pet programming language (plus $20 million of support) will do better with nearly $80 million worth of programming, debugging, QA, etc. than a non-pet language would do with nearly $100 million worth. Under those constraints, I would generally choose my pet language. I might even allocate it closer to a 50/50 split in budgeting.