1. 21

  2. 9

    It’s a neat language that I hope to see more of. That said, I haven’t seen any evidence of this:

    “but the argument is that the increase in productivity and reduction of friction when memory-safe mechanisms are absent more than make up for the time lost in tracking down errors, especially when good programmers tend to produce relatively few errors.”

    I have seen studies showing that safe, high-level programming gives productivity boosts. I’ve also seen a few that let programmers drop into unsafe, manual control where they want, with that wrapped in a safe or high-level way. Combining these should probably be the default approach unless one can justify the benefits of not doing so. Also, we’ve seen some nice case studies recently of game developers getting positive results trying Design-by-Contract and Rust. If the author were right, such things would’ve slowed them down with no benefits. Similarly for the game developers that are using Nim with its controllable, low-latency GC.

    1. 4

      It’s not mentioned in the primer, but the compiler has built-in support for linting. You can access the AST during compilation and stop the build, so rather than enforcing good practice by shoehorning it into types, you can enforce good practice by directly checking for misuse.

      I do wonder if people will just end up badly implementing their own type systems on top of that though.
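      A rough sketch of that kind of build-time check, in Python rather than Jai (the stdlib `ast` module stands in for compile-time AST access, and the banned-call rule is just an invented example):

      ```python
      import ast

      BANNED = {"eval", "exec"}  # hypothetical "misuse" this project wants to reject

      def lint(source: str) -> list:
          """Walk the AST and report calls to banned functions, so the build can stop."""
          problems = []
          for node in ast.walk(ast.parse(source)):
              if (isinstance(node, ast.Call)
                      and isinstance(node.func, ast.Name)
                      and node.func.id in BANNED):
                  problems.append(f"line {node.lineno}: call to {node.func.id}()")
          return problems

      errors = lint("x = eval(user_input)\nprint(x)")
      ```

      A real compiler hook would run checks like this over the whole program before code generation and fail the build on any hit.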

      1. 2

        That’s exactly the kind of thing I was describing in my response to akkartik. Neat stuff.

      2. 1

        You’re making some big claims here (even a normative statement), seemingly without much to back it up.

        I have seen studies showing that safe, high-level programming gives productivity boosts.

        Too general. For some tasks high-level languages are clearly desirable; for others they clearly don’t work at all. The question at hand is which level of abstraction is desirable for writing/maintaining a large, complex codebase that has to reliably mangle a huge amount of data about 60 times per second. If a study does not replicate these conditions, it is worthless for answering the question.

        I’ve also seen a few that let programmers drop into unsafe, manual control where they want, with that wrapped in a safe or high-level way. Combining these should probably be the default approach unless one can justify the benefits of not doing so.

        Does writing your engine in C(++) and slapping Lua on top for the game logic count? Many games work like that, it pretty much is the default approach; no immature, unproven languages needed.

        Also, we’ve seen some nice case studies recently of game developers getting positive results trying Design-by-Contract and Rust.

        Unless a serious game in Rust has actually been released and enjoyed some success, we haven’t. A lone Rust enthusiast playing around for a few months and writing a blog post about it does not tell us a whole lot about how Rust might fare under realistic conditions.

        If the author were right, such things would’ve slowed them down with no benefits.

        What makes you so sure they haven’t?

        Every time I read posts like yours I wonder if Rust evangelists and other C haters ever seriously ponder why people use and like C, why it became the giant on whose shoulders so much of modern software stands. I’m not claiming it all comes down to technical superiority, but I do think there is a reason C has stood the test of time like no other programming language in computing’s (admittedly not very long) history. And it certainly wasn’t for a lack of competition.

        Edit: I was reminded of Some Were Meant for C, which should be required reading for anyone developing a new “systems” language.

        1. 3

          “Every time I read posts like yours I wonder if Rust evangelists and other C haters ever seriously ponder why people use and like C, why it became the giant on whose shoulders so much of modern software stands.”

          It’s rare I get accused of not reading enough history or CompSci papers on programming. I post them here constantly. On the contrary, it seems most C programmers didn’t study the history or design of the language, since most can’t even correctly answer “what language invented the philosophy of being relatively small, efficiency focused, and the ‘programmer is in control’ allowing direct manipulation of memory?” They’ll almost always say C, since their beliefs come from stories people told them instead of historical research. The correct answer was BCPL: the predecessor that C lifted its key traits from. Both were designed almost exclusively for the extremely-limited hardware of the time. They included just what could run and compile on weak hardware, modified with the arbitrary, personal preferences of the creators. Here’s a summary of a presentation that goes paper by paper studying the evolution of those designs. The presentation is a Vimeo link in the references. The only thing I retract so far is “not designed for portability,” since a C fan got me a reference where they claimed that goal after the fact.

          “You’re making some big claims here (even a normative statement), seemingly without much to back it up.”

          After accusing me of doing no research, you ironically don’t know about the prior studies comparing languages on traits like these. There were many studies done in the research literature. Despite the variance in them, certain things were consistent. Of the high-level languages, they all killed C on productivity, defect rate, and maintenance. Ada usually beat C++, too, except in one study I saw. Courtesy of Derek Jones, my favorite C vs Ada study is this one since it’s so apples to apples. Here’s one from patrickdlogan where they do the same system in 10 languages, with everything that’s been mainstream beating C in productivity. Smalltalk leaves them all in the dust. The LISP studies showed a similar boost in development speed. Both support quick iterations, working with a live image/data, and great debugging to help with that. LISP also has its full-strength, easy-to-run macros with their benefits. Example of the benefits of a LISP-based OS and Smalltalk live-editing (search for “Apple” for the part about Jobs’ visit).

          Hell, I don’t have to speculate to you, given two people implemented a C/C++ subset in a Scheme in relatively short time while maintaining the benefits of both. Between that, Chez Scheme running on a Z80, and PreScheme as an alternative to C, that kind of shows that we’ve been able to do even what C does with more productivity, safer-by-default semantics, easier debugging, and easier portability since somewhere from the 1980’s to the mid-to-late 1990’s, depending on what you desired. For a modern take, one could do something like ZL above in Racket Scheme or an IDE-supported Common LISP. You’d get faster iterations and more productivity, esp. extensibility, due to better design for these things. Good luck doing it with C’s macros and semantics. That combo is so difficult it just got a formal specification w/ undefined behavior a few years ago (KCC), despite folks wanting to do that for decades. The small LISPs and Pascals had that stuff in the 1980’s-1990’s due to being designed for easier analysis. Other tech has been beating it on predictable concurrency, such as Concurrent Pascal, Ada Ravenscar, Eiffel SCOOP, and recently Rust. Various DSLs and parallel languages do easier parallel code. They could’ve been a DSL in a better-designed C, which is the approach some projects in Rust and Nim are taking due to macro support.

          So, we’re not knocking it over speculative weaknesses. It was a great bit of hackery back in the early 1970’s for getting work done on a machine that was a piece of shit, using borrowed tech from a machine that was a bigger piece of shit. Hundreds of person-years of investment into compiler and processor optimizations have kept it pretty fast, too. None of this changes that it has bad design relative to competitors in many ways. None of it changes that C got smashed every time it went against another language balancing productivity, defect rate, and ease of non-breaking changes. The picture is worse when you realize that the people building that ecosystem could’ve built a better language that FFI’d and compiled to C as an option, to get its ecosystem benefits without its problems. That means what preserves C is basically a mix of inertia, social, and economic factors having almost nothing to do with its “design.” And if I wanted performance, I’d look into synthesis, superoptimizers, and DSLs designed to take advantage of modern, semi-parallel hardware. C fails on that, too, these days, to the point that most people code that stuff in assembly by hand. Those other things have recently been outperforming them, too, in some areas.

          There’s not a technical reason left for its superiority. It’s just social and economic factors at this point driving the use of A Bad Language for today’s needs. Better to build, use, and/or improve A Good Language using C only where you absolutely have to. Or want to if you simply like it for personal preference.

          @nullp0tr @friendlysock I responded here since it was original article that led to re-post of the other one. I’d just be reusing a lot of same points. I also countered the other article in the original submission akkartik linked to in that thread.

          1. 3

            It’s rare I get accused of not reading enough history or CompSci papers on programming.

            And I didn’t. I’m aware of your posts and interests. Rather, I’m accusing you of reading too much and doing too little. In the real world, C dominates. Yet studies have found only disadvantages. Clearly the studies are missing something fundamental. Instead of looking for that missing something, you keep pointing at the studies, as if they contain the answer. They cannot, because their conclusions fly in the face of reality.

            After accusing me of doing no research, you ironically don’t know about the prior studies comparing languages on traits like these.

            I’m aware of (some of) the research. Color me unimpressed. I don’t need a study to know that C is a crappy language for application development. Nor to figure out that C has serious warts and problems all over, even for OS development.

            What I would very much like to see more research on is why C keeps winning anyway. You chalk it up to “socio-economic factors” and call it a day. I call that a job not finished. It doesn’t explain why people use and like products built in C. It doesn’t explain why some very capable engineers defend C with a vigor that puts any Rust evangelist to shame[1]. It doesn’t explain what these socio-economic factors are, but they clearly matter a great deal. Otherwise, we agree, C wouldn’t be where it is today.[2]

            Legacy/being well-established is certainly one factor keeping C alive, but how did it become so dominant? As a computer history buff you might argue that it is because universities and their inhabitants had cheap access to UNIX systems back in the day, and so students learned C and stuck to it when they graduated. There’s a lesson in that argument: having experience in a programming language is a very important factor in how productive you will be in it, which means throwing that experience away by using a very different language is a huge loss. Those who earn their bread and butter by having and using that experience will be very reluctant to do so. Most working programmers simply can/will not afford to learn a radically different language. It is an even bigger loss to society at large than to individuals, because you lose teachers, senior engineers and others who pass on experience too. There are other network/feedback effects (ignoring the obvious technical ones), but I think you get the point.

            The consequence is that small incremental improvements on existing (and proven) technology are vastly preferable to top-down fundamental redesigns, even if that means the end result isn’t anywhere close to pretty. This has been proven again and again across industries and life in general. x86 is another prominent example.

            And then there is Go. Here’s a language that is built by people who understand C and the need to ease adoption. And look! People are actually using it! Go has easily replaced more C code on my machines than all the strongly typed, super-duper safe languages combined and probably in the internet-at-large too. And, predictably, the Haskell enthusiasts rage and shake their fists at it, because it isn’t the panacea they imagine their favorite strongly-typed language to be. Or in other words: because they do not understand, and are unwilling to learn, the most basic aspects about what keeps C alive.

            Anyway, I wanted to circle back to your original claim that memory-safety mechanisms do not inhibit productivity in game programming, but this is already pretty long and I’m hungry. Maybe I’ll write another comment to address that later.

            [1] And they make good on their claims by building things that are actually successful, instead of sticking to cheap talk about the theoretical advantages and showing off their skills at doing weird stuff to type systems.

            [2] That does not mean I don’t believe there are important technical aspects to C which are missing in languages intended to replace it. See the “Some were meant for C” paper.

            1. 0

              “What I would very much like to see more research on is why C keeps winning anyway.”

              It’s not “winning anyway” any more than Windows keeps winning on the desktop, .NET/Java in enterprise, PHP in web, and so on. It got the first-mover advantage in its niche. It spread like wildfire via UNIX and other platforms that got huge. It was a mix of availability, hacker culture, and eventually the open-source movement. Microsoft and IBM were using monopoly tactics, suing or acquiring competitors that copied their stuff with different implementations or challenged them. Even companies like Borland, with Pascals outperforming C on a balance of productivity and speed, saw the writing on the wall and jumped on the bandwagon, adding to it. The momentum and effects of several massive trends moving together, which all found common ground in C at that time, led to oligopolies of legacy code and decades of data locked into it. They have a massive, locked-in base of code exposing C and C++. To work on it or add to it, the easiest route was learning C or C++, which increased the pool of developers even more. It’s self-reinforcing after a certain point.

              I don’t have hard data on why non-UNIX groups made their choices. I know Apple had Pascal and LISP at one point. They eventually went with Objective-C for some reason, with probably some C in there in the middle. IBM was using PL/S for some mainframe stuff and C for some other stuff. There are gaps in what I can explain, but each one I see is a company or group jumping on it. Then, it gets more talent, money, and compilers down the line. What explains it is Gabriel’s Worse is Better approach, herd mentality of crowds, network effects, and so on. You can’t replicate the success of C/C++ as is, since it appeared at key moments in history that intersected. They won’t repeat. There will be similar opportunities where something outside a language is about to make waves, where a new language can itself make waves by being central to it or a parasite on it. Economics, herd mentality, and network effects tell us it will work even better if it’s built on top of an existing ecosystem. A nice example is Clojure, which built something pretty different on top of Java’s ecosystem. Plus all the stuff allowing seamless interoperability with C, building on JavaScript in browsers, or scripting integration with something like Apache/nginx.

              So, it’s pretty clear from the lack of design to the enormous success/innovation to the stagnation of piles of code that C’s gone through some cycle of adoption driven by economics. There’s lasting lessons that successful alternatives are using. Once you know that, though, there’s not much more to learn out of necessity. Plenty for curiosity but the technical state of the art has far advanced. That’s about the best I can tell you about that part of your post after a long day. :)

              “There’s a lesson in that argument: having experience in a programming language is a very important factor in how productive you will be in it, which means throwing that experience away by using a very different language is a huge loss.”

              Now we’re in the “C is crap with certain benefits from its history that might still justify adoption” territory. This is essentially an economic argument: you’ve invested time in something that will pay dividends, or you don’t want to invest (or don’t have) the time to get a new language to that point. You see, if it’s designed right, this isn’t going to hurt you that much. The safe alternatives to C in language and libraries kept all that investment while making some stuff safe-by-default. There are some languages that are just easy to understand but compile to C. Then, there are those that are really different that still challenge your claim. The apples-to-apples study I linked on Ada and C was specifically concerned about their C developers doing worse in Ada due to a lack of experience. However, those developers did better, since the language basically stopped a lot of problems that even experienced developers kept hitting in C because it couldn’t catch them. At least for that case, this concern was refuted even with a safe, systems language with a steep learning curve.

              So, it’s not always true. It is worth considering carefully, though. Do note that my current advice for C alternatives is keeping close enough to C to capitalize on existing understanding, code, and compilers.

              “Most working programmers simply can/will not afford to learn a radically different language.”

              That’s a hypothesis that’s not proven. If anything, I think the evidence is strongly against you with programmers learning new languages and frameworks all the time to keep their skills relevant. Many are responding positively to the new groups of productive, safe languages such as Go, Rust, and Nim. Those fighting the borrow-checker in Rust about to quit usually chill when I tell them they can just use reference counting on the hard stuff till they figure it out. It’s a fast, safe alternative to Go at that point. If they want, they can also turn off the safety to basically make it a high-level, C alternative. There’s knobs to turn to reduce difficulty of using new, possibly-better tooling.

              “People are actually using it! “

              Go was a language designed by a multi-billion dollar corporation whose team had famous people. The corporation then pushed it strongly. Then, there was adoption. This model previously gave us humongous ecosystems for Java and then C#/.NET. They even gave Java a C-like syntax to increase adoption rate. Go also ironically was based on the simple, safe, GC’d approach of Niklaus Wirth that Pike experienced in Oberon-2. That was a language philosophy C users fought up to that point. Took a celebrity, a big company, and strong focus on tooling to get people to try what was essentially a C-like Oberon or ALGOL68. So, lasting lessons are getting famous people involved, have big companies with platform monopolies pushing it, make it look kind of like things they like, and strong investments in tooling as always.

              “Anyway, I wanted to circle back to your original claim that memory-safety mechanisms do not inhibit productivity in game programming, but this is already pretty long and I’m hungry. Maybe I’ll write another comment to address that later.”

              I’m interested in that. Remember, though, that I’m fine with being practical in an environment where high resource efficiency takes priority over everything else. In that case, the approach would be safe-by-default with contracts for stuff like range checks or other preconditions. Tests generated from those contracts, with automatic checks showing you the failures. If performance testing showed it too slow, then the checks in the fast path can be removed where necessary to speed it up. Throw more automated analysis, testing, or fuzzing at those parts to make up for it. If it has a GC, there are low-latency and real-time designs one might consider. My limited experience doing games a long time ago taught me memory pools help in some places. Hell, regions used in some safe languages sound a lot like them.

              So, I’m advocating you use what safety you can by default dialing it down only as necessary to meet your critical requirements. The end result might be way better than, slightly better than, or equivalent to C. It might be worse if it’s some combo of a DSL and assembly for hardware acceleration which simply can have bugs just due to immaturity. Your very own example of Go plus recent ones with Rust show a lot of high-performance, latency-sensitive apps can go safe by default picking unsafety carefully.
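              A toy sketch of that approach in Python (all names invented): precondition contracts checked everywhere by default, with a single knob to strip them from a profiled hot path once tests and fuzzing cover the gap:

              ```python
              CHECKS_ENABLED = True  # flip to False for a profiled hot path

              def requires(predicate, message):
                  """Precondition contract; returns the bare function when checks are off."""
                  def wrap(fn):
                      if not CHECKS_ENABLED:
                          return fn  # fast path: zero per-call overhead
                      def checked(*args):
                          if not predicate(*args):
                              raise AssertionError(f"{fn.__name__}: {message}")
                          return fn(*args)
                      return checked
                  return wrap

              @requires(lambda idx, buf: 0 <= idx < len(buf), "index out of range")
              def read_sample(idx, buf):
                  return buf[idx]
              ```

              The contract doubles as checkable documentation, and a test generator could turn each predicate into boundary-case tests automatically.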

      3. 2

        For anyone interested, Jonathan Blow also streams some of the development and posts recordings to YouTube here!

        1. 2

          I know JB wants nothing to do with interacting with a needy bike-shedding community, but I sure wish he’d at least have a write-only dump of this for people to test-drive if they’re willing to go it alone :-/

          1. 3

            Having seen what happened with Paul Graham’s Arc, my sympathies are entirely with Jonathan Blow. Particularly if the author already has a reputation to lose, a write-only dump is far more of a liability than an asset.

            1. 1

              Yeah I totally get it, I’m just bummed.

          2. 2

            The whole thing is great, but one idea that seems particularly useful in arbitrary languages without regard to how it fits with other features is to specify the list of globals used by a function. In Python-like syntax, imagine this:

            def draw_quad(origin, left, up) [m]:
                m.Position3fv(origin - left - up)
                m.Position3fv(origin + left - up)
                m.Position3fv(origin + left + up)
                m.Position3fv(origin - left + up)
            

            Now you get a guarantee that the function uses zero globals besides m.
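            CPython can approximate that guarantee at definition time: the compiled function records which names it loads as globals, so a (hypothetical) decorator can reject anything outside an allowed list. A sketch:

            ```python
            import builtins
            import dis

            def uses_globals(*allowed):
                """Raise at definition time if fn reads globals outside `allowed` (builtins exempt)."""
                permitted = set(allowed) | set(dir(builtins))
                def check(fn):
                    used = {ins.argval for ins in dis.get_instructions(fn)
                            if ins.opname == "LOAD_GLOBAL"}
                    undeclared = used - permitted
                    if undeclared:
                        raise NameError(f"{fn.__name__} uses undeclared globals: {sorted(undeclared)}")
                    return fn
                return check

            m = []  # stand-in for the global renderer

            @uses_globals("m")
            def draw_quad(origin, left, up):
                m.append(origin - left - up)
                m.append(origin + left + up)
            ```

            A function body that touched any global besides m would fail to even define, which gives you the same compiler-enforced documentation, just bolted on rather than built in.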

            1. 4

              In PHP you have something like that: global variables are not accessible from inside functions unless you specifically allow each one you want

              $m = new M();
              function draw_quad($origin, $left, $up){
                  global $m; // or $m = $GLOBALS['m'];
                  $m->Position3fv($origin - $left - $up);
                  $m->Position3fv($origin + $left - $up);
                  $m->Position3fv($origin + $left + $up);
                  $m->Position3fv($origin - $left + $up);
              }
              

              in practice, I haven’t found global variables useful other than the contextual ones ($_POST, $_GET, $_SESSION), which are superglobal and always defined

              1. 2

                I’d like to see something similar but generalised to “contextual” environmental bindings, rather than traditional global vars. And a compiler that ensures that somewhere in all call chains the binding exists. But you might want a global in some cases, or a “threadlocal”, or an “import” of some sort, or something like the context in react, etc.

                Some mechanism in which the compiler makes sure the environmental dependencies are fulfilled, without necessarily requiring that value be propagated explicitly through each owner/container between provider and consumer.
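                Python’s `contextvars` covers the “contextual binding” half of this; the missing half is exactly what’s described above, a compile-time guarantee that some provider bound the value, since here you only find out at call time:

                ```python
                from contextvars import ContextVar

                renderer: ContextVar = ContextVar("renderer")  # no default: a provider must bind it

                def draw(point):
                    # raises LookupError if no provider in the call chain bound the context
                    renderer.get().append(point)

                sink = []
                token = renderer.set(sink)  # provider fulfills the dependency...
                draw((1, 2, 3))             # ...consumer uses it without explicit plumbing
                renderer.reset(token)
                ```

                The value never passes through intermediate callers, which is the point; a checked version would reject any call chain lacking the `renderer.set` step at compile time.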

                1. 4

                  I can’t find it but Scala has an extra context var that is passed with function invocation.

                  And early Lisps had dynamic scope, meaning that a var is bound to the next occurrence up the call stack.

                  Both of these mechanisms supply the building blocks for AoP, so that a programmer can mix in orthogonal properties.

                  1. 3

                    And early Lisps had dynamic scope, meaning that a var is bound to the next occurrence up the call stack.

                    Today they still have it - see DEFVAR and DEFPARAMETER in Common Lisp.

                    1. 2

                      I can’t find it but Scala has an extra context var that is passed with function invocation.

                      In Scala you can use implicit parameters for this:

                      def foo(origin: Vec3, left: Vec3, up: Vec3)(implicit m: Context) {
                          m.Position3fv(origin - left - up)
                          m.Position3fv(origin + left - up)
                          m.Position3fv(origin + left + up)
                          m.Position3fv(origin - left + up)
                      }
                      

                      In Haskell you could use a reader/writer/state monad thingy. In Koka or Eff you could use effects.

                      1. 2

                        Yeah, Scala’s context is probably closest to what I’m thinking of, from what I know of it.

                      2. 4

                        You can kinda get this with effect types. Effect types let you label certain functions as using resource A or B, and then you can have a documentation mechanism for what dependencies are used, without passing stuff around.

                        It can still get a bit heavy (at least it is in Purescript), but less so than dependency injection

                      3. 1

                        A compiler or analyzer should be able to tell you that just from what variables or expressions go into the function. A previously-declared global would be one of the arguments. Why do we need to declare it again in the function definition?

                        1. 1

                          See my final sentence. “Now you get a guarantee that the function uses zero globals besides m.” The program documents/communicates what it uses. The compiler ensures the documentation is always up to date.

                          In my example, m is not one of the arguments. Because m is a global, lexically accessible anyway in the body of the function. There’s no redundancy here.

                          1. 0

                            I’m saying a compiler pass should be able to do the same thing without a language feature. I think it won’t be important to that many people. So, it will probably be optional. If optional, better as a compiler pass or static analysis than a language feature. It might also be an easy analysis.

                            1. 3

                              You’re missing the point. It’s a list of variables that the code can access anyway. What would a compiler gain by analyzing what globals the function accesses?

                              There are many language features that help the compiler do its job. This isn’t one of them. The whole point is documentation.

                              (Yes, you could support disabling the feature, using, say, syntax like [*]. C++ has this. Just bear in mind that compilers also don’t require code comments, and yet comments are often a good idea. Similarly, even languages with type inference encourage documenting types in function headers.)

                        2. 1

                          What if the Position3fv method uses global variable n? You also need to specify that for draw_quad. This quickly blows up like checked exceptions in Java and people want shortcuts.

                          1. 2

                            The problem with checked exceptions is that they discourage people from using exceptions. But there’s no problem in discouraging people from using global variables.

                            I don’t mean to imply that I want all code to state what it needs all the time. In general I’m a pretty liberal programmer. It just seems like a good idea to give people the option to create “checkable documentation” about what globals a piece of code requires.
