1. 17
  1.  

  2. 8

    One of the things I like about programming in LuaJIT is that it lets me care about these things when I want to. I recently spent some time micro-optimizing a few FFI bindings to reuse memory instead of creating fresh garbage on every call. This is probably of limited benefit considering I’m hemorrhaging memory elsewhere, but… it irks me to see low-level code be wasteful. At least I know that if I spent the effort to make the application layer more efficient, it wouldn’t be wasted effort because of inherent inefficiencies at the library level.
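
    The idea, sketched in Go purely for illustration (hypothetical names, not the actual LuaJIT FFI bindings): allocate a scratch buffer once and reuse it across calls instead of allocating fresh on every call.

    ```go
    package main

    import "fmt"

    // scratch is allocated once and reused on every call, instead of creating
    // fresh garbage for the collector to chase each time.
    var scratch = make([]byte, 4096)

    // checksum stands in for whatever work the binding does with the buffer.
    func checksum(buf []byte) (total int) {
        for _, b := range buf {
            total += int(b)
        }
        return total
    }

    // wasteful allocates a new buffer per call.
    func wasteful() int {
        return checksum(make([]byte, 4096))
    }

    // frugal reuses the preallocated scratch buffer.
    func frugal() int {
        return checksum(scratch)
    }

    func main() {
        fmt.Println(wasteful(), frugal())
    }
    ```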

    1. 2

      > it irks me to see low-level code be wasteful.

      Have you seen the proposed GC for future LuaJIT? It should make the already fantastic LuaJIT even less wasteful and more efficient. But consider yourself already in waaaay better shape than most of the developers this post questions. :)

      1. 1

        I don’t think OCaml is as good at this, but it’s something I really like about OCaml as well. The memory model is very easy to understand, which makes memory consumption easy to predict, and it has well-known tricks (and not many of them) for reducing memory usage and indirection. Seems like this is coming back with Go and Rust, too.

      2. 3

        When 4GB of RAM became the minimum and 16GB or more became relatively common, especially on servers and gaming computers. When languages became simpler and allowed the programmer to mostly forget about memory management. I think JavaScript is a prime example of this. Sure, you can run into memory issues, but the vast majority of websites doing simple “is there valid data in this field?” queries are generally not going to run into them.

        1. 15

          Careful. This is how people spend five figures on Heroku to support 100 users.

          1. 8

            *Puts on my Heroku hat*

            I couldn’t agree more. Though I must point out (and I know it’s not your point) that it’s not just Heroku: people are generally over-provisioned for the amount of work their applications do, due to sloppiness, inefficient code, and a general posture that “adding more hardware is cheaper than hiring more programmers and optimizing stuff.”

            Despite the fact that I write a lot of Go, I don’t particularly like it or think it’s a great language, but I’m glad it’s getting some mindshare, if only because suddenly (part of) the world is seeing sanely again. You shouldn’t need 50 app servers to serve 1,000 r/s. As someone who has sighed at every “How to increase request concurrency by using Unicorn” post and its ilk, I find posts like this super refreshing.

            Furthermore, I encountered something today that was dragging down an application I’m working on (in Go, running on Heroku). My co-developer introduced a library to handle “graceful” termination of our HTTP server. It added a bunch of complexity to the listenAndServe loop, but upon code review it didn’t seem to be doing all that much, so I +1’d its inclusion.

            Well, it turns out that for the last month we’ve been blaming another team for our capacity problems, when in fact this library, which apparently had not been adequately tested under the amount of load we’re seeing, crippled our application to the point where the kernel would accept connections and the Heroku router would time them out (30 seconds) before the socket was even passed off to read the request. The previous way we handled “graceful” termination was less than 10 lines and suffers no such issues.

            The point? Some of the inefficient code that leads to five-figure spends on Heroku could easily be avoided if a developer just spent a couple of hours hacking out a simpler solution instead of adopting some half-baked library that promises to solve all their problems.
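
            For what it’s worth, the small version really can stay small. This isn’t our code, just a minimal sketch of graceful termination using the standard library’s http.Server.Shutdown (available in Go 1.8 and later): stop accepting new connections, give in-flight requests a bounded window to drain, then exit.

            ```go
            package main

            import (
                "context"
                "log"
                "net/http"
                "os"
                "os/signal"
                "syscall"
                "time"
            )

            func main() {
                srv := &http.Server{Addr: ":8080"} // nil Handler falls back to http.DefaultServeMux

                // Serve in the background; Shutdown below makes ListenAndServe
                // return http.ErrServerClosed once draining begins.
                go func() {
                    if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                        log.Fatal(err)
                    }
                }()

                // Block until SIGINT or SIGTERM.
                stop := make(chan os.Signal, 1)
                signal.Notify(stop, os.Interrupt, syscall.SIGTERM)
                <-stop

                // Stop accepting new connections and give in-flight requests a
                // deadline to finish, kept under the 30-second router timeout.
                ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
                defer cancel()
                if err := srv.Shutdown(ctx); err != nil {
                    log.Printf("shutdown: %v", err)
                }
            }
            ```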

            1. 1

              I blame “Agile”, open-plan offices, and the commodity-programmer culture (all of which kind of lump together), and not particular languages or tools, for the epidemic of five-, six-, and seven-figure monthly AWS and Heroku bills. Sometimes the right approach is to ignore memory management and get a simple prototype out quickly, and sometimes it’s worthwhile to start profiling and optimizing and paying attention to the heap-usage graphs.

              The problem isn’t that we’ve forgotten one level or another of focus and abstraction, but that our industry is flooded with people who don’t know how to tell the difference between the case for one and the case for the other. Commodity line-of-business ScrumDrones are always bad programmers, but they tend to be bad in different ways; I’ve met some who never leave the browser, and I’ve met others who can optimize Java code and tune JVMs but have no architectural sense.

            2. 7

              Hah, so, funny story… I actually had an intern implement manual typed-array memory management in JavaScript, because reasons. Called it a colosseum, because it was a collection of arenas, har har.

              The inability of JS to actually report memory usage for our use case drove us to that. We needed (similar to how games do) to have complete control of slabs of memory to make damned sure that future requests wouldn’t fail.

              EDIT: The intern was and still is an absolutely brilliant programmer. I hope that they don’t waste their talents in either academia or the current app gold rush.
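
              To illustrate the idea only (this isn’t the intern’s JavaScript, and it’s Go rather than typed arrays; the names and sizes are invented): an arena hands out slices carved from one preallocated slab and is reset wholesale, so you know up front whether a future request can run out of room.

              ```go
              package main

              import (
                  "errors"
                  "fmt"
              )

              // Arena bump-allocates from a single preallocated slab, so the
              // application controls exactly how much memory a request may use.
              type Arena struct {
                  slab []byte
                  off  int
              }

              func NewArena(size int) *Arena {
                  return &Arena{slab: make([]byte, size)}
              }

              // Alloc returns n bytes from the slab; it fails loudly instead of
              // growing, which is the point: no surprise allocations later.
              func (a *Arena) Alloc(n int) ([]byte, error) {
                  if a.off+n > len(a.slab) {
                      return nil, errors.New("arena exhausted")
                  }
                  buf := a.slab[a.off : a.off+n : a.off+n] // cap the slice so appends can't bleed past it
                  a.off += n
                  return buf, nil
              }

              // Reset makes the whole slab reusable for the next request without
              // handing anything back to the garbage collector.
              func (a *Arena) Reset() { a.off = 0 }

              func main() {
                  a := NewArena(1 << 20) // one 1 MiB arena; a "colosseum" would hold many
                  buf, err := a.Alloc(4096)
                  fmt.Println(len(buf), err)
                  a.Reset()
              }
              ```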

              1. 5

                People stopped caring about memory long before we got 4GB of RAM in our systems. I’d put it somewhere around the popularization of Java for servers, and somewhere around jQuery’s explosion in popularity for desktops. Once people are given languages that purport to do the memory management for them, they very quickly forget that the machine even has memory. Then, when GC pauses start driving people mad, tuners lower the rate of collection and crank up the need for the system to have gobs and gobs of memory around.

                It’s not that there isn’t a better way to do these things; it’s that for a long time now memory has been pretty fungible as a resource: you can practically always have more of it if you need it. What people have really been bad about, though, is realizing that going to memory actually has a pretty significant cost associated with it, and now that (solid-state) disks are finally starting to catch up to memory in speed, that cost is starting to show up again in our software.

                1. 2

                  I think this is true for many people, but it ends with The Tragedy of the Commons.