1. 4
  1.  

  2. 2

    I appreciate mbenbernard posting here. This is a small example of a really broad problem. If you do networking with untrusted servers (or even trusted servers), you have to ensure your memory usage is bounded and assume the thing on the other side can misbehave.

    This reminds me of someone seeing an OOM error because random HTTP data was interpreted as a 32-bit number indicating an allocation size: https://rachelbythebay.com/w/2016/02/21/malloc/

    There are multiple approaches to getting a good level of reliability. The dead simple one is using processes and relying on Linux. It would have been trivial to write an app that does this:

    timeout 30s curl foo.com | readlimit 200kb | ....
    

    While that sort of setup is pretty fun, it’s high overhead. In practice I take the careful coding approach. In Go I generally use the streaming approach you mentioned (the default in Go), combined with a hard limit on I/O (https://golang.org/pkg/io/#LimitReader) and deadlines on read and write operations (https://golang.org/pkg/net/#TCPConn.SetDeadline).
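
    Roughly, that combination looks something like this (a minimal sketch, not anyone’s production code; the host, the 30-second deadline, and the ~200 KB cap are all made-up values):

    package main

    import (
        "fmt"
        "io"
        "io/ioutil"
        "net"
        "time"
    )

    func main() {
        // Made-up endpoint; the point is the pattern, not the host.
        conn, err := net.DialTimeout("tcp", "example.com:80", 10*time.Second)
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Hard deadline on every subsequent read and write on this connection.
        conn.SetDeadline(time.Now().Add(30 * time.Second))

        fmt.Fprint(conn, "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

        // Never buffer more than ~200 KB, no matter what the server sends.
        body, err := ioutil.ReadAll(io.LimitReader(conn, 200<<10))
        if err != nil {
            panic(err)
        }
        fmt.Printf("read %d bytes\n", len(body))
    }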

    [edit]

    I made up the readlimit program. Writing a program which copies up to N bytes from stdin and writes to stdout would be pretty straightforward.
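
    For what it’s worth, a rough sketch of that hypothetical readlimit in Go, taking the byte limit as its only argument:

    package main

    import (
        "io"
        "os"
        "strconv"
    )

    func main() {
        if len(os.Args) < 2 {
            os.Exit(2)
        }
        // Byte limit from the first argument, e.g. 204800 for ~200 KB.
        limit, err := strconv.ParseInt(os.Args[1], 10, 64)
        if err != nil || limit < 0 {
            os.Exit(2)
        }
        // Copy at most `limit` bytes from stdin to stdout, then stop.
        if _, err := io.CopyN(os.Stdout, os.Stdin, limit); err != nil && err != io.EOF {
            os.Exit(1)
        }
    }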

    1. 2

      Hey thanks for your comments, Shane!

      One must always assume that APIs might break, and I discovered it the hard way with requests.

      I’m curious; when you say that the streaming approach is “the default in go”, do you mean that the default HTTP library of Go automatically handles streaming (transparently)?

    2. 2

      The article talks about Python crashing with an out-of-memory error while crawling a web page. The author presents various fixes to his/her Python code.

      I think really, though, that those aren’t fixes. Those are workarounds to the fundamentally broken nature of memory allocation on Linux. The OOM killer idea is just…I mean, I know that I love having random processes killed at random intervals because some other process was told there was more memory available than there was.

      (Okay, so yeah, the author of TFA shouldn’t have relied on an out-of-memory condition to signal when to stop crawling, but saying that wouldn’t have given me an opportunity to bitch about Linux’s allocator…)

      1. 2

        Thanks for your feedback :)

        I think really, though, that those are fixes.

        An out-of-memory error generally means that there’s either:

        1. Something broken in your code (or in something that your code depends on).
        2. A lack of resources on the system.

        In either case, as the programmer, you’re at fault. And fixes or workarounds will be needed.

        I’m not sure that I agree with you on the premise that memory allocation is broken on Linux. No matter which OS you use, your system resources are limited in some way. Aren’t they?

        But you know what? I’m fairly new to Linux, so I’m really open to different ideas and solutions. What do you think would be a better solution than the OOM killer?

        Finally, I agree with you that crashes are not 100% fun to deal with.

        1. 1

          An out-of-memory error generally means that there’s either:

          1. Something broken in your code (or in something that your code depends on).
          2. A lack of resources on the system.

          In either case, as the programmer, you’re at fault. And fixes or workarounds will be needed.

          Also, as an additional reply: the way Linux does it, sometimes it’s not your fault. Linux’s default allocation strategy lies to you: it tells you that the memory you asked for was successfully reserved when it wasn’t.

          1. 1

            I’ve read your reply below, and given that the OOM killer can sometimes kill the wrong process, you’re right in a way.

            However, could we say that it’s the programmer’s fault for not planning for enough resources on the system? I tend to think so.

          2. 1

            http://www.win.tue.nl/~aeb/linux/lk/lk-9.html#ss9.6

            That sheds some light on how Linux caters to memory-greedy processes.

            @lorddimwit, is this what you called a hassle? It can certainly be, but it’s not impossibly difficult to enforce a hard limit.

            In practice there seem to be enough crappy apps out there that the overcommit system was developed for good reason, but YMMV as always. I have had to tune this for servers, but not often, and never for a desktop.
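
            Besides the system-wide overcommit knobs that page describes, a process can also cap itself with setrlimit, so that oversized allocations fail inside that process instead of being settled later by the OOM killer. A rough Linux-only sketch in Go (the 512 MB cap is an arbitrary example):

            package main

            import (
                "fmt"
                "syscall"
            )

            func main() {
                // Cap this process's address space at 512 MB (arbitrary value).
                lim := syscall.Rlimit{Cur: 512 << 20, Max: 512 << 20}
                if err := syscall.Setrlimit(syscall.RLIMIT_AS, &lim); err != nil {
                    panic(err)
                }

                // A 1 GB anonymous mapping now fails with ENOMEM inside this process,
                // instead of being overcommitted and sorted out later by the OOM killer.
                _, err := syscall.Mmap(-1, 0, 1<<30,
                    syscall.PROT_READ|syscall.PROT_WRITE,
                    syscall.MAP_PRIVATE|syscall.MAP_ANON)
                fmt.Println("1 GB mmap error:", err)
            }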

            1. 1

              I know, that’s why I parenthetically clarified that really I was bitching about the OOM killer, and not something else. :)

              Linux should, IMHO, simply fail at allocation time if memory is exhausted, returning NULL from malloc. Right now, what happens is that memory allocation essentially never fails and then, at some random point in the future if resources are actually exhausted, a random program is killed.

              (Okay, so it’s not random, it’s the one with the highest OOM score, but still.)

              The problem is that the process that’s killed is decoupled from the most recent allocation. This means that long-running processes with no bugs can just be killed at random times because some other program allocated too much memory. You can fiddle with OOM score weights and stuff, but at the end of the day, the consequences are the same: a random process is going to get killed on memory exhaustion, rather than just have the allocation fail.

              The most logical solution, to me, is simply to return NULL on allocation failure and let the program deal with it in a way that makes sense (try again with a smaller allocation, report to the user that memory’s exhausted, whatever). Instead, it’s impossible to detect that memory returned by malloc isn’t really going to be available until you touch it.

              It’s possible to disable the OOM killer (or at least it used to be), but it’s a hassle.
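
              If memory serves, the per-process knob these days is /proc/<pid>/oom_score_adj: writing -1000 effectively exempts a process from being picked (lowering the value needs root or CAP_SYS_RESOURCE). A tiny sketch, in Go, of a process opting itself out:

              package main

              import (
                  "io/ioutil"
                  "log"
              )

              func main() {
                  // -1000 tells the kernel never to pick this process as an OOM victim.
                  // Writing a value lower than the current one needs CAP_SYS_RESOURCE.
                  err := ioutil.WriteFile("/proc/self/oom_score_adj", []byte("-1000"), 0644)
                  if err != nil {
                      log.Fatal(err)
                  }
              }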

              1. 2

                Okay, I see what you mean.

                I don’t claim to know all the details of the OOM killer. But from reading its source code, I understand that it will not randomly kill processes.

                Instead, it seems to calculate an “OOM badness score”, based mainly on the total amount of memory each process uses. So the process with the highest score (i.e. the one taking the most memory) is likely to be killed first, but not necessarily; it depends on other factors as well.

                In my specific scenario, it killed the right process. But you may be right; there are probably other situations where the wrong process will be killed.

                Have you ever experienced it?

                1. 4

                  Oh yeah, all the time. There was a period when the OOM killer was sarcastically called “the Postgres killer”. Because of the way PostgreSQL managed buffers, it would almost always have the highest OOM score. They fixed it by allocating buffers differently, but it sucks when your production DB is randomly killed out from underneath you while it’s doing nothing wrong.

                  Again, you can adjust weights and such so that different processes would be more likely to be selected based on their weighted OOM score, but it’s an imperfect solution.

                  1. 4

                    Or the X killer. If you had 100 windows running, the X server was probably using the most memory. Kernel kills that, and suddenly lots of memory is free…

                    1. 3

                      In a certain era of the ’90s, when people ran dual-purpose Unix server/workstations, that might not even have been the wrong choice. If you’ve got an X session running on the same machine that runs the ecommerce database and website, better to take down the X session…

                      1. 3

                        Says the guy who was never running his company’s vital app in an xterm. :)

                2. 1

                  The reason Linux does that is an idiom of Unix programming: a program will allocate a huge block of memory but only use a portion of it. Because Unix in general has used paging systems for … oh … 30 years or so, a large block will take up address space but no actual RAM until it’s actually used. Try running this:

                  for (size_t i = 0; i < 1000000; i++)
                  {
                    void *p = malloc(1024 * 1024);  /* reserves address space; pages never touched */
                  }
                  

                  with

                  for (size_t i = 0; i < 1000000; i++)
                  {
                    void *p = malloc(1024 * 1024);
                    if (p != NULL)                    /* under overcommit this rarely fails */
                      memset(p, 127, 1024 * 1024);    /* touch every page: now real RAM is used */
                  }
                  

                  The first will run rather quickly, and the second will probably bring your system to a crawl (especially if it’s a 32-bit system).

                  You can change this behavior of Linux (the term you want is “overcommit”, tuned via the vm.overcommit_memory sysctl), but you might be surprised at what fails when it’s disabled.

                  1. 3

                    I understand the reasoning behind it, and I still think it’s problematic. It’s a distinct issue from leaving pages unmapped until they’re used. I prefer determinism to convenience. :)

                    Many Unices, both old (e.g. Solaris) and new (e.g. FreeBSD), keep a count of the number of pages of swap that would be needed if everyone suddenly called in their loans, and return a failed allocation if the pages requested would push the total outstanding page debt past that number. That’s the way I’d prefer it, and it still works well with virtual memory and is performant and all that good stuff. Memory is still left unmapped until it’s touched, just as before. All that’s different is that a counter is incremented.

                    The problem of course is that if every last page of memory is used, it wouldn’t be possible to start up a shell and fix things, in theory. Linux “solved” this by killing a random process. Some of the Unices solved it the right way, by keeping a small amount of memory free and exclusively for use by the superuser, so that root could log in and fix things.

                    (Of course that fails if the runaway process is running as root, but that’s a failure of system administration, not memory allocation. ;) )

                    I know that Solaris would continue running at 100% memory utilization, and allocations would fail the right way (that is, by returning an error code/NULL), rather than the kernel killing off some random, possibly important, process.

                    EDIT: FreeBSD does support memory overcommit now too, optionally, enabled via a sysctl.

                    1. 3

                      I’m always amazed by the level of expertise and knowledge that people have online.

                      Thanks for sharing your input, lorddimwit! :)