1. 69

  2. 10

    Great story, but I have so many questions! Was the second process there for a reason? Perhaps a thin client mode? If so, why not a runtime or compile-time switch to choose between the two rendering modes?

    And why such disdain from the rest of the organization? Was the environment just purely toxic, or is there something we don’t know?

    1. 4

      Was the second process there for a reason?

      I speculate that it was an isolation boundary, keeping the computation independent of the rendering and display.

      And why such disdain from the rest of the organization?

      Here, again, my speculation: the author wasn’t the first to identify this design element as a bottleneck. It was a historical debate – “We must abandon this multi-process design in order to make rendering faster. Marshaling, serializing this data for every stroke is just too slow.” – “No, we need to keep it in order to correctly partition this functionality.” Here’s where the speculation kicks into overdrive: different teams had historically grown bitter at investigating/triaging bugs that ended up originating in the other teams’ code. “It crashed in FooFunc, and that’s your team’s code. Please investigate the bug, if you find that it’s in my team’s code, assign it to us.” – “As we suspected, it’s heap corruption, FooFunc just happened to be a victim. BarFunc did a heap use-after-free.” Using partitioned virtual memory spaces would avoid this conflict.

      1. 2

        Here, again, my speculation: the author wasn’t the first to identify this design element as a bottleneck.

        Author says as much:

        I’ve realized that there were probably a dozen programmers on that ancient project who knew why the system was so slow and how to fix it. They knew, but they kept it to themselves because in that organization, there were some things that were more important than making the system better.

        My take on it was that the other people just wanted to keep their jobs and knew enough about company politics not to take this on as a side project.

        1. 1

          It’s not clear that the data over pipe model needs to be the default and only available implementation, though. Yes, there can be many benefits to segmenting a program so that different phases can eat serialized data. Compilers do this. But it’s not the typical deployment configuration.
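
          To make the compiler comparison concrete, here’s a toy sketch (everything in it is hypothetical): the same two phases wired together either directly in-process, or through a serialized intermediate, which is the kind of runtime switch between modes a sibling comment asks about.

          ```python
          import json

          def front_end(src):
              # "Parse" whitespace-separated integers into an intermediate form.
              return [int(tok) for tok in src.split()]

          def back_end(ir):
              # "Code generation": here, just sum the numbers.
              return sum(ir)

          def compile_direct(src):
              # Typical in-process configuration: the IR is passed as a live object.
              return back_end(front_end(src))

          def compile_via_serialization(src):
              # Segmented configuration: the IR is serialized between phases,
              # as it would be across a pipe or a temporary file.
              wire = json.dumps(front_end(src))
              return back_end(json.loads(wire))

          assert compile_direct("1 2 3") == compile_via_serialization("1 2 3") == 6
          ```

          Both configurations produce the same result; only the deployment wiring differs.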

      2. 10

        I’m surprised this is voted as highly as it is. Aside from being a 2012 article, it is scant on any interesting details - it reads to me like pat-myself-on-the-back, I’m-smarter-than-the-people-around-me personal fiction. Even the lesson learned is “I’m smarter than the advice that was actually given to me”.

        1. 3

          I upvoted because I found it interesting, and the politics of my-code/your-code are familiar from past jobs I’ve had.

        2. 3

          A lovely piece of advice, especially ’cause the value comes from understanding it… and then ignoring it.

          The title makes me think of the best programming advice I ever got, after talking with an extremely talented and eclectic programmer about why he was writing such-and-so thing in C when he could be using Lisp or OCaml or Haskell or whatever. And he just shrugged and said “Use the right tool for the right job.”

          1. 2

            I don’t understand why this is good advice. Yes, I see that overstepping boundaries is unwise from a social perspective. Pissing off coworkers is not enjoyable, nor conducive to a happy work environment. From a technical perspective, though, this improved the product. Shouldn’t this make it more valuable to customers, more competitive in the marketplace?

            At the very least mucking about in other people’s code helps one learn.

            1. 14

              I think his point was that the advice if taken literally was bad, but the lesson he learned was good.

              1. 5

                So the title is a bit off; it should say “the best lesson I ever learnt”

                1. 1

                  This is a good summary, but I can’t begrudge the author his clickbait title. I thought it gave the telling a nice twist.

              2. 7

                I think the advice he took away was to not follow his boss’ advice: both in terms of looking into and pointing out issues in “other people’s” code, but also in not being defensive or taking things personally when people point out issues with your code.

                1. 1

                  Yea it seems like terrible advice to me. He even says as much, but the takeaway I got was: that company you worked for is shit and you need to find a new place ASAP.

                  I’ve worked at a lot of different shops, and code review with unit tests is pretty essential in building stuff you can maintain. At one shop we had one unit test with like 2000 lines in it for several different rules, and when I got some free time I split all of them up and submitted a merge request. I got a huge “thank god, somebody needed to do that” from several people in the next morning’s scrum, and I was really glad it got merged in quickly so I didn’t have to go through rebase hell (although I’m sure other people had to).

                  There were a few people who were really aggressively defensive and tied to their code which made them difficult to work with. (One got the chopping block in a layoff but the other didn’t).

                  The majority of the good shops wouldn’t have chastised him for this (most good shops would probably have fixed it before he got there, or at least told him that it was a known problem they were working on). I’ve been at a few shit shops and they would have reacted this way.

                2. 2

                  A multi-process model works just fine if you use the right IPC primitives. I wonder if those were available for the author at the time.

                  1. 1

                    What are the right IPC primitives?

                    1. 1

                      I think a named pipe with shared memory would be very fast. (http://anil.recoil.org/papers/drafts/2012-usenix-ipc-draft1.pdf)

                      Just a guess though based on the fact that Chrome uses multiple processes for each tab and it doesn’t seem to hinder it.

                      Now that I think about it, it’s likely the serialization/deserialization which would be particularly painful. But with shared memory you might be able to avoid that (your message would be a short “render this memory location”).

                      This approach would also require careful synchronization… you wouldn’t want the first process to modify an object the second process was rendering.
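
                      Here’s a minimal sketch of that idea in Python (all names and sizes are made up): the parent writes the stroke data into a shared buffer, and the only thing crossing the pipe is a tiny (offset, length) notice, so the data itself is never serialized per message.

                      ```python
                      from multiprocessing import Pipe, Process, shared_memory

                      def renderer(conn, shm_name):
                          # Attach to the existing shared buffer and read only the region
                          # we are told about; nothing but two integers crossed the pipe.
                          shm = shared_memory.SharedMemory(name=shm_name)
                          offset, length = conn.recv()
                          region = bytes(shm.buf[offset:offset + length])
                          conn.send(region)  # echoed back just to show it worked
                          shm.close()

                      def main():
                          shm = shared_memory.SharedMemory(create=True, size=1024)
                          shm.buf[0:5] = b"strok"          # the "rendering data"
                          parent, child = Pipe()
                          p = Process(target=renderer, args=(child, shm.name))
                          p.start()
                          parent.send((0, 5))              # the whole IPC payload
                          result = parent.recv()
                          p.join()
                          shm.close()
                          shm.unlink()
                          return result

                      if __name__ == "__main__":
                          main()
                      ```

                      The synchronization caveat above still applies, of course: the writer must not touch a region while the renderer is reading it.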

                  2. 2

                    Great advice. If I had to generalize, I’d say this: good programmers focus on the people. Bad programmers focus on the technology. This guy had a people-problem, the dang app was too slow. Everybody in the org had that problem. But because they had split work up based on technology, nobody wanted to go into somebody else’s code and fix things. Everybody cared about their thing. Nobody cared about things in general. Thanks!

                    1. 5

                      In that case, I’m quite excited to hire some bad programmers in future.

                      1. 4

                        I think the (admittedly hyperbolic) question is this: do you want a supremely-able technical coder who isn’t able to intuit what the users want? Or do you want a programmer who guesses what the users want before they even know it and has problems figuring out how to make that happen?

                        1. 2

                          I’d take the programmer that intuits that users want graphics that are 100 times faster, and is able to deliver that. It’s rather hard to find someone so bad with people that they can’t figure out what would make a program better.

                          1. 3

                            It’s not hard at all; I’d argue that is in fact the typical case, sadly.

                          2. 1

                            Maybe I misinterpreted the article, but I didn’t think that was the dichotomy. As I understood, the author was technically able and made a significant performance improvement. I don’t think any user is going to want their software to ever perform worse.

                            As I understood, the author’s improvement was shunned for ego and “office politics” reasons. I don’t think he had a “people-problem”. Quite the opposite.

                            I’d love to build a company where office politics just aren’t a thing.

                            1. 7

                              I’d love to build a company where office politics just aren’t a thing.

                              Yeah, but who wants to work solo?

                            2. 1

                              Ideally, you have both of them & they work together.

                        2. 1

                          Just do it. And if you get fired for making things better, then you know you were working at a shitty place that doesn’t give a damn about quality. I know this is easier said than done, but if your career and the products you work on are held back by stupid politics then it is time to move on.

                          1. 2

                            The problem with “just do it” is that it’s easy for you to submit a patch that adds a specific bit of functionality that makes your life easier. But if everybody did that, it’d be design by committee. It would result in a total mess. I’ve worked on platform teams, and one of the skills I learned on my first gig on a platform team was learning how to say no and reject patches in a respectful fashion.

                            1. 1

                              This patch did not make ‘your life easier’ - instead it rescued the product from being unusable.

                              I am not suggesting to randomly land patches. I am suggesting to never stop doing innovative work or writing proofs of concept that show how to improve things. If that kind of work is not acceptable, then GTFO ASAP.

                          2. 1

                            This was an interesting and brief read; the last line is also what I want to focus on:

                            But the best way to have a future is to be part of a team that values progress over politics, ideas over territory and initiative over decorum.

                            This is a good example of how it is primarily Free Software that will lead to good software, in the general case. I’m not claiming Free Software does it often, or that proprietary software hasn’t reached the very high quality necessary for critical purposes, but it is Free Software that has this fascination with hacking.

                            You will only have a good program if you’re willing to start from a good program itself and hack at it until it’s further improved, and you won’t usually get that with proprietary software.

                            1. 1

                              I think that the fundamental issue is one of incentive alignment; hacking the quality of your proprietary software up is orthogonal to the needs of the business, and where there is conflict, the hacking must lose to the business. Free software, or more generally, software untethered to business directives, won’t face this problem. Whether or not that means “better” software of course is determined by which axis you’re using to measure.

                              1. 1

                                are you claiming free software is generally untethered to business objectives?

                                1. 2

                                  No, I’m saying that they’re orthogonal concerns.