1. 13
  1.  

  2. 9

    One easy way to do this, carried over from the not-so-old days, is to develop, or at least test, software on crappy hardware. I’m talking Pentium IIs and the like, or maybe a cross-platform app running on low-end phones. The reason is that we used to run all kinds of software pretty fast on those. Client-server apps could be tuned for decent performance over 28 Kbps lines. If your app can do that, then it can use cloud or mobile resources well. The thing to watch for is where there are specific enhancements in the cloud platform or mobile SoC that speed up the app versus vanilla code. So, start with vanilla code on low-end hardware, then port to the target.
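
    As a rough illustration of the budget that imposes, here is a back-of-the-envelope sketch (the payload sizes are made-up numbers, purely for scale):

    ```ts
    // Back-of-the-envelope transfer times over a 28.8 Kbps modem-era link.
    const LINK_KBPS = 28.8;
    const bytesPerSecond = (LINK_KBPS * 1000) / 8; // ~3.6 KB/s

    function transferSeconds(payloadBytes: number): number {
      return payloadBytes / bytesPerSecond;
    }

    console.log(transferSeconds(500 * 1024).toFixed(0)); // ~142 s for a 500 KB response
    console.log(transferSeconds(5 * 1024).toFixed(1));   // ~1.4 s for a 5 KB response
    ```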

    1. 3

      Working in constrained environments really does make you more aware of your surroundings. If your system doesn’t even have a stack, say, you have to think carefully about what it is you actually want to do. As strange as it sounds, it’s quite liberating.

      1. 3

        A similar effect I’ve noticed working on React apps: because the framework has significantly worse performance in development mode than in release mode (lots and LOTS of assertions, AFAIK), if you write stuff that runs acceptably fast in development, you’ll often find it runs pleasingly smoothly in production mode.
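
        A quick way to sanity-check that intuition is plain timing, sketched below (it assumes a bundler such as webpack or Vite that substitutes process.env.NODE_ENV, and `rows` is placeholder data):

        ```ts
        // Time a hot path and label it with the active build; React's development
        // build carries extra assertions and warnings, so it is the slower baseline.
        const build = process.env.NODE_ENV === "production" ? "production" : "development";

        const rows = Array.from({ length: 10_000 }, (_, i) => ({ id: i, value: Math.random() }));

        function timed<T>(label: string, fn: () => T): T {
          const start = performance.now();
          const result = fn();
          console.log(`[${build}] ${label}: ${(performance.now() - start).toFixed(1)} ms`);
          return result;
        }

        // If this feels acceptable under the slower development build,
        // the production build usually has comfortable headroom.
        timed("sort 10k rows", () => [...rows].sort((a, b) => a.value - b.value));
        ```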

        1. 1

          That’s interesting. The same might be true of native code running under sanitizers or other memory-safety checks; you might get a similar benefit from keeping those on during development on a normal box.

        2. 3

          Yes! When developing Android apps, I insist on testing for basic functionality on the oldest working hardware I have (currently a Nexus One), and doing all testing on a Wi-Fi network I have configured to bottleneck at 100 kB/s and sustain 5% packet loss.

          It is astonishing how many apps from billion dollar companies totally fall apart under such circumstances.
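
          Surviving that kind of network mostly comes down to explicit timeouts and retries. A minimal sketch of the pattern, in TypeScript for brevity (the URL and tuning numbers are placeholders; the same idea applies to whatever HTTP client an Android app uses):

          ```ts
          // Fetch with an explicit timeout and a few retries with exponential backoff,
          // so a slow, lossy link degrades gracefully instead of hanging forever.
          async function fetchWithRetry(url: string, retries = 3, timeoutMs = 10_000): Promise<Response> {
            for (let attempt = 0; ; attempt++) {
              const controller = new AbortController();
              const timer = setTimeout(() => controller.abort(), timeoutMs);
              try {
                return await fetch(url, { signal: controller.signal });
              } catch (err) {
                if (attempt >= retries) throw err;
                // Back off: 1 s, 2 s, 4 s, ...
                await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
              } finally {
                clearTimeout(timer);
              }
            }
          }

          // Example call (placeholder URL):
          // const res = await fetchWithRetry("https://example.com/api/items");
          ```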

        3. 7

          To me, the summary of this is, “think ahead and conserve your resources”.

          As someone who works, and has worked, with some amazingly good old-time programmers, I’m sympathetic to the arguments here because I think they are by and large true, but I don’t like the way the details are put forth. The tone is that of “the old days were better”, and that is resolutely false. (If you’ve worked on a mainframe, you’ll probably agree.)

          That [getting code working the first time] all goes out the window if you spend too much time debugging or you accept that debugging is an inevitable part of the programming process.

          Get it right the first time? Absolutely try for it. But also realize that mistakes are inevitable. Build for debugging, because you’re going to have to debug it at some point. Debugging is an inevitable part of the practice of programming. The author is flat-out wrong.

          I think what the author is referring to is the act of trying something out on a live system and seeing if it works. So if you rely on debugging late in the development process (like, at deployment) then you’re being “unforgivably sloppy.” The author is also careful to point out that he’s not against testing, which confuses me given that he seems to be so much against debugging.

          1. 4

            trying something out on a live system and seeing if it works

            Usually this is one of the most expensive ways to iterate. It has its place, but I try to make most things testable/provable via the type system and unit tests.

            But sometimes you do have to try things on a live system. Debugging tools are invaluable. It’s odd because the mainframe people I know lament the lack of job introspection tools in UNIX/Linux.
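
            A small example of what “provable via the type system plus unit tests” can mean, as a sketch using a discriminated union and Node’s built-in assert rather than any particular test framework:

            ```ts
            import assert from "node:assert/strict";

            // The discriminated union makes the illegal state "loaded but no data"
            // unrepresentable, so that class of bug never reaches a live system.
            type Fetch<T> =
              | { state: "loading" }
              | { state: "error"; message: string }
              | { state: "loaded"; data: T };

            function summarize<T>(f: Fetch<T>): string {
              switch (f.state) {
                case "loading": return "still loading";
                case "error":   return `failed: ${f.message}`;
                case "loaded":  return `got ${JSON.stringify(f.data)}`;
              }
            }

            // Unit tests cover the behaviour the types cannot express.
            assert.equal(summarize({ state: "error", message: "timeout" }), "failed: timeout");
            assert.equal(summarize({ state: "loaded", data: [1, 2, 3] }), "got [1,2,3]");
            ```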

          2. 2

            4. It’s not about refactoring: Optimize up front

            Hmmm… I guess everything moves in cycles. [0]

            [0] http://wiki.c2.com/?PrematureOptimization