1. 47
  1.  

  2. 21

    One point I’ve painfully relearned recently is: it’s not the year 2000 anymore. There’s a time and place to be minimalistic, and there are also reasons why you’d optimise for performance.

    But if you’re writing a dedicated app for business use that shows one window with results from a database and sends a periodic email, it’s perfectly ok to start with heavy dependencies and ship a 100MB binary which includes .NET 5. Nobody cares, and if including the whole EF ORM makes testing easier, do it, even for all four queries you make. Sometimes handcrafting this stuff is just not worth the time.

    In many cases, starting with absurdly big tooling/frameworks/libraries for a tiny project is a good trade-off versus extra hours spent solving common problems.

    Separate point: learn regex. It will save you lots of time otherwise spent on manual rewriting where basic search & replace isn’t enough.
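
    As a minimal sketch of what I mean (C++ here, and the names are made up purely for illustration): a capture group plus a backreference rewrites every variant in one pass, which plain search & replace can’t express.

      #include <iostream>
      #include <regex>
      #include <string>

      int main() {
          // Rewrite every "getFoo()" call into a "Foo" property access.
          // Plain search & replace can't do this because "Foo" differs each time;
          // a capture group plus a backreference handles all of them at once.
          std::string code = "auto n = obj.getName(); auto s = obj.getSize();";
          std::regex getter(R"(get(\w+)\(\))");
          std::cout << std::regex_replace(code, getter, "$1") << "\n";
          // prints: auto n = obj.Name; auto s = obj.Size;
      }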

    1. 6

      An alternative (albeit somewhat more provocative) restatement of this that I’ve recently heard goes something like this: it’s a bad idea to write software that runs on tomorrow’s computers, but it’s probably not a good idea to write it for twenty year-old computers, either.

      It’s important to get your bearings again every once in a while, I guess. A few years ago I realised my gauges for eyeballing resource usage were definitely out of date, shaped by the years when I’d honed them: a time when 20 MB was pretty big, 640 MB was humongous and 2 GB was bigger than most consumer hard drives. Wasting resources is a timeless non-skill, but the progress of technology, both hardware and software, means that not all 600 MB programs are “bloated” and “waste resources”.

      1. 2

        it’s a bad idea to write software that runs on tomorrow’s computers,

        It’s a great idea, in the right circumstances — “skating to where the puck is going”, in Jobs(?)’ phrase. Xerox PARC was very explicitly building software for future computers: in 1970 Alan Kay even made a cardboard mock-up of one, the “Dynabook”, that looked very much like an iPad with a physical keyboard. He called the Alto an “interim Dynabook”, and Smalltalk aimed at being the Dynabook OS. Smalltalk-72 was almost unusably slow, but mostly because the 1972 hardware couldn’t handle the needs of such a dynamic language.

        but it’s probably not a good idea to write it for twenty year-old computers, either.

        Unless you’re forced to for backward compatibility! The project I work on still has to build 32-bit and support iOS 10, because some important customer of ours deploys apps on ancient iPhones. 😫

        1. 1

          It’s a great idea, in the right circumstances

          Obviously, I only meant that performance-wise :-). If there’s a clear path between the software you write today and the computers that will run it tomorrow, with no computers that will run it poorly and inefficiently in-between, then by all means it’s a good idea, since you’re writing for the computers that will actually run it :-P.

          The same goes for backwards compatibility. If you’re writing software that has to run on twenty year-old computers, then it’s not just a good idea, it’s arguably the spec :-D. The bad version of this idea is to optimize software that mostly sits idle until it runs really well on a Pentium III-era machine, and then proceed to never deploy it on anything other than cloud instances with gigabytes of RAM.

          1. 1

            It sometimes even makes sense performance-wise. For example, if you’re developing a PC game and you aim for the best graphics on current hardware, but your development takes 3-5 years, then by the time you finish you’re going to have mediocre graphics.

            1. 2

              On the other hand, it will now run well on older desktops, laptops and probably be portable to last-generation consoles and mobile devices very easily, so you may end up with a much larger market than if you really stretched the capabilities of a modern desktop GPU.

              Upgrade cycles have slowed down a lot recently. It used to be that a 3-year upgrade cycle was normal; now 5 is more common for corporate environments and 7+ for home users. Mobile phone upgrade cycles are shorter than PCs’, but those are also lengthening and there’s a thriving second-hand market for phones from 2-3 generations ago.

              My personal machine is a 2013 MacBook Pro and it remains very capable (to the extent that we bought another one second hand last year for my partner to use). It’s got a Haswell quad-core CPU, a 1 TiB SSD and a GPU that’s pretty anaemic by modern standards, but building LLVM is about the only thing I do with it where it struggles. I replaced the battery a few months ago (thanks to the person who came up with the nylon-wire trick for cutting through the glue so that you can pull out the old battery without having to remove every single component from the case and apply solvents!) and I’ll probably keep using it until it dies.

              1. 2

                It’s not that simple, and not that beneficial. It’s easier and cheaper to scale graphics quality down than up. If you aim too high, you can reduce LoDs, downsample textures, disable expensive effects, etc. on older hardware. If you aim too low, you’re going to need to rework assets.

                To be more specific, if at the start of the project you aim for today’s (low-end, mid-range, high-end) range of hardware (depending on the game’s quality settings), then by release you’re going to support (garbage nobody has any more, low-end, mid-range), with no juicy screenshots to sell the game with. And in a couple of years it’s going to support (garbage, garbage, low-end) and end up in the bargain bin.

                OTOH if you start with (mid-range, high-end, overkill), you’ll release for (low-end, mid-range, high-end), and have some longevity for (garbage, low-end, mid-range) down the line.

              2. 1

                Well… yes, if your development takes 3-5 years, there will be no computers that will run it poorly and inefficiently in-between.

            2. 1

              In 1970 Alan Kay even made a cardboard mock-up of one, the “Dynabook”, that looked very much like an iPad with a physical keyboard.

              For some reason I always thought Kay made a prototype Dynabook. I didn’t know it was just a mockup!

              1. 1

                PARC did build their own computers, the Altos, to run (among other things) prototype Dynabook software. But at that time the computer was the size of a dishwasher, with removable disk cartridges the size of pizza boxes (that held something like 5MB) and of course the display was a fat CRT. A computer anywhere near the size of a notebook was science fiction in the early 70s!

                There’s a good picture here: https://www.si.edu/object/xerox-alto-central-processing-unit%3Anmah_334631#

                And here’s the cardboard mock-up: http://entangled.systems/fragments/20160806-mockup-of-the-dynabook-conceived-by-xerox-parc-s-alan-kay-1970s-source.html

        2. 9

          However, the core of software development is reading, navigating, understanding, and at the end of the day writing code.

          Everything the author is doing makes sense to me, but I find this assertion interesting. In my experience, the core of (high-level) software development is thinking more than any activity involving the actual code. Solving the problem is the difficult/time-consuming bit, while the actual implementation of the solution is usually more routine. I’ve often seen developers fall into the trap of starting to write code before they understand what they’re actually trying to implement - I like to remind them that their value is in solving problems through code, not writing code itself.

          Implementation is of course an important part of being able to solve those problems. “Necessary, but not sufficient” is perhaps a good way to phrase it.

          I don’t think this really takes away from the usefulness of the tools and activities presented; it’s more of a philosophical thought.

          1. 1

            Have you seen Bret Victor’s “Inventing on Principle” talk, by chance?

            <enso-fanboi mode="on" disclaimer="&and-soon-to-be-employed;" />
            Have you seen Enso (née Luna)?

            1. 1

              I haven’t, but it seems interesting. Will check it out.

          2. 8

            FWIW I gave up on this post because it displays nothing in my browser without JS …

            1. 1

              Yeah I couldn’t grab it into Wallabag.

              1. 1

                I have JS enabled and it’s still a black screen for me…

              2. 6

                Almost as vital as jump-to-definition is its opposite, show-all-callers. (Which is recursive, manifesting as a tree, at least in Xcode.) I use this all the time to understand the context in which a function is used, to see if it’s obsolete, to figure out how the hell the flow of control got to point B from point A…

                Granted, the utility of this goes down as the language gets more dynamic. Even in rigid ol’ C++, Xcode misses some calls that happen via template functions like make_unique.
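
                A tiny hypothetical example of what I mean (names invented): ask for all callers of the Widget constructor, and the call buried inside the make_unique instantiation may not show up.

                  #include <memory>

                  struct Widget {
                      explicit Widget(int size) : size_(size) {}  // "show all callers" on this ctor...
                      int size_;
                  };

                  std::unique_ptr<Widget> buildDefaultWidget() {
                      // ...may not list this call site, because the actual `new Widget(42)`
                      // happens inside the std::make_unique<Widget> template instantiation.
                      return std::make_unique<Widget>(42);
                  }

                  int main() { return buildDefaultWidget()->size_; }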

                I’m also a big fan of putting section markers in source files (#pragma mark -). This makes it a lot easier to navigate via the list-of-functions pop-up or minimap.
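
                For anyone who hasn’t used them, a contrived sketch of what that looks like (the names are made up):

                  // "#pragma mark -" draws a divider in Xcode's function pop-up and
                  // minimap, and the text after it becomes a section label.

                  struct Request {};

                  struct NetworkManager {
                      void start();
                      void stop();
                      void send(const Request &req);
                  };

                  #pragma mark - Lifecycle

                  void NetworkManager::start() { /* ... */ }
                  void NetworkManager::stop()  { /* ... */ }

                  #pragma mark - Request handling

                  void NetworkManager::send(const Request &) { /* ... */ }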

                I also keep source files short. I get uncomfortable when one reaches 500 lines and start thinking about how to refactor it.

                1. 5

                  Site is unusable without Javascript, unfortunately

                  1. 4

                    I’ve been enjoying all the discussion recently about methods people are using to be faster at programming, so I decided I’d contribute a list of my own.

                    I’d be eager to hear what everyone else does that I haven’t listed!

                    One area I know I’m quite lacking in is using good keyboard shortcuts and keyboard-based navigation as a whole. I still do a good amount of clicking around the editor to select stuff and move the cursor, and I’ve seen some vim users who take pride in never taking their hands off the keyboard.

                    1. 3

                      Honest question: Is it really the tooling that does it…? Or is it just icing on the cake? Compared to knowledge of algorithms, choosing the right language for the job, and stuff like that.

                      1. 1

                        Having to pick up the mouse from time to time is an annoying distraction though. When I moved from Vim to VSCode, not having learned the keyboard shortcuts felt quite disruptive to the tight modify-compile-run loop. Having learned how to switch between the editor window and the terminal helps a lot.

                        1. 1

                          In case you haven’t seen it yet, the Vim extension for VSCode is really good. Less relearning that way.

                      2. 3

                        Long ago Bruce Tognazzini did some experiments showing that choosing a command from a GUI menu was measurably faster than pressing the keyboard shortcut, even though the people doing it felt that the keyboard was faster. (The exceptions were super common shortcuts like Copy and Paste that were burned into muscle memory.) He hypothesized that subjective experience of time was different when performing a physical task like moving a mouse, than for a memory based task like remembering a key command.

                        I’m sure an expert can get the full range of keyboard navigation commands burned into muscle memory to where it’s faster than mousing, but personally, I’m waiting for eye-tracking interfaces that can move the cursor to where I’m looking.

                        1. 12

                          Dan Luu did a rebuttal to Bruce’s claims: https://danluu.com/keyboard-v-mouse/

                          1. 3

                            The fundamental flaw with mousing is that you have to look (at things) to use it.

                            The operational cycle is: 1) actuate muscle motor skills, 2) watch the mouse pointer movement and other on-screen UI changes, 3) loop (1 -> 2) until the target is reached, 4) complete the task (click, hover, release drag).

                            With keyboard, assuming you are zeroed in on your home row position, you can execute the vast majority of your tasks without looking. The cycle is: 1) actuate muscles, 2) complete task

                            1. 3

                              Exactly. The point is not to minimize the number of milliseconds the operation takes, but to minimize the amount of distraction or context switching. Ideally you can edit the code without paying special attention to the editing process itself, so your mind can devote all its space to solving the actual problem you are taking on.

                              I’d say this goal is easier to reach with keyboard shortcuts than with the mouse. But perhaps one can do this with the mouse too after enough training.

                            2. 1

                              Interesting finding! In my case, I opened a really old Delphi project recently - I think I hadn’t had Delphi installed in years - but the shortcuts were immediately in my muscle memory. What I used often stayed there, just like riding a bike. There are so many commands today that it’s nearly impossible to memorize ’em all, but what you use most frequently is worth the effort to memorize (which is like using it 6-8 times to learn?) - those will actually speed up your daily routine.

                            3. 1

                              I have this in my .spacemacs:

                                ;; Don't use the mouse in emacs. It's just annoying.
                                (load-file "~/.emacs.d/private/disable-mouse.el")
                              

                              https://github.com/purcell/disable-mouse/blob/master/disable-mouse.el

                            4. 2

                              Is GitHub Copilot worth using? I got into the beta recently, but since the hype died down and I’m an Emacs user I haven’t bothered to try it.

                              1. 2

                                Perhaps the most important skill I’ve learned over the years is asking the right questions. Who are the users of the software you’re building? What value will this software have to them? (This is theoretically the point of the user story, although they’re often written pro forma, e.g., “As a user, I need a widget so I can use it.”) The better your questions, the clearer everyone will be about what the core features of the software should be, what is reasonable to expect to have to add in the near future, and what is just gold plating. When you don’t get clear or consistent answers to these questions, it probably means it was a spur-of-the-moment idea, the value of which hasn’t actually been thought out. These are unfortunately common wastes of everyone’s time and money.

                                It takes some practice to diplomatically convey the opportunity cost of these traps. If you’re not talking directly to roadmap decision makers, watch them. If they’re consistently at loggerheads with each other over what the right software is, you can probably expect to build the wrong software, assuming you’re able to finish building it at all. On the other hand, when you build a relationship of mutual trust and respect with decision makers, they will come to you with their problems and ask you the right questions to define a solution and estimate its cost so they can prioritize it correctly.

                                1. 1

                                  A terminal emulator that supports things like search, scrollback, multiple tabs without screen or tmux.

                                  These sound like anti-patterns to me. I do have these things in my terminals and I never found them useful, as I have grep, less, etc. at my fingertips. Littering the terminal with megabytes of output per command and then relying on the terminal emulator to find what one needs is an ugly and very rarely justified/necessary workflow. Even tabs are to a great extent a matter of preference, as the functionality is there in your desktop environment to open multiple terminals at will. I do think modern Windows and OS X have become horribly unergonomic when it comes to window management, pushing people to turn to in-app tabs.

                                  1. 1

                                    They’re not working so well. Just an empty page, and a bunch of third-party JS rubbish.