1.  

    The biggest piece is minimizing your dependencies, and limiting them to ones that value backwards compatibility.

    This seems like an oversimplification. If you find yourself reinventing a rather complicated wheel, ask yourself whether you would rather maintain your own reinvented wheel or risk being stuck with an old version of someone else’s. For professional projects, consider that it is not only you who will have to live with this decision.

    1.  

      Maintenance costs are remarkably low when upstream can’t break things due to CADT (jwz’s “Cascade of Attention-Deficit Teenagers” model of development).

      1.  

        I’ve been working on a 14-year-old ruby-on-rails project for the past three years.

        As far as dependencies go, I’ve learned to actively avoid dependencies unless they meet one of the following:

        • The core team of maintainers (who are not all employed at the same company) has a clear process in place around releases, compatibility, etc.
        • The sole maintainer has a great track record of backwards compatibility
        • The project is unmaintained but uses simple enough code that I can keep it updated

        Nearly every dependency in this app which does not meet one of those criteria has created more work in the past three years than simply writing the dependency from scratch would have.

        1.  

          I don’t see how that’s the case when you can just fork them if you plan on doing your own maintenance.

          1.  

            Then you still have the “discoverability problem”: when someone finds a security problem, it will be scripted ASAP. If you have written your own, attackers might never find it, unless you’re directly targeted or it’s easily wormable due to having something in common with other targets.

            I have no numbers, but I’m pretty sure there’s a lot of XSS still running around in pages that have been online for 20 years where people use echo $_POST['name'], but nobody has ever stumbled over them, versus an old version of, say, some forum software with hundreds or thousands of installations.
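
            For the curious, a minimal sketch (in JavaScript rather than PHP, since the idea is the same) of the escaping that pattern skips:

            ```javascript
            // Minimal HTML-escaping helper: replaces the five characters that
            // enable XSS injection. '&' must be escaped first, so that the
            // entities introduced by the later replacements aren't re-escaped.
            function escapeHtml(input) {
              return String(input)
                .replace(/&/g, "&amp;")
                .replace(/</g, "&lt;")
                .replace(/>/g, "&gt;")
                .replace(/"/g, "&quot;")
                .replace(/'/g, "&#39;");
            }

            const name = "<script>alert(1)</script>";
            console.log(escapeHtml(name)); // "&lt;script&gt;alert(1)&lt;/script&gt;"
            ```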

        2.  

          For professional projects, consider that it is not only you who will have to live with this decision.

          Except that this article is explicitly about personal projects, mentioned at the start and the end.

          To quote Chelsea Troy, before giving critique:

          QUESTION #1: Is this piece for me?

          Consider that the article has clearly laid out its scope and context.

          Granted, careful choice of which wheels get reinvented is important, but these days, I do think the pendulum has mostly swung the way of using existing software in one form or another.

        1. 1

          Awesome level of detail in this article. Something to file under cantunsee. Now I wish that color blending behavior was baked into the default CSS shadow behavior.

          1. 14

            The choice is between an Electron app or no macOS app.

            Really? Were there no other cross-platform GUI toolkits you could have used?

            1. 15

              I wrote and maintain an Electron application. If I weren’t using Electron, and if the application would exist at all, it would be Linux-only. There’s a very simple reason why: we use interactive SVGs a lot throughout the application, because we need an image that resembles a keyboard, and SVGs make that easy. GTK can’t do that, Qt can’t do that. They can display SVGs, sure, but they don’t support interactive SVGs, except via a webview. Since about 90% of what the app does is tied to interacting with that SVG, there’s no point in using a native toolkit when most of its time will be spent in the same web engine anyway.

              We could, of course, build our own widgets instead of using SVGs, but that’s way more time investment than we can afford. We - well, I - explored the native options. None of them would’ve allowed me to build the app as it is today. Electron did.

              Sometimes there are good reasons for using Electron.

              1. 1

                Not SVGs, but the Tk vector canvas is surprisingly capable and nice to use.

              2. 12

                Honestly, I have a dim view of cross-platform toolkits, having used them and developed on them. UX-wise, they’re about as bad mouthfeel-wise as Electron. I had to do quite a bit of work to get Qt to have non-awful mouthfeel on macOS. I find most advocates of cross-platform UI toolkits tend to be on Linux, which historically set a pretty low bar UX-wise and was clamouring for any application at all. You’d get better performance, but it’s not a strong argument from the UX side.

                1. 6

                  Honestly, in my experience they’re worse than Electron apps. At least standard text input shortcuts work in Electron! The “cross-platform” toolkits tend to look non-native, just like an Electron app (although often more dated), and have very… weird shortcut support, e.g. macOS Emacs-style keyboard input.

                2. 7

                  What would you recommend? I don’t mean this adversarially, it’s just that every time I’ve looked for a good cross-platform GUI toolkit, I’ve come back disappointed. I hate working with Qt because Qt bindings vary in quality across languages and I’d rather not use Qt’s stdlib over the C++ stdlib when writing C++ because I have much more experience with the C++ stdlib. Gtk similarly has some pretty poor bindings in certain languages. Tk is probably the toolkit I’ve enjoyed using the most, but it’s rapidly losing mindshare from what I can tell.

                  1. 2

                    I agree with you that the state of cross-platform GUI toolkits is bad. I love GTK on Linux, and as far as I can tell, its language bindings are consistently good, even when they’re third-party. But GTK support on Windows is second-class, and on macOS has historically been terrible (but is maybe getting better?).

                    When I was looking at a toolkit to use for a cross-platform graphical Common Lisp app, the best I could find was Tk, despite its limitations.

                  2. 4

                    I think so? There are options you can use, but there are no really good options. https://blog.royalsloth.eu/posts/sad-state-of-cross-platform-gui-frameworks/ is a nice survey.

                    1. 2

                      They missed some third-party XAML implementations like https://avaloniaui.net/. It’s going to be closer to JavaFX, but with a great RAD environment (Visual Studio).

                      I hope MAUI will get a community Linux back-end. That would make it a good alternative too.

                    2. 4

                      I mean, you have to buy into the React paradigm, but React Native can compile to Windows and macOS in addition to iOS and Android.

                      https://microsoft.github.io/react-native-windows/

                      1. 2

                        I guess people really haven’t tried how fast you can get stuff running with Qt. Yeah, it’s not completely native (and it can also be used the Electron way since some version), but that’s not something you get with Electron either. To be fair, you have to use C++ (with Qt additions) or Python for it.

                        1. 2

                          I inherited a Qt project once. It was awful. I’ve never used Electron, but I know enough about it to pick it over Qt in most circumstances.

                          Not sure there are many other options if you’re targeting desktop. Proton looks promising. Microsoft’s React Native for Windows and Mac does as well. Both are similar concepts with ostensibly less overhead than Electron. Anyone here try those?

                        1. 16

                          This doesn’t really touch on the economic factors for why Electron is a thing; it’s easy to find JS devs off the street for cheap, not so much for Win32 or Cocoa. Or for that matter, finding them at all. (edit: Or hiring both at the same time. Why bother when you can (seemingly) do the same with one?)

                          1. 3

                            Yeah, it would have been a much better article if he’d touched on the hireability aspect. OP is clearly a product guy; it seems like he has no clue there are a lot of good cross-platform frameworks out there these days.

                            1. 2

                              [citation required].

                            2. 2

                              Not sure where you got your data for JS dev pay. That might have been true a decade ago, but the pay gap has shrunk, especially for experienced devs.

                              You’re right about one thing: targeting multiple platforms natively isn’t economical, especially nowadays with mobile platforms.

                            1. 1

                              JavaScript has had map and filter for a long time — why do they no longer work?

                              1. 5

                                JavaScript has map, filter and reduce methods on Array. However, JavaScript allows you to define your own iterable types, and it allows you to define your own generator functions. You can’t currently map, filter or reduce over anything other than arrays, without writing your own generic map/filter/reduce functions. It would probably be nice to have generic map/filter/reduce functions in the standard library.
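
                                For what it’s worth, a hand-rolled version of such generic functions is short; a sketch using generator functions:

                                ```javascript
                                // Generic lazy map and filter over any iterable, written as
                                // generator functions. These accept arrays, Sets, Maps, and
                                // custom iterables alike.
                                function* map(iterable, fn) {
                                  for (const item of iterable) yield fn(item);
                                }

                                function* filter(iterable, pred) {
                                  for (const item of iterable) {
                                    if (pred(item)) yield item;
                                  }
                                }

                                // A Set has no .map or .filter of its own, but these work on it:
                                const evensDoubled =
                                  [...map(filter(new Set([1, 2, 3, 4]), n => n % 2 === 0), n => n * 2)];
                                console.log(evensDoubled); // [ 4, 8 ]
                                ```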

                                1. 1

                                  Wow, that sounds… well, I can’t fathom what would even be the point in having iterators if you can’t even… well, iterate over them.

                                  1. 4

                                    Well, you can iterate over them. But currently only via the for (<variable> of <iterable>) loop, no cool functional stuff.

                                    1. 1

                                      One of the classic examples of a JavaScript iterable is one returned by a Fibonacci generator function. As handy as map/filter/reduce methods are, TC39 can be forgiven for omitting them given that iterables like these are infinite.

                                      1. 3

                                        Generators are common in a lot of other languages, and you can still iterate over them, at your own peril, or slice them to a suitable length. Why release something as half-baked as this?
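
                                        Slicing an infinite generator to a suitable length only takes a small helper; a sketch:

                                        ```javascript
                                        // An infinite Fibonacci generator, plus a take() helper that
                                        // slices it to a finite length so that consuming it terminates.
                                        function* fibonacci() {
                                          let [a, b] = [0, 1];
                                          while (true) {
                                            yield a;
                                            [a, b] = [b, a + b];
                                          }
                                        }

                                        function* take(iterable, n) {
                                          let i = 0;
                                          for (const item of iterable) {
                                            if (i++ >= n) return;
                                            yield item;
                                          }
                                        }

                                        console.log([...take(fibonacci(), 7)]); // [ 0, 1, 1, 2, 3, 5, 8 ]
                                        ```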

                                      2. 1

                                        You can, but as @mort said you have to use for of instead of methods. Which seems reasonable at first glance, given that to do otherwise with user-specified iterators would be akin to magically adding methods to your object. What’s more egregious, though, is the fact that nobody bothered to add the functions we’re used to from Array to either Set or Map, and that there’s basically nothing at all added for async iteration anywhere, except for await of, which is as confusing and rarely used as you’re imagining.
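
                                        The usual workaround for Set and Map today is spreading into an array first — wasteful, but it works; a sketch:

                                        ```javascript
                                        // Spreading a Set or Map into an array buys back Array's methods,
                                        // at the cost of materializing an intermediate array.
                                        const tags = new Set(["js", "ts", "wasm"]);
                                        const upper = [...tags].map(t => t.toUpperCase());
                                        console.log(upper); // [ 'JS', 'TS', 'WASM' ]

                                        // Spreading a Map yields [key, value] entries.
                                        const scores = new Map([["a", 1], ["b", 2]]);
                                        const doubled = [...scores].map(([k, v]) => [k, v * 2]);
                                        console.log(doubled); // [ [ 'a', 2 ], [ 'b', 4 ] ]
                                        ```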

                                    1. 1

                                      Well spotted!

                                    1. 8

                                      This isn’t a TypeScript thing, it’s in regular JavaScript https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Optional_chaining (for those who only read the title)
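
                                      A quick sketch of the behavior, for those who only read the title:

                                      ```javascript
                                      // Optional chaining (?.) short-circuits to undefined instead of
                                      // throwing a TypeError when an intermediate value is null or undefined.
                                      const user = { profile: { name: "Ada" } };
                                      const ghost = {};

                                      console.log(user.profile?.name);  // "Ada"
                                      console.log(ghost.profile?.name); // undefined, rather than a TypeError

                                      // It also works for method calls and bracketed access:
                                      console.log(ghost.greet?.());         // undefined
                                      console.log(ghost.profile?.["name"]); // undefined
                                      ```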

                                      1. 5

                                        I’m also a little surprised at the popularity of this article. Not only is this not really a TypeScript feature per se, but TypeScript support for it has been out for almost two years now. Anyone writing TypeScript in VS Code has probably seen it auto-insert optional chaining into statements with nullish properties. This auto-insertion is actually frustrating for those of us free from the shackles of IE 11 and Babel but still stuck with Webpack 4, since the parser it uses doesn’t support this or other recent syntax innovations.

                                        1. 1

                                          How has it come about that you are able to use modern tsc while simultaneously being stuck on old webpack?

                                          1. 2

                                            Stuck on Vue 2 like much of the Vue ecosystem, at least at the time we started our project.

                                          2. 1

                                            I used it as a jumping-off point to talk about optional chaining in general TBH

                                        1. 3

                                          @jfmengels wrote an excellent article last week about incorporating this concept into elm-review.

                                          1. 10

                                            I hate how Go’s and Deno’s approach to dependencies of just pasting the URL to the web front-end of the git hosting service used by the library seems to be taking off. I think it’s extremely useful to maintain a distinction between the logical identifier for a library and the physical host you talk to over the network to download it.

                                            1. 4

                                              I like the idea of using URL fragments for importing. There’s a beautiful simplicity and universality to it. You don’t need a separate distributed package system—any remote VCS or file system protocol can work. However, it needs to be combined with import maps, so that you can hoist the location and version info out of the code, when desired. And there should be support/tools for explicitly downloading dependencies to a local cache, and for enforcing offline running. This is the approach I plan to take for Dawn.
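
                                              An import map along those lines might look like the following (a hypothetical sketch; the names, URLs, and versions are made up for illustration):

                                              ```json
                                              {
                                                "imports": {
                                                  "collections/": "https://deno.land/std@0.100.0/collections/",
                                                  "mylib": "https://git.example.com/mylib/raw/v1.2.0/mod.ts"
                                                }
                                              }
                                              ```

                                              Code then imports by the logical name ("mylib"), and only this one file pins the physical location and version.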

                                              1. 2

                                                This strikes me as problematic as well. LibMan in .NET is the same way. npm audit may be flawed, but npm itself at least provides a mechanism for evaluating common dependency chains for vulnerabilities.

                                                Ryan Dahl and Kit Kelly drew the opposite conclusion in their work on Deno. They believe that a central authority for package identity creates a false sense of security and that washing their hands of package identity altogether is the solution. Deno does at least have a registry of sorts for third-party modules, but installation is still URL-based.

                                                1. 1

                                                  Think of it like this. The URI is just a slightly longer-than-usual package name. As a handy side-effect, you can also fetch the actual code from it. There’s nothing stopping you from having your build tools fetch the same package from a different host (say, an internal package cache) using that URI as the lookup key.

                                                  The big benefit is that instead of having to rely on a single source of truth like the npm repository, the dependency ecosystem is distributed by default. Instead of needing to jump through hoops to set up a private package repository it’s just… the git server you already have. Easy.

                                                  1. 4

                                                    The problem is that it’s precisely not just a slightly longer than usual package name. It’s a package name which refers to working web infrastructure. If you ever decide to move your code to another git host, every single source file has to be updated.

                                                    I have nothing against the idea of using VCS for distribution (or, well, I do have concerns there but it’s not the main point). But there has to be a mapping from logical package name to physical package location. I want my source code to refer to a logical package, and then some file (package.toml?) to map the logical names to git URIs or whatever.

                                                    I don’t want to have to change every single source file in a project to use a drop-in replacement library (as happened with the satori/go.uuid -> gofrs/uuid thing in the Go world), or to use a local clone of the library, or to move the library to another git host.

                                                    1. 1

                                                      It’s a package name which refers to working web infrastructure.

                                                      But that’s true about more classical packaging systems, like Cargo. If crates.io goes down, all dependency specifications become a pumpkin.

                                                      It seems to me that Deno’s scheme allows you to have roughly the same semantics as Cargo. You don’t have to point URLs directly at repos; you can point them at some kind of immutable registry. If you want to, I think you can restrict, transitively, all the deps to go only via such a registry. So Deno allows, but does not prescribe, a specific registry.

                                                      To me, it seems not a technical question of what is possible, but rather a social question of what such a distributed ecosystem would look like in practice.

                                                      1. 1

                                                        If you want to complain that Rust is too dependent on crates.io then I agree, of course. But nothing about a Rust package name indicates anything about crates.io; you’re not importing URLs, you’re importing modules. Those modules can be on your own filesystem, or you can let Cargo download them from crates.io for you.

                                                        If your import statement is the URL to “some kind of immutable register” then your source code still contains URLs to working web infrastructure. It literally doesn’t fix anything.

                                                      2. 0

                                                        Well, Go has hard-coded mappings for common code-hosting services, but as a package author you can map a logical module name to a repository location using HTML meta tags. Regarding forks, you can’t keep the same logical name without potentially breaking backwards compatibility.

                                                        1. 3

                                                          The HTML meta tag solution is so ridiculous. It doesn’t actually fix the issue. There’s a hard dependency in the source code on actual working web infrastructure, be it a web front-end for your actual repo or an HTML page with a redirect. It solves absolutely none of the issues I have with Go’s module system.

                                                  1. 1

                                                    Looks interesting. JavaScript isn’t a particularly great language for this, but I definitely love the general idea. I want/hope to build something like this in Dawn, once it’s a bit further along. Eventually, I want to be able to compile the source code down to FPGA or DSP code, though that’s even farther off.

                                                    1. 1

                                                      If by “JavaScript isn’t a particularly great language for this” you mean it isn’t sufficiently performant, the author addresses this on the website:

                                                      Under the hood, Elementary is composed of a wide array of highly optimized, native audio processing blocks. On top, Elementary is built on Node.js, a technology proven across multiple domains for high performance applications.

                                                      I take this to mean that the processing blocks are compiled binaries originally written in some other language like C. JavaScript is just used for the API. It’s a pity the GitHub repo doesn’t include any actual source code, though.

                                                      1. 1

                                                        No, I assumed the actual DSP code was in C++. I just meant that JavaScript isn’t the best language for writing pure functional code. Too much syntax, and a bit awkward.

                                                    1. 7

                                                      TypeScript has been a massive improvement in my JS-related and JS-adjacent work: front end, Deno, React Native tsx, etc. It’s really incredible how much more productive I feel when working on my RN app with great typing across the whole app. The work by the TS team is incredible.

                                                      That said, does anyone else feel like TS is now a solved problem and the new versions are chipping away at the edges? I haven’t seen a feature in quite a while that has made me say “Oh man, finally, that’s great”.

                                                      Performance improvements are always welcome, of course.

                                                      1. 6

                                                        For me, the aliased conditions improvement is a “oh man, finally, that’s great”.

                                                        In terms of the JS ecosystem as a whole, I’m eagerly awaiting Record and Tuple immutable primitives and standardization of decorators.

                                                        1. 1

                                                          Very interesting, thanks for the link!

                                                        2. 5

                                                          I agree on the whole. The one big feature I wish Typescript had is pattern matching along the lines of any other strongly typed functional language, but I imagine its authors will want to wait for the corresponding TC39 proposal to reach some higher stage before designing its TypeScript implementation, assuming it doesn’t stall.

                                                          I also find it difficult to fathom the difference between what the compiler claims can be “trivially” inferred and what cannot be inferred at all. On the one hand, control flow analysis of aliased conditions is a feature that leads me to suspect there are many similar inferences to chip away at. On the other, the language server is already pretty slow and I have to imagine additional inferences come with a performance cost.

                                                          1. 2

                                                            Pattern matching would be great! But yeah, it seems like something to put into JS/ES and then added to TS.

                                                          2. 3

                                                            I’ll be very happy if they keep chipping away at dependent types. For instance, “correlated record types”.

                                                            1. 2

                                                              I feel like I’ve hit that issue without realizing that’s what I wanted to do. Thanks for the link.

                                                          1. 3

                                                            I’m not what you’d call a Mac fan - I haven’t been a regular Mac user since ~2007 (Xeon-based Mac Pro, and boy, was that a machine). But I still do use them from time to time (i.e. when clients issue them to me). So I’ve noticed some of the UX consequences of unification between macOS and iOS. And TBH they haven’t impressed me one jot.

                                                            Take one small example: the Bluetooth menu. Used to be that I could just click anywhere in the enabled line to toggle Bluetooth. Now I have to mouse (well, trackpad, since we’re verbing things) over to precisely where the little slider-toggle is, and click that. This is a meaningful regression in usability, apparently solely so that the UI looks and feels more like iOS.

                                                            I’d love to know what the driving factors are here. Because it feels like usability is being chucked under the bus for something. But it’s not clear to me what that might be.

                                                            1. 3

                                                              My more designer-y friends (and Marco Arment) angrily blame Alan Dye for many seemingly nonsense and objectively terrible UI/UX decisions.

                                                              Seems like a senior designer is trying to make a name for themselves. It’s the only reason that makes sense to me.

                                                              1. 13

                                                                I know this marks me as a grumpy old man, but I miss the days of the Palm, Microsoft and Apple UX guidelines - for the most part, based on empirical user research and user-centric design.

                                                                Some days it seems like the majority of UX research these days is to manipulate users, not empower them - A/B testing of dark patterns in conversion flows, for example.

                                                                Maybe that’s me just being grumpy and old though ;) Or perhaps it’s a reflection of the fact that so often nowadays the user isn’t the customer.

                                                                1. 7

                                                                  I get that people coming at user interfaces from a computer science perspective want to remove subjectivity from design decisions through science. But there are a few problems with empirical user research as the basis for design guidelines:

                                                                  1. The research does not meet scientific standards. Take a look at the work Nielsen Norman did in their heyday of the early 2000s. Heat maps, A/B tests: they all looked impressive and science-y, but they had incredibly small sample sizes and drew conclusions about things like ideal layouts that the data did not justify.
                                                                  2. The research is often divorced from context. People expect user interfaces to look and behave a certain way based on their prior experiences with technology. As technology changes, so do their expectations. Think, for example, about the proliferation of viewport sizes. Remember when almost everyone was using either a 800x600 or 1024x768 screen?
                                                                  3. You can’t empirically research your way into good typography or harmonious color palettes, either. Certainly, science can help us compare the legibility of fonts (which also have mutable, albeit slower moving expectations) or identify accessibility issues. But it can’t tell us how to set up the best grid for the information we’re trying to convey or what color palette best fits a brand.

                                                                  There will always be a subjective element to user interface design. It can be exhausting to keep up with design fashion. But every once in a while, there are designers who create simple but lasting designs that inspire multiple generations. The Vignellis covered this at length in their canon. User interface design needs such designers now, but they need to deeply understand the fluid nature of user interfaces running on a variety of touch and non-touch devices and driven by dynamic data. Much of today’s graphic design is still stuck in a print mindset.

                                                                  1. 1

                                                                    I’ll join you in feeling grumpy about that.

                                                                2. 2

                                                                  I think the design changes were to make iOS apps look less awful on macOS.

                                                                  For some reason, Apple is desperate to bring iOS apps to the Mac. They’ve tried Catalyst, which was a failure, mostly because the iOS design language was too different from macOS’s. Apparently Apple thought it was worth throwing away the very refined Aqua for a thin iOS-like skin just to “fix” this mismatch.

                                                                  Apple also brought iOS apps directly to M1 Macs. They’re still alien with bad UX, and the implementation is half-assed. And now Macs and iPads run the same hardware and bootloader.

                                                                  I suspect the current unpolished macOS design exists only because all of it is just a half-way step towards… I don’t know what. I hope they do :)

                                                                  1. 1

                                                                    For some reason, Apple is desperate to bring iOS apps to Mac.

                                                                    The number of iOS apps dwarfs the number of Mac apps. Convergence means instant and ongoing access to a ton of apps, and makes it more likely for people to buy a Mac. I think it’s pretty obvious that eventually iOS, iPadOS, and macOS will be unified into appleOS and run across all devices.

                                                                    As long as I can still get a shell prompt, I won’t really care…

                                                                    1. 3

                                                                      Unfortunately I disagree: they’re pretty obviously trying to merge as much of the “backend” of the OS as possible, but I don’t think they will merge the frontends entirely for the foreseeable future.

                                                                      Exploiting their monopoly on the app store makes them too much money to bring the macOS frontend features to iOS/iPadOS.

                                                                      Limiting the macOS frontend to the app store would be disastrous for developer market share.

                                                                      Making app store exclusivity be the only difference would be begging for anti trust law to stop them from making money on the app store.

                                                                      As a result they “need” to keep them separate for the foreseeable future, until something happens to upset the status quo. For example Epic winning big in their anti trust case, or some government regulator stepping up and doing the same.

                                                                      1. 3

                                                                        iPadOS was split from iOS quite recently and is diverging, so I’m not convinced.

                                                                        1. 1

                                                                          Only in terms of marketing. iPadOS and iOS are the same codebase with different features enabled/disabled. I think Apple would love it if macOS were the same way.

                                                                  1. 11

                                                                    All that and it’s not mobile friendly.

                                                                    1. 1

                                                                      Looks great in reader mode on my mobile!

                                                                      1. 3

                                                                        Having to switch to reader mode is ironic, though.

                                                                        1. 1

                                                                          The search box is not available in reader mode. If it’s that hard to use one input, imagine a whole form.

                                                                      1. 1

                                                                        Great article. Touches on fragment shaders, linear algebra, and approaches to graphics programming challenges in general. Pity the demo link is broken. Definitely subscribing to this blog.

                                                                        1. 9

                                                                          The best interview I had was at the place I work right now. Absolutely no leetcoding. It was entirely verbal. 4 rounds:

                                                                          • Intro call, discussing the role, compensation, etc.
                                                                          • Chat with the lead engineer about my previous work (somewhat technical)
                                                                          • Chat with my potential teammate—this was technical, but mostly just open ended questions. I interviewed for an SRE role, so “what would you do in this scenario” or “how would you approach this problem”, type of questions. I didn’t prep at all for this. Felt like a friendly chat, really.
                                                                          • Final round—culture fit. My interests, what I think of the product, etc.

                                                                          That said, we also do take-home challenges for software engineering roles and I think that’s perfectly fine, as opposed to whiteboarding. A small task, something that can be solved in about 2–3 hours and not more; a call a few days after to discuss your solution.

                                                                          I’ve seen some companies have a “work day” interview, where you work with the team you’re interviewing for, for a day—starting from the standup call, doing your assigned task, and a review at the end of the day. This is a great way to assess the candidate’s fit with the rest of the team—that’s what ultimately matters at the end, anyway.

                                                                          1. 6

                                                                            That said, we also do take-home challenges for software engineering roles and I think that’s perfectly fine, as opposed to whiteboarding.

                                                                            I hate take home assignments with a passion. They’re so much worse than whiteboarding.

                                                                            Either they’re so small they don’t show any more than a whiteboard session would, or they’re massively disrespectful of my time. And because the interviewer isn’t in the room with the candidate, it’s easy to do the latter: the time investment is one sided.

                                                                            Many good candidates with jobs, families, and other commitments don’t have a day to be screened by every company they want to talk to.

                                                                            1. 5

                                                                              Basically, I wish companies offered both options. I get really nervous during interviews and would rather just spend a few hours beforehand working on the assignment, so I can come to the interview confident and ready to talk about my solutions. As long as the scope is reasonable I don’t mind it too much. For other candidates who might not have time, there should definitely be a self-contained option.

                                                                              1. 3

                                                                                I have gone through home assignments twice in my career. Both companies asked me to bill my time and paid it at a fair rate. I think that is the only correct way to do home assignments. Others should be rejected on the spot.

                                                                                I spoke to a company at the beginning of this year that was building an open source product. Their assignment was basically “pick one of these GitHub issues and send us a PR.” The issues were somewhat trivial - fixes in config file parsing [1] - mostly aimed at showing you could find your way around the code. It looked like half a day’s work. I rejected the company for other reasons and ended up not doing the assignment, but I liked the setup nonetheless.

                                                                                Many good candidates with jobs, families, and other commitments don’t have a day to be screened by every company they want to talk to.

                                                                                Those were not screens - I had gone through the screening steps beforehand. I agree that having day-long screens is terrible. Unfortunately the alternative isn’t much better. I’ve gone through “two hour” coding/algorithms screening tasks and, if you add the prep for them, you can easily land at half a day’s work. Yes, I hate HackerRank “exercises”.


                                                                                [1] I know many people are reluctant to do “free” and “real” feature work for a company that they have not started actually working for, but in this case it was obviously not something that was a core product feature, and, as I already mentioned, was paid in full. If you are asked to do free work, run away and never look back.

                                                                                1. 1

                                                                                  I think that is the only correct way to do home assignments.

                                                                                  If they’re asking you to complete real company-related tasks I agree. However, what if they’re asking you to do the type of meaningless problems you might find in an interview? Suppose we’re past the screening stage. Normally candidates aren’t paid for their time in a regular onsite interview, beyond travel accommodations, so I don’t see why it should be any different for a project done at home.

                                                                                  1. 4

                                                                                    Normally candidates aren’t paid for their time in a regular onsite interview,

                                                                                    Normally an onsite interview costs the company engineer hours. A take home interview costs the company nothing. When interviewing costs nothing, the tendency is to throw shit at the wall and see what sticks. This isn’t theoretical – I’ve seen people say “we don’t have the bandwidth for interviewing, let’s do take-home assignments before we bring them in.”

                                                                                    The money isn’t enough to make a difference in my life - but it’s a signal that my time’s not being wasted.

                                                                                    They’re still an interview format I find to be an unpleasant time sink compared to whiteboarding, but that’s a personal preference. If a company is paying, it at least indicates they’re trying to be thoughtful.

                                                                                    1. 1

                                                                                      A take home interview costs the company nothing.

                                                                                      That is not true at all in my experience. The company doesn’t have someone sitting in the room with you, so the time cost is invisible to you as a candidate, but someone has to evaluate your code after you submit it. I did a bunch of that at my last job and doing a thorough code review including coming up with followup questions typically took more of my time than an in-person interview slot would have. And we always had at least two reviewers for each submission.

                                                                                      1. 1

                                                                                        and doing a thorough code review

                                                                                        Replace ‘thorough’ with ‘superficial’ for all but the few applications you like the most, and you’ve got the approach I’ve generally seen taken (or advocated) with take-home interviews.

                                                                                        There are certainly places where this isn’t the case. Paying for the interview is a way to convince me that you’re one of those places.

                                                                                        1. 2

                                                                                          We do blind reviews. No resume. Just whether it’s product or platform, as those tests are different. I always read every word written and often take the time to figure out how much work it would take to get it working. I approach it much like a random GitHub project or PR. Is this something I can build off of? Can I drop it in and use it? Or is it mostly there and I can quickly fix a bug? On the other end of the spectrum, do I have to do most of the problem to get it working? Do I have trouble understanding how to even get started running or even reading the code?

                                                                              2. 1

                                                                                I see you’re working on some open source projects. Out of curiosity, what proportion of interviewers ask you to walk them through your contributions?

                                                                                1. 2

                                                                                  It came up at most of the smaller companies.

                                                                                  However, the open source code I write is also not usually directly relevant to the work that I would be doing (intentionally, for a number of reasons, including being in a small niche, avoiding non-compete issues and burnout).

                                                                                  The better interviews I’ve done have been a mix of whiteboarding system architecture, and pairing on debugging and writing a project.

                                                                                  But the thing that really makes interviews fun is having a competing offer in hand. Highly recommend it as a stress reduction strategy.

                                                                              3. 1

                                                                                Chat with my potential teammate—this was technical, but mostly just open ended questions.

                                                                                This approach works well for me both as an interviewer and as an interviewee when coupled with a code walkthrough. E.g. Walk us through a recent project. What does it do? How’d you start it? What was the hardest part to write? What did you learn? What would you do differently? What were the most frustrating limitations of the tools you used? Many of these questions lead to follow-up questions and then settle into an illuminating, non-adversarial conversation. Open source or otherwise unrestricted code is preferable, but a take home assignment suffices as a substitute.

                                                                                1. 1

                                                                                  A small task, something that can be solved in about 2–3 hours and not more; a call a few days after to discuss your solution.

                                                                                  This was part of our interview process as well. After a brief phone interview we would have the candidate log in to a remote system (this was all on clean VMs). You would do this alone. Then we’d call back and have the candidate walk through the code that they wrote, explaining their solution and the design choices made. This was for a web-based software development shop, so we had front end / back end specific tasks.

                                                                                1. 5

                                                                                  I’ve seen the problem of code-formatting commits messing up git histories play out across multiple codebases and languages. It makes Unison’s idea of append-only repos with separate definition files containing their pure syntax trees really compelling.

                                                                                  1. 2

                                                                                    It’s an interesting idea, but it won’t really cover stuff like renaming Id to ID, or changing if foo to if !foo “for clarity”, and all sorts of other kinds of pointless changes I’ve seen people make.

                                                                                    1. 5

                                                                                      Unison’s definition files contain references to hashes of the syntax tree instead of names. This keeps the names of things separate from what they are. So the former case is covered, but the latter (where two syntax trees are logically identical but structurally different) is not.

                                                                                      1. 2

                                                                                        changing if foo to if !foo

                                                                                        Aren’t you inverting the logic on that one?

                                                                                        all sorts of other kind of pointless changes

                                                                                        Not sure if this is what you meant, but fwiw I don’t consider making changes to enforce consistent style pointless.

                                                                                        1. 3

                                                                                          It’s just a little example of reorganizing logic (i.e. inverting the branches or some such). I didn’t feel like typing out an entire code block so I hoped that would be clear, but I guess it wasn’t 😅

                                                                                          I don’t consider making changes to enforce consistent style pointless.

                                                                                          I don’t really agree; I mean, if it’s one change on some code you’re already touching: sure, why not, if it’s not too invasive. But I’ve seen people go through entire code bases for weeks with changes like this, resulting in hundreds of changes that improve very little, or can even regress things and introduce bugs.

                                                                                          I’m probably one of the more consistent programmers, at least among the people I’ve worked with over the years. I don’t understand why people can’t just be consistent; it’s not that hard, people.

                                                                                          But at the same time, it’s really not all that important. There are about 100 things more important.

                                                                                          1. 5

                                                                                            Yeah, I’ll make the opposing case, but the specifics of how it’s done really matter. The costs of inconsistency became clear to me over the last few years working at my current company, where we have a code base shared among multiple teams.

                                                                                            First, re:

                                                                                            There are about 100 things more important.

                                                                                            In the narrow, immediate sense, you are obviously right. The thing is that there are non-immediate social costs. People will argue about this stuff. There will be comments in PRs. The same discussions will be had over and over in different ways, among different people, and with every new hire.

                                                                                            why people can’t just be consistent; it’s not that hard people.

                                                                                            Some people just are consistent and do care, and some people don’t. Value judgements aside, that’s just how it is, empirically. And even the consistent people have different preferences from one another. This is the genius (and motivation) of gofmt: end all discussion; formatting decisions are automated.

                                                                                            It’s really, imo, the only solution for any language: agree on a style guide, and automate enforcement as much as possible in your CI pipeline. And then bring your current codebase up to date, with the kind of commits we’re talking about here. The long term benefits are really big, and they really do matter when social costs are factored in.

                                                                                            1. 5

                                                                                              gofmt still leaves plenty of room for discussion; it says nothing about naming, line length (mostly), shadowed variables, and a host of other things. It only resolves a few minor issues that no decent programmer really cares about IMHO (it really doesn’t matter where the braces are).

                                                                                              That was my point with my first comment: you can automate fairly simple things like brace placement and whitespace, but there are far more things you can’t really automate. All of what I described earlier was actually at a Go shop. There was an extensive discussion about whether there should be a blank line between if err != nil and the statement that assigned the error, for example. And then we had the “panicIf()” guy.

                                                                                              I do love gofmt btw, but mostly so I can just write if foo==""{bar()}, save, and have it formatted correctly. It’s a good productivity booster.

                                                                                              The long term benefits are really big, and they really do matter when social costs are factored in.

                                                                                              I can’t say I really see the benefits, and certainly wouldn’t call them “really big”. Unless the formatting is a complete mess and/or crazy (e.g. indentation that’s just wildly off, 4 statements on a single line, etc.) - which is a different matter - I don’t think it really matters all that much, to be honest. I’ve worked with plenty of inconsistent codebases over the years, and never really struggled with that aspect specifically (even when I was much more junior).

                                                                                              It’s mostly just making things look nice, which is nice, but in my experience the benefits are slim to non-existent. Certainly not “really big”.

                                                                                              I think there probably is a correlation between “consistent and well formatted code” and “good code in general” (i.e. good architecture, tests, sane logic, documentation, etc.), and I think people are conflating these things. You can’t really measure or quantify any of the much more important things, but you can get a 100% score on a linter tool. A good example is one I happened to mention on HN yesterday:

                                                                                              Once they showed me the “best” code and it was a mess. Okay, that’s not a show-stopper as there could be perfectly valid reasons for that. Then they showed me the “worst” code and it was actually a bit better, but the guy said “yeah, it’s not good; we’re missing public/private on a lot of classes”.

                                                                                              1. 3

                                                                                                gofmt still leaves plenty of room for discussion; it says nothing about naming, line length (mostly), shadowed variables, and a host of other things.

                                                                                                Agreed, it would be nice if it did even more.

                                                                                                It’s mostly just making things look nice, which is nice, but in my experience the benefits are slim to non-existent. Certainly not “really big”.

                                                                                                Again, the benefits are not so much in the code itself as in avoiding the social bikeshedding and all the associated time wasters. And I’d happily take what were imo sub-optimal choices just to have all such discussions done away with. “We’re doing stuff this way; now let’s move on.”

                                                                                                I think there probably is a correlation between “consistent and well formatted code” and “good code in general”

                                                                                                I’m glad you brought this up; I meant to and then forgot. I think this is a key point, because when I hear the argument you’re making (“there are more important things than this”), it’s often (practically speaking) an excuse for being sloppy. I’m not accusing you of that, and you’ve already said you are yourself consistent/neat, but the argument can be a smokescreen.

                                                                                                Also, the argument seems to posit some mythical programmer whose priorities are so honed that they don’t waste time on consistent naming or “pretty” formatting, yet nevertheless spend all their energy on important decisions, like (say) good high-level architecture, excellent performance, and correctness. Their code might be messy, but in every way that matters it’s exemplary. I mean, I just don’t buy it. People don’t work like that. In rare cases, maybe? But I’ve literally never come across it. On the other hand, sloppiness in little things is ime a strong signal that code will have problems that do matter.

                                                                                                Ok, you might say, but getting people to care about formatting isn’t going to magically fix those important things. And that’s true. But working in a culture where quality at every level is valued is a signal people will pick up on, and respond to. It sets up incentives for good, careful work, whereas “just letting it go” sets up the reverse incentives.

                                                                                                1. 1

                                                                                                  gofmt still leaves plenty of room for discussion; it says nothing about naming, line length (mostly), shadowed variables, and a host of other things.

                                                                                                  Agreed, it would be nice if it did even more.

                                                                                                  I’m not sure what more it could reasonably do without also introducing a lot of constraints that would be quite limiting?

                                                                                                  Also, the argument seems to posit some mythical programmer whose priorities are so honed that they don’t waste time on consistent naming or “pretty” formatting, yet nevertheless spend all their energy on important decisions

                                                                                                  Not at all; it’s just that obsessing over these things is a bit silly. And priorities do matter; resources are always finite, and it can make a huge difference how you spend those finite resources.

                                                                                                  On the other hand, sloppiness in little things is ime a strong signal that code will have problems that do matter.

                                                                                                  Yeah exactly: these things are correlated but there is no direct causal relationship. And even this correlation is far from absolute as I’ve also seen messy codebases that were actually pretty good overall.

                                                                                                  But some people act like there is a causal connection, and this is not the case.

                                                                                                  1. 1

                                                                                                    It’s unclear to me if we disagree when the rubber meets the road. Practically, my position is:

                                                                                                    1. You should have a style guide and follow it. Enforce everything you can in your CI. You can’t answer everything, but you document decisions about what people spend time talking or arguing about, so that doesn’t become a perpetual time-waster.
                                                                                                    2. When new things come up, don’t shrug them off, but come to a consensus and document your decision. Everyone knows “the way we do things here.” You won’t be able to do this perfectly, but you can do it pretty well.

                                                                                                    If your position is that none of those things are worth the bother, then yeah, we just disagree on that point. Otherwise, what is your alternative recommendation? If it’s “c’mon, enough talk, everybody just be reasonable” – then my experience says that does not work.

                                                                                                    1. 1

                                                                                                      You should have a style guide and follow it. Enforce everything you can in your CI. You can’t answer everything, but you document decisions about what people spend time talking or arguing about, so that doesn’t become a perpetual time-waster.

                                                                                                      As long as you stick to the basic stuff, I guess? If your style guide becomes more than a page: meh. I don’t think it’s super important. There are a few key issues (tabs vs. spaces, camelCase vs. snake_case) that are useful or important to settle, but this is a very short list and beyond that 🤷

                                                                                                      And there are serious downsides too; before you know it you end up with silly stuff like:

                                                                                                      print("Some sentence the linter thinks is too long by one"
                                                                                                            "character")
                                                                                                      

                                                                                                      And/or # nolint comments all over the place. That’s absolutely not better.

                                                                                                      Never mind the hours of lost time just to make some linter in the CI happy. How does that compare to the lost time from slightly inconsistent formatting? Probably not in favour of the CI-lint approach.

                                                                                                      Does it improve the code base? I guess, a little. But at what cost? It’s a good example of the politician’s fallacy: “this is horrible, we must do something. This is something. Therefore, we must do this.” Sometimes the solution is worse than the problem.

                                                                                                      Certainly in a company setting the entire thing is a bit of a quick fix to reduce the power of a small number of toxic assholes who will insistently try to hammer through stupid pointless style changes “because it’s not my favourite way!” This can be dealt with much more easily by telling these people to stop being such assholes, or firing them if they don’t (because almost invariably these people will be difficult to work with in general).

                                                                                                      When new things come up, don’t shrug them off, but come to a consensus and document your decision. Everyone knows “the way we do things here.” You won’t be able to do this perfectly, but you can do it pretty well.

                                                                                                      95 times out of 100 you can just see “how things are done here” by looking at the existing code, certainly when it comes to style issues, and a wee bit of inconsistency really isn’t that big of a deal.

                                                                                                      If it’s “c’mon, enough talk, everybody just be reasonable” – then my experience says that does not work.

                                                                                                      If someone is trying to force through some minor inconsequential thing then this is very much a valid response IMHO. And if it doesn’t work then it says a lot about those people.

                                                                                    1. 6

                                                                                      The title is clickbaity, but the substance sets a good bar for maturity in a software engineer. I have encountered engineers who hoarded information and essential tools, thereby holding their employer hostage and inducing varying degrees of paralysis upon their departure. To work with such people is to suffer. To clean up after them is despair.

                                                                                      1. 3

                                                                                        As of this writing, tail-call elimination is in the ES6 spec for JavaScript

                                                                                        The way I remember it, tail-call elimination was added to the ES6 spec in 2015 and Google implemented it in Chrome, but then reverted the change due to problems it caused for developers: https://v8.dev/blog/modern-javascript#proper-tail-calls.

                                                                                        1. 2

                                                                                          Thanks for the link! I hadn’t heard why Chrome put TCO implementation on hold nor about their syntactic tail calls proposal. Interesting.

                                                                                          1. 4

                                                                                            It’s slightly worse than that link makes it appear. There are two things that make tail-call elimination difficult to implement correctly:

                                                                                            The first is when the calling convention imposes stack-cleanup requirements that conflict with tail calls. For example, if the callee is required to clean up the stack, then the function doing the tail call can set up the new callee’s argument frame and everything is fine. If the caller is responsible for cleaning up the stack, then you can only tail-call functions that require the same amount of stack space or less for arguments. This is what makes tail-call elimination difficult in C, because supporting variadic functions means that the caller must be responsible for cleaning up the stack (the callee doesn’t know how many arguments there are and so doesn’t have enough information to do the cleanup). This should not be a problem for JavaScript, because the JIT is in complete control of the calling convention and can simply pick one that is amenable to the optimisation (technically, all JavaScript functions are variadic, but in practice the JIT can optimise for the cases where the caller and callee agree on the argument count and keep a slow path for the first time it encounters a mismatch).

                                                                                            The second problem is more common in higher-level languages: does your language expose any details of the call stack to introspection? In Smalltalk, there isn’t a stack in the abstract machine, there is a linked list of garbage-collected activation records. Closures capture the activation record where they were created and so can outlive the call frame. More importantly (in this context), the current activation record is exposed to the language as thisContext and you can write code that inspects this and follows the chain to go and inspect all of the higher-level activation records. This lets you implement complex exception-handling logic entirely in a library, for example. The Chrome link talks about the fact that JavaScript exceptions capture the stack, which makes tail-call elimination painful for debugging. In non-standard (but widely supported, though deprecated) JavaScript, there is also a caller property on each function, which contains a pointer to the most recent caller on the stack. I don’t know if this is the only other way of doing stack introspection in JavaScript, but even the simple thing of throwing and catching an exception in the same function and then using it to inspect the captured stack (as hinted at in the Chrome proposal) means that it’s very difficult to implement tail-call elimination in a way that doesn’t alter the programmer-visible language semantics.
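                                                                                            Both forms of introspection mentioned above are easy to see directly (a small sketch, run in Node with sloppy mode assumed; the function names are made up):

```javascript
// Two ways running JavaScript can observe the call stack; tail-call
// elimination would change what both of them see.
function inner() {
  let viaCaller = null;
  try {
    viaCaller = inner.caller; // non-standard; accessing it throws in strict mode
  } catch (_) {}
  // The stack captured when an Error object is created.
  return { viaCaller, viaError: new Error().stack };
}

function outer() {
  return inner(); // with tail-call elimination, outer's frame would be gone
}

const frames = outer();
console.log(frames.viaError.includes("outer")); // true: outer is still on the stack
```

If `outer`’s tail call were eliminated, its frame would vanish from both views, which is exactly the observable semantic change being discussed.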

                                                                                            In general, a language has to pick one of tail-call elimination and stack introspection. C/C++ are in a somewhat interesting middle ground where the mechanism for stack unwinding and the presence of tail-call elimination are both implementation-defined, and so any given compiler / runtime can pick the one it wants.
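                                                                                            As an aside, since the optimisation never shipped widely, JavaScript code that wants unbounded tail recursion usually trampolines by hand. A minimal sketch (the names `trampoline` and `step` are invented for illustration):

```javascript
// A tail-recursive sum; without tail-call elimination each call
// consumes a stack frame, so a large enough n overflows the stack.
function sumRec(n, acc = 0) {
  if (n === 0) return acc;
  return sumRec(n - 1, acc + n); // a proper tail call
}

// A trampoline: the recursive step returns a thunk instead of calling
// itself, and a driver loop bounces until a non-function value comes back.
function trampoline(fn) {
  return (...args) => {
    let result = fn(...args);
    while (typeof result === "function") result = result();
    return result;
  };
}

const sum = trampoline(function step(n, acc = 0) {
  if (n === 0) return acc;
  return () => step(n - 1, acc + n); // defer instead of recursing
});

console.log(sum(1000000)); // 500000500000, with constant stack depth
```

The trade-off is the same one in the thread: the trampolined version no longer has the intermediate frames on the stack, so stack introspection sees much less.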

                                                                                        1. 3

                                                                                          Am I the only one worried that ReactJS is becoming an operating system? The new “transition” stuff looks much like Linux’s nice. I wonder whether ReactJS (or Facebook) is hacking its way to fame with a lot of “it is fast and it needs to be fast” to attract developers, when in fact it does nothing to help people write good code that is fast.

                                                                                          I mean, the diff + patch algorithm and the declarative approach to GUIs are already great. I am not sure a “scheduler” is necessary to make applications go fast.

                                                                                          ref: https://github.com/reactwg/react-18/discussions/41
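                                                                                          The diff + patch idea itself is tiny. A toy sketch, to show what I mean (plain objects standing in for DOM nodes; this is an illustration of the idea, not React’s actual algorithm):

```javascript
// Minimal virtual-DOM-style diff: compare two trees of plain
// { tag, props, children } objects and emit patch operations.
function diff(oldNode, newNode, path = []) {
  if (oldNode === undefined) return [{ op: "create", path, node: newNode }];
  if (newNode === undefined) return [{ op: "remove", path }];
  if (oldNode.tag !== newNode.tag) return [{ op: "replace", path, node: newNode }];

  const patches = [];
  // Emit a setProp for every property whose value changed.
  const keys = new Set([...Object.keys(oldNode.props || {}),
                        ...Object.keys(newNode.props || {})]);
  for (const key of keys) {
    if ((oldNode.props || {})[key] !== (newNode.props || {})[key]) {
      patches.push({ op: "setProp", path, key, value: (newNode.props || {})[key] });
    }
  }
  // Recurse into children by position (no keyed reordering here).
  const len = Math.max((oldNode.children || []).length, (newNode.children || []).length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff((oldNode.children || [])[i],
                         (newNode.children || [])[i],
                         [...path, i]));
  }
  return patches;
}

const a = { tag: "ul", props: {}, children: [{ tag: "li", props: { id: "x" }, children: [] }] };
const b = { tag: "ul", props: {}, children: [{ tag: "li", props: { id: "y" }, children: [] }] };
console.log(diff(a, b)); // one setProp patch at path [0]
```

The expensive part in practice is not this loop but running it over very large trees on every update, which is what the scheduling work is trying to mitigate.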

                                                                                          1. 4

                                                                                            I am not sure a “scheduler” is necessary to make applications go fast.

                                                                                            What, apart from scrapping the virtual DOM, do you believe is necessary to make applications go fast in React? The React folks have been talking about scheduling and other performance improvement concepts for years, and with good reason. For large quantities of nested components, virtual DOM diffing can be very expensive.

                                                                                            Imagine an application with a search feature. The search input, filter, and sort features update the results as they’re changed. Each row contains some mixed HTML content and some buttons that do various interactive things. There’s also a reorder feature that lets the user update the items’ indices in their array by dragging and dropping them into new positions. This is just one example of where some frameworks really bog down, both in the initial rendering and in subsequent updates.

                                                                                            I mean, diff + patch algorithm and declarative approach to GUI is great already.

                                                                                            Conceptually, I love this approach too. Rendering a new virtual DOM rather than mutating the existing one has (at least prior to hooks) forced developers to familiarize themselves with immutable data structures and think more declaratively about component rendering. However, React is responding to real performance concerns. They’re also responding to the competition, some of which, like Vue, take a moderately different approach to their data binding architecture by using proxies to intercept data mutations and trigger a more selective kind of DOM reconciliation. Some, like Elm and Svelte, take a radically different approach by pre-compiling DOM updates. All of these approaches yield significant improvements over React.
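                                                                                            The proxy-based approach can be sketched in a few lines (a toy version under my own naming, nothing like Vue’s real dependency-tracking implementation):

```javascript
// Toy proxy-based reactivity: wrap state in a Proxy and re-run
// subscribed effects only when a property they actually read changes.
function reactive(target) {
  const subscribers = new Map(); // property name -> Set of effects
  let activeEffect = null;

  const proxy = new Proxy(target, {
    get(obj, key) {
      if (activeEffect) {
        if (!subscribers.has(key)) subscribers.set(key, new Set());
        subscribers.get(key).add(activeEffect); // record the dependency
      }
      return obj[key];
    },
    set(obj, key, value) {
      obj[key] = value;
      (subscribers.get(key) || []).forEach((fn) => fn()); // notify readers
      return true;
    },
  });

  // Run an effect once, recording which properties it reads.
  function watch(fn) {
    activeEffect = fn;
    fn();
    activeEffect = null;
  }

  return { proxy, watch };
}

const { proxy: state, watch } = reactive({ count: 0, other: 0 });
let renders = 0;
watch(() => { renders += 1; void state.count; });

state.count = 1; // re-runs the effect: it read `count`
state.other = 9; // does not: the effect never read `other`
console.log(renders); // 2
```

The point of the design is visible even in the toy: mutations only re-run the code that depends on them, instead of re-diffing everything.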

                                                                                          1. 2

                                                                                            There’s something disingenuous about the author’s line of reasoning, which goes something like this:

                                                                                            1. If you’re complaining about type coercion in JavaScript or using strict equality (===) by default, you’re wrong.
                                                                                            2. Type coercion can be good sometimes, e.g.: '' == false. (His opinion, not mine).
                                                                                            3. Type coercion can be bad sometimes, e.g.: '' == 0. (Again, his opinion.)
                                                                                            4. Instead of using strict equality, just change the language… but maybe not because such changes have a “1e-9” chance of being accepted by TC39.
                                                                                            5. OK, the actual solution is for someone else to create a language called ProperScript that compiles to JavaScript and is the same as JavaScript with the addition of a "use proper" pragma that redefines type coercion to suit his opinion of how type coercion ought to work.
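                                                                                            For reference, the coercion results being argued about are easy to verify in any JavaScript console:

```javascript
// The loose-equality results the author labels good and bad:
console.log('' == false); // true  ('' and false both coerce to the number 0)
console.log('' == 0);     // true  (same coercion, which he calls bad)
console.log('0' == 0);    // true  ('0' coerces to 0)
console.log('0' == '');   // false (two strings: compared without coercion)

// Strict equality sidesteps all of it by never coercing:
console.log('' === false); // false
console.log('' === 0);     // false
```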

                                                                                            This is not the first time I’ve encountered a blog article or a tweet by this author with an abrasive presumption of ignorance on the part of the reader. How convenient that our collective malady can be ameliorated with the purchase of his book.