1. 23

What are some of the new best practices/tooling (not specific products necessarily, but concepts that specific products implement) that you’ve used and enjoyed having in the past year or two? I understand this question is fairly broad.

I’m coming at this with a mostly SOA-style, web-services-company mindset, but please chime in even if you have something from a completely different arena.

For example, two things that we did over the past couple of years in our organization that I really came to love are a widespread, enforced adoption of 1) the circuit-breaker pattern for communicating with network resources, courtesy of Netflix’s Hystrix library, and 2) strict Swagger API documentation (there must be a bijection between Swagger endpoints and real endpoints).
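
To make the first one concrete: the heart of the pattern is a little state machine around every remote call that counts consecutive failures, trips “open” after a threshold and fails fast, then lets a probe call through after a cool-down. Hystrix itself is Java and does far more (thread pools, fallbacks, metrics); the hypothetical minimal sketch below is written in Haskell to keep this thread to one language, and a real version would also need thread-safety (atomicModifyIORef or an MVar).

    {-# LANGUAGE ScopedTypeVariables #-}
    -- Hypothetical minimal circuit breaker; this is a sketch, not Hystrix.
    import Control.Exception (SomeException, throwIO, try)
    import Data.IORef (IORef, newIORef, readIORef, writeIORef)
    import Data.Time.Clock (NominalDiffTime, UTCTime, diffUTCTime, getCurrentTime)

    data BreakerState
      = Closed Int    -- consecutive failures seen so far
      | Open UTCTime  -- when the breaker tripped

    newBreaker :: IO (IORef BreakerState)
    newBreaker = newIORef (Closed 0)

    -- Wrap a remote call: fail fast while open, trip after maxFailures
    -- consecutive failures, and allow one probe call once coolDown has elapsed.
    withBreaker :: IORef BreakerState -> Int -> NominalDiffTime -> IO a -> IO a
    withBreaker ref maxFailures coolDown action = do
      state <- readIORef ref
      now   <- getCurrentTime
      case state of
        Open trippedAt
          | now `diffUTCTime` trippedAt < coolDown ->
              throwIO (userError "circuit open, failing fast")
        _ -> pure ()  -- closed, or half-open probe after the cool-down
      result <- try action
      case result of
        Right a -> writeIORef ref (Closed 0) >> pure a
        Left (e :: SomeException) -> do
          prior <- readIORef ref
          let failures = case prior of
                Closed n -> n + 1
                Open _   -> maxFailures  -- the probe failed; stay open
          trippedAt <- getCurrentTime
          writeIORef ref $
            if failures >= maxFailures then Open trippedAt else Closed failures
          throwIO e

Callers then wrap each network client in something like withBreaker breaker 5 30 (callInventoryService req), where the threshold of 5 failures, the 30-second cool-down, and callInventoryService are all invented for the example.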

Thanks!

  2. 18

    A few off the top of my head:

    Monorepos (this one has been such an unalloyed victory that I am still amazed)

    Libraries > networked APIs (abstraction barriers belong in the language, not your transport layer)

    Doing frontend and backend in the same typed functional language (Haskell)

    Sharing types and serialization code between frontend and backend (sketched below, after this list)

    Using TAGS files instead of live (often slow/fragile) IDE servers for goto-definition functionality.

    Using a tags generator that understands your dependencies and makes goto-definition work for those too

    Using a SQL client library that lifts SQL syntax into your language and type system natively

    Using a SQL client library that enforces type-safety for primary keys, fields, and the like

    Using a web framework that enforces type-safety for URI generation and links

    Lenses and prisms for featherweight JSON/XML tests that don’t involve reading large blobs of data (also sketched after this list)

    Build tool that manages your package set (dep versions) and compiler version as a coherent snapshot

    Adding -Werror to our build flags for Haskell projects
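
    Since a couple of those are easier to show than to describe, two quick sketches (module, type, and field names are all invented, but this is the shape of it). Sharing types and serialization code comes down to a module that both the GHC backend and the GHCJS frontend compile, so the wire format is defined exactly once:

        {-# LANGUAGE DeriveGeneric #-}
        -- Hypothetical shared module: compiled by both the backend (GHC) and the
        -- frontend (GHCJS), so both sides agree on the JSON encoding by construction.
        module Shared.Types where

        import Data.Aeson (FromJSON, ToJSON)
        import GHC.Generics (Generic)

        data User = User
          { userId    :: Int
          , userEmail :: String
          } deriving (Show, Eq, Generic)

        instance ToJSON User
        instance FromJSON User

    And the lens/prism point is about tests reaching into a JSON blob for just the field under scrutiny instead of comparing whole documents; with lens-aeson that looks roughly like:

        {-# LANGUAGE OverloadedStrings #-}
        import Control.Lens ((^?))
        import Data.Aeson.Lens (key, nth, _String)
        import qualified Data.ByteString.Lazy as BL
        import Data.Text (Text)

        -- Invented field path: the test asserts on this one field
        -- rather than on the whole response body.
        primaryEmail :: BL.ByteString -> Maybe Text
        primaryEmail body = body ^? key "user" . key "emails" . nth 0 . _String

    An assertion then reads primaryEmail response == Just "a@example.com", with no fixture blob to wade through.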

    1. 4

      I love using a monorepo at work; being able to read the code of other teams (especially when integrating) is a massive bonus :)

      I’m unsure about your second point: do you mean that libraries are better than networked APIs, or that you’ve moved from libraries to networked APIs?

      1. 2

        Don’t put something behind a networked API when it could be a library.

        1. 1

          I see, thanks :)

      2. 3

        Monorepos (this one has been such an unalloyed victory that I am still amazed)

        Can you go into this one a bit more? I assume you are talking about giant repos (like we hear Google uses) with all first-party AND third-party code imported into them. I would love to hear more about what results you have seen with this.

        1. 3

          Nah, just first-party; we use Stack’s git support for third-party vendoring, when vendoring is needed. One exception is when we occasionally patch GHCJS: it’s not part of our git repo, but it gets yanked into the tree sometimes.

          The repo itself is a git repository on GitHub.

          There’s one stack.yaml that resolves package versions for the entire company’s GHC Haskell projects.
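
          For a sense of its shape, the file is roughly the following (paths, resolver, and repository are invented, and the syntax is the current Stack one):

              # One snapshot pins the GHC version and a coherent set of package versions.
              resolver: lts-18.28

              # Every first-party project in the monorepo (paths invented).
              packages:
              - backend
              - frontend
              - shared-types

              # Third-party vendoring via Stack's git support, only where needed.
              extra-deps:
              - git: https://github.com/example/patched-dependency
                commit: 0123456789abcdef0123456789abcdef01234567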

          Results…not a lot to say other than it’s really nice having changes be one atomic PR, including for frontend where necessary. Do you have any specific questions?

          1. 2

            No specific questions I guess, but that was some good info – Thanks!

        2. 4

          Adding -Werror to your build flags

          This is a very bad practice if you actually “ship” your programs. The suckless-software ebuilds in Gentoo, for example, can have arbitrary CFLAGS passed to them by the user. If we actually enforced -Werror, the builds would very often fail, because gcc often just prints warnings when it compiles gotchas for different architectures and other things. Write stable code, and if you like, run -Wall -Wextra and go through all the warnings, but -Werror is a really, really bad decision if you ship your code and want things to just work. Think of it this way: 99% of the warnings are only useful when you are developing the software, not when you are compiling releases.

          People who push -Werror as a “best practice”™ are, most of the time, those who don’t have the discipline to go through the warnings and actually fix the mistakes they are making. But maybe you were just referring to the case where you are developing the program itself and really need strict warnings in place.

          1. 4

            It’s not C/C++; it’s for a Haskell web app we deploy privately, ourselves.

            It’s just a way to enforce warnings-clean discipline, and unlike C/C++, you don’t really get warnings unless something really should just get cleaned up.

            1. 2

              Doesn’t really matter, though; the point is that upgrading the compiler shouldn’t break the build unnecessarily. New warnings are fine, but breaking the build is like a spanner in the works rather than a speed bump.

              1. 9

                Nobody in the Haskell community recommends making it part of your build flags for a library or anything other people will build. Those who use it in dev hide it behind a dev config flag.

                I don’t disagree with the point in its appropriate context, but I’m irritated that my comment is getting hijacked to beat up a straw man.
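
                For what it’s worth, the usual shape of that is a manual Cabal flag that only developer builds turn on, something like this (the flag and package names are invented):

                    -- Hypothetical flag stanza; only developer builds enable it.
                    flag werror
                      description: Fail the build on any warning (developer builds only)
                      default:     False
                      manual:      True

                    library
                      ghc-options: -Wall
                      if flag(werror)
                        ghc-options: -Werror

                Developers build with stack build --flag my-package:werror; anyone else building the package never sees -Werror.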

        3. 6

          I feel like this will surprise nobody, but then I realise that it would have surprised me two years ago, so here are mine:

          • GitHub PRs; rigorously tidied and rebased before review; integrated with our CI tool (Jenkins) and task tracker (Jira).
          • Related: rigorous review (GitHub tooling again).
          • Pairing (I work remotely; Screenhero is fabulous). Pairing during review, especially when it gets interesting.
          • For iOS dev, Swift is bringing in a bunch of ideas that functional folk will recognise. I’ve particularly benefitted from making immutability a design goal (wherever possible).
          • RSpec-style tests with arbitrarily nested environments, and tests as small and expressively named as possible (a sketch of this style follows the list).
          • Writing tests so that someone else can read them, understand what they’re asserting, and agree that the assertion is business-logic-correct. (This seems simple and obvious. It isn’t!)
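
          A sketch of that nested, reads-like-a-spec style, using hspec (Haskell’s RSpec-inspired library, to stick to one language in this thread); the Invoice type is invented for the example:

              import Test.Hspec

              -- Invented domain type, just to have something concrete to assert about.
              data LineItem = LineItem { priceCents :: Int, discountPercent :: Int }
              newtype Invoice = Invoice [LineItem]

              total :: Invoice -> Int
              total (Invoice items) =
                sum [ priceCents li * (100 - discountPercent li) `div` 100 | li <- items ]

              main :: IO ()
              main = hspec $
                describe "Invoice.total" $ do
                  context "with no line items" $
                    it "is zero" $
                      total (Invoice []) `shouldBe` 0
                  context "with a line item discounted by 10%" $
                    it "charges 90% of the list price" $
                      total (Invoice [LineItem 10000 10]) `shouldBe` 9000
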
          1. 2
            • Forcing constraint validation into the database (sketched after this list); I’ve written more about this here: https://kev.inburke.com/kevin/faster-correct-database-queries/

            • Using secretbox for two-way encryption; happy to say I’ve merged documentation examples for both a Node library and the Go standard library

            • Using hub fork / hub pull-request to open pull requests (instead of using the browser or my gitopen tool)

            • Preparing/running database statements when you start your app instead of parsing them every time a query runs

            • Spending more time with programming languages that have good standard libraries and make it easy to benchmark (not Node)

            • Merging PRs as a single commit; I’ve probably merged ~300-400 pull requests this year and maybe 3 had more than one commit in them

            • Always merging branches on HEAD, so commits are linear.

            • Purposely duplicating blocks of code until it gets painful to do so / the abstraction is more obvious

            • I’d probably use gRPC for a new project, or at least protobufs for sending messages between servers.
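
            On the first point, the idea is to let the database itself reject bad rows (CHECK / UNIQUE / NOT NULL constraints) so that every writer is covered, not just one application’s validation code. A minimal sketch with postgresql-simple, against an invented accounts table:

                {-# LANGUAGE OverloadedStrings #-}
                import Database.PostgreSQL.Simple (connectPostgreSQL, execute_)

                main :: IO ()
                main = do
                  conn <- connectPostgreSQL "dbname=app"
                  -- Table and constraint names are invented; once these exist,
                  -- the database enforces the rules for every client.
                  _ <- execute_ conn
                    "ALTER TABLE accounts \
                    \ ADD CONSTRAINT balance_nonnegative CHECK (balance >= 0), \
                    \ ADD CONSTRAINT email_present CHECK (email <> '')"
                  pure ()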

            1. 6

              Duplicating code until I’m more confident of the appropriate abstraction is something I’ve been doing more lately, and I’ve found it really satisfying and interesting every time the eventual refactoring takes a different direction from my initial expectations.

              I’ve been tempted to try single-commit PRs for a while but have always refrained for fear of losing useful information. On reflection, I can’t think of a single time this year when the individual commits have been useful, so I could save a small amount of time and effort by squashing rather than organising commits before submitting a PR.

              1. 2

                Speaking of single commits (kinda related): my former VP of engineering used to commit at the end of the day… and if the work wasn’t done or ready to be committed, he would just git reset --hard. He would redo all the work the next day, do it way faster, and often find better ways of doing it.