1. 10

    This is an excellent, empathic piece of writing. I cut my professional teeth in a strictly-XP pairing environment, but always working remote (shared tmux workspace). I even wrote a little thing on the topic of remote pairing, but this post really shines a light on the totalitarian, cultish aspect of the practice when done in person.

    I think what made my experience pleasurable was that I was pairing with people whom I considered (and still consider) close friends and mentors, making my workdays very enjoyable. I can imagine that having to pair program day-in, day-out with people you dislike really burns you out. Much like creating music: if you’re doing it with someone with whom you have some kind of emotional affinity, the process can continue fruitfully for a very long time. But you just can’t play/create well with people you feel antipathy for; it’s absolutely draining. At least in my experience.

    1. 4

      Sometimes I feel like I write just as much shell and Make as I do other languages. I’ve really dug into Make in the last ~3 years and just recently had the need to enmakify some Python projects.

      I belong to a school of thought that Make is the base build tool, the lowest common denominator. It’s already available on macOS, Linux, etc. out of the box and trivial to install on Windows. I’ve worked at companies with thousands of developers or with just dozens, with varying skill levels, familiarity with a particular ecosystem’s tooling, patience for poor or missing onboarding documentation, and general tolerance for other teams’ preferences ranging from scared to flippant.

      Regardless, nearly all of them can figure out what to do if they can run make help and are presented with a menu of tasks that are ideally entirely self-configuring. As in, I clone a repo, run make help and see that there’s a test task that runs tests. I should then be able to run make test and… tests run. It may take some time to set up the environment — install pyenv, install the right Python version, install dependencies, etc. — but it will inevitably run tests with no other action required. This is an incredibly straightforward onboarding process! A brief README plus a well-written Makefile that abstracts away the idiosyncrasies of the repo’s main language’s package manager, build system, or both can accelerate contributors, even if your Makefile is as simple as:

      help:
      	@echo Run build or test
      build:
      	npm run build
      	sbt compile
      test:
      	npm test
      	sbt test
      

      My base Pythonic Makefile looks like this now (not guaranteed to work verbatim, because I’m plucking pieces out of larger projects). I’m stuck on 3.6 and 3.7 for now but hope to get these projects up to 3.9 or 3.10 by the end of the year. I’m using Poetry along with pytest, mypy, Flake8, and Black.

      # Set this to use it ~everywhere in the project setup
      PYTHON_VERSION ?= 3.7.1
      # the directories containing the library modules this repo builds
      LIBRARY_DIRS = mylibrary
      # build artifacts organized in this Makefile
      BUILD_DIR ?= build
      
      # PyTest options
      PYTEST_HTML_OPTIONS = --html=$(BUILD_DIR)/report.html --self-contained-html
      PYTEST_TAP_OPTIONS = --tap-combined --tap-outdir $(BUILD_DIR)
      PYTEST_COVERAGE_OPTIONS = --cov=$(LIBRARY_DIRS)
      PYTEST_OPTIONS ?= $(PYTEST_HTML_OPTIONS) $(PYTEST_TAP_OPTIONS) $(PYTEST_COVERAGE_OPTIONS)
      
      # MyPy typechecking options
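      # note: Make's $(basename) strips the final .suffix, so PYTHON_VERSION=3.7.1
      # becomes the "3.7" that mypy's --python-version flag expects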
      MYPY_OPTS ?= --python-version $(basename $(PYTHON_VERSION)) --show-column-numbers --pretty --html-report $(BUILD_DIR)/mypy
      # Python installation artifacts
      PYTHON_VERSION_FILE=.python-version
      ifeq ($(shell which pyenv),)
      # pyenv isn't installed, guess the eventual path FWIW
      PYENV_VERSION_DIR ?= $(HOME)/.pyenv/versions/$(PYTHON_VERSION)
      else
      # pyenv is installed
      PYENV_VERSION_DIR ?= $(shell pyenv root)/versions/$(PYTHON_VERSION)
      endif
      PIP ?= pip3
      
      POETRY_OPTS ?=
      POETRY ?= poetry $(POETRY_OPTS)
      RUN_PYPKG_BIN = $(POETRY) run
      
      COLOR_ORANGE = \033[33m
      COLOR_RESET = \033[0m
      
      ##@ Utility
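      # `##@ Foo` comment lines become section headings in the help output, and the
      # text after `##` on a target line becomes that target's one-line description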
      
      .PHONY: help
      help:  ## Display this help
      	@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n  make \033[36m<target>\033[0m\n"} /^[a-zA-Z0-9_-]+:.*?##/ { printf "  \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
      
      .PHONY: version-python
      version-python: ## Echos the version of Python in use
      	@echo $(PYTHON_VERSION)
      
      ##@ Testing
      
      .PHONY: test
      test: ## Runs tests
      	$(RUN_PYPKG_BIN) pytest \
      		$(PYTEST_OPTIONS) \
      		tests/*.py
      
      ##@ Building and Publishing
      
      .PHONY: build
      build: ## Runs a build
      	$(POETRY) build
      
      .PHONY: publish
      publish: ## Publish a build to the configured repo
      	$(POETRY) publish $(POETRY_PUBLISH_OPTIONS_SET_BY_CI_ENV)
      
      .PHONY: deps-py-update
      deps-py-update: pyproject.toml ## Update Poetry deps, e.g. after adding a new one manually
      	$(POETRY) update
      
      ##@ Setup
      # dynamic-ish detection of Python installation directory with pyenv
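      # the two rules below chain so that `make deps-py` installs the interpreter
      # on demand: .python-version depends on the pyenv version directory, which
      # is created by `pyenv install` only when it's missing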
      $(PYENV_VERSION_DIR):
      	pyenv install --skip-existing $(PYTHON_VERSION)
      $(PYTHON_VERSION_FILE): $(PYENV_VERSION_DIR)
      	pyenv local $(PYTHON_VERSION)
      
      .PHONY: deps
      deps: deps-brew deps-py  ## Installs all dependencies
      
      .PHONY: deps-brew
      deps-brew: Brewfile ## Installs development dependencies from Homebrew
      	brew bundle --file=Brewfile
      	@echo "$(COLOR_ORANGE)Ensure that pyenv is setup in your shell.$(COLOR_RESET)"
      	@echo "$(COLOR_ORANGE)It should have something like 'eval \$$(pyenv init -)'$(COLOR_RESET)"
      
      .PHONY: deps-py
      deps-py: $(PYTHON_VERSION_FILE) ## Installs Python development and runtime dependencies
      	$(PIP) install --upgrade \
      		--index-url $(PYPI_PROXY) \
      		pip
      	$(PIP) install --upgrade \
      		--index-url $(PYPI_PROXY) \
      		poetry
      	$(POETRY) install
      
      ##@ Code Quality
      
      .PHONY: check
      check: check-py check-sh ## Runs linters and other important tools
      
      .PHONY: check-py
      check-py: check-py-flake8 check-py-black check-py-mypy ## Checks only Python files
      
      .PHONY: check-py-flake8
      check-py-flake8: ## Runs flake8 linter
      	$(RUN_PYPKG_BIN) flake8 .
      
      .PHONY: check-py-black
      check-py-black: ## Runs black in check mode (no changes)
      	$(RUN_PYPKG_BIN) black --check --line-length 118 --fast .
      
      .PHONY: check-py-mypy
      check-py-mypy: ## Runs mypy
      	$(RUN_PYPKG_BIN) mypy $(MYPY_OPTS) $(LIBRARY_DIRS)
      
      .PHONY: format-py
      format-py: ## Runs black, makes changes where necessary
      	$(RUN_PYPKG_BIN) black --line-length 118 .
      

      Is this overkill? Maybe, but I can clone this repo and be running tests quickly. I still have some work to do to actually achieve my goal of clone-to-working-env in two commands — it’s three right now: git clone org/repo.git && make deps && make test — but I’ll probably get there in the next few days or weeks. Moreover, this keeps my CI steps as close as possible to what developers run. The only real things that have to be set in CI are some environment variables that Poetry uses for the make publish step, plus the version, set with poetry version $(git describe --tags); raw git describe output is not PEP-440 compliant without some massaging, and I’ve been lazy about that since our published tags will always be PEP-440 compliant.
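      For illustration, here’s a minimal sketch of that CI versioning step as a target of its own. The version-from-git name and recipe are hypothetical (not part of the Makefile above), and they assume CI runs on a tagged commit, so git describe prints a PEP-440-compliant tag:

      .PHONY: version-from-git
      version-from-git: ## Set the package version from git metadata (CI only)
      	# on a tagged commit, `git describe --tags` prints the tag itself;
      	# our published tags are PEP-440 compliant, so no massaging is needed
      	$(POETRY) version $$(git describe --tags)

      CI can then run make version-from-git && make build && make publish, staying on the same targets developers use locally.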

      The Brewfile:

      # basic build tool, get the latest version
      # if you want to ensure its use, use 'gmake' instead on macOS
      # or follow caveats in `brew info make` to make make brew's make
      brew 'make'
      
      # python version and environment management
      brew 'pyenv'
      # python dependency manager
      # a version from pypi instead of homebrew may be installed when running make deps
      brew 'poetry'
      

      The full pyproject.toml is an exercise left to the reader, but here’s the dev-dependencies selection from one of them:

      [tool.poetry.dev-dependencies]
      flake8 = "3.7.9"
      black = "19.10b0"
      mypy = "^0.812"
      pytest = "^6.2.2"
      pytest-html = "^3.1.1"
      ansi2html = "*"
      pytest-tap = "^3.2"
      pytest-cov = "^2.11.1"
      pytest-mypy = "^0.8.0"
      lxml = "^4.6.2"
      

      Suggested improvements welcome. I’ve built Makefiles like this for Scala, Ruby, Rust, Java, C, Scheme, and Pandoc projects for a long time but feel like Make really is like Othello: a minute to learn, a lifetime to master.

      1. 3

        Regardless, nearly all of them can figure out what to do if they can run make help and are presented with a menu of tasks that are ideally entirely self-configuring. As in, I clone a repo, run make help and see that there’s a test task that runs tests. I should then be able to run make test and… tests run. It may take some time to set up the environment — install pyenv, install the right Python version, install dependencies, etc. — but it will inevitably run tests with no other action required.

        This is the reason we use Makefiles on my teams. Like you say, it’s the lowest common denominator, the glue layer, which lets things like multi-language projects be managed in a coherent manner. I’d much rather call out to both npm and gradle from the Makefile than use weird plugins to shovel npm into gradle or vice versa. Makefiles are scripts plus a dependency graph, so you can ensure particular files are in place before running commands, and this is not just about build artifacts but also about files downloaded externally (hello chromedriver; see the sketch below).
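        A minimal sketch of that file-dependency pattern (the chromedriver version, URL, and target names are placeholders of mine, not from any particular project):

        # chromedriver is an ordinary file target: downloaded once, then cached
        CHROMEDRIVER := bin/chromedriver
        CHROMEDRIVER_URL := https://chromedriver.storage.googleapis.com/90.0.4430.24/chromedriver_linux64.zip

        $(CHROMEDRIVER):
        	mkdir -p bin
        	curl -fsSL -o bin/chromedriver.zip $(CHROMEDRIVER_URL)
        	unzip -o bin/chromedriver.zip -d bin
        	chmod +x $(CHROMEDRIVER)

        .PHONY: test-browser
        test-browser: $(CHROMEDRIVER) ## Browser tests run only once chromedriver is in place
        	npm test

        Because the target is a real file, make skips the download on every run after the first.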

        I have Makefiles in all my old projects, and they are a real boon when I need to make changes after a long time (like years) has passed. I also make it a habit to always plug this make tutorial whenever the topic of Make comes up. It’s a stellar tutorial.

        To quote Joe Armstrong: “Four good tools to learn: Emacs, Bash, Make and Shell. You could use Vi, I am not religious here. Make is pretty damn good! I use Make for everything, that good!”

        1. 2

          Could this 114-line Makefile be a gist or something?

        1. 1
          • x if I’m going to be deleting it shortly;
          • simon-deleteme if it might stick around for a while.
          1. 2

            I just stick it in /tmp, and let the system delete it on restart.

          1. 6

            A good high-level intro, but for folks truly new to make, I always recommend this tutorial. It starts off easy but also goes in depth.

            1. 17

              Paul Graham knows he’s smarter than most people (that’s not hard, most of us in here are, statistically), and he still thinks this fact makes him right all the time. He’s rich and has a big audience, so this is unlikely to change.

              Not sure why this post needs to exist, but definitely can’t see why it needs to be here.

              1. 9

                PG reminds me of a pattern I’ve seen in other people. That is, a person experienced in one particular area develops the idea they are experienced in many other areas. They then proceed to share their wisdom of these areas, when in reality they know jack shit. The way these people write or speak can make it difficult to distil bullshit from facts.

                When encountering such people, especially when they develop a cult following like PG has, I think it’s entirely reasonable to call them out. The article posted may beat around the bush a bit too much, but it provides many good examples. As such, I think its existence is entirely valid.

                1. 5

                  Engineer’s disease? The amusing part to me is that it reminds me of that trope in movies “oh, you’re a scientist? clearly you’re a polymath” - except it’s real.

                  1. 1

                    Yeah, I agree. When I was younger and finding my footing in tech I was quite taken with PG (and I certainly do feel like spending some time learning Lisp made the functional programming paradigm more intuitive to me), but it has been valuable to see critiques of his work as well, especially of attempts to apply the things he actually was expert at to unrelated fields. He certainly was personally successful, more so than most people criticizing him, I’m sure (but things aren’t necessarily fair); still, it can be helpful to point out that at some point he just stopped being very relevant.

                    …but I personally still don’t like Java, and prefer lispy FP to heavy-handed OOP.

                  2. 4

                    Because, for sure, in this audience there are people who would take his expertise in programming as a source of authority on other topics (pretty much like most people do with celebrities advocating for a cause), and maybe it’s useful to remind them, in terms they can understand, that this is magical thinking a few rich people use to steer the whole sector.

                    1. 3

                      They will not rest until they cancel https://timecube.2enp.com/

                    1. 3

                      That’s interesting, though for me Ł and Ó are the easiest of all the special letters to type, and they require just one hand. When testing charset conversion I often use magic words like “łóżko” (“bed”) because “łó” is so easy to type ;), but of course not every hand is built the same way, so I understand that for some people it’s different.

                      One also can’t forget the layout used in PN-I-06000:1997 – the “typist’s” (214) layout – which likewise groups the special letters on one side of the keyboard, though on the right side. It’s also QWERTZ-based, so it’s a little bit different.

                      1. 1

                        This used to be the same for me, but recently it’s gotten very hard to find keyboards with a short spacebar, i.e. with the right alt in a place reachable by the thumb. When both my Noppoo Chocs broke I looked around for a keyboard with a similar layout and couldn’t find one. So I caved and got a Logi G keyboard, which is really great for typing, but the right alt key is unfortunately so far to the right that I can’t reach it with my thumb.

                        The typist’s layout is too alien for me, and not very convenient for programming. I don’t really like having to tweak defaults, so I tried to get used to the regular Polish keyboard, but I don’t want to get RSI (again).

                        I’m secretly hoping this layout variant catches on and perhaps in a couple of years might be included in Windows. One can dream.

                      1. 4

                        Hi Lobsters! I know there are quite a few Polish speakers and keyboard tweakers here. If you’re in the center of that Venn diagram, this keyboard layout might interest you. I was sick of contorting my right thumb all the time while typing in Polish, so I remapped the L,N,O keys on my machine. It’s been pretty sweet so far so I thought I’d share.

                        1. 2

                          I like the overall idea, but I’m unclear on something.

                          Part 1 says:

                          Even logging invalid data could potentially lead to a breach.

                          I can’t think how that would be the case.

                          Also, the example of that is:

                          log_error("invalid_address, #{inspect address}")

                          In the reworked example, you show

                           {:error, validation_error} ->
                              log_error(validation_error)
                              return_error_to_user(validation_error)
                          

                          But validation_error contains (a couple levels deep) an input: field with the original input. So wouldn’t it have the same problem?

                          1. 1

                            Yeah, I totally agree that we’re cheating here! This is a design tension that we’re not sure how to resolve: on one hand, we don’t want to expose unsanitized inputs to the caller, while on the other we’d love to log examples of payloads that cause the parser to fail, for auditability.

                            Do you have any pointers (or links to resources) on ways to resolve this tension? There’s always the option of “defanging” the original input by base64-encoding it, etc., but perhaps there’s a more elegant way out?

                          1. 1

                            A couple of years ago, I slapped together a couple of modules in Python for taking nested JSON documents and pulling out slices of data to load into a data warehouse. I’m happy with the general idea, but I’ve been wanting to refactor the implementation to separate out some of the concerns and improve flexibility. Everything is too bound up, too opinionated.

                            If you had a JSON document for a film like so:

                            {
                              "id": 1,
                              "title": "Titantic",
                              "cast": [
                                {"talent_id": 1, "name": "DeCaprio", "role": "Jack"},
                                {"talent_id": 2, "name": "Winslet", "role": "Rose"}
                              ],
                              "release_dates": [
                                {"location": "US", "date": "1997-12-19"},
                                {"location": "CA", "date": "1997-12-20"}
                              ]
                            }
                            

                            Then you could write schemas like so:

                            film = {
                              "id": Field("titleId", int),
                              "title": Field("title", String50),
                              "country_of_origin": Field("originalCountry", NullableString50),
                            }
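                            # Field maps a source key to an output column name and a type;
                            # a nested dict (like "cast" below) descends into a list of sub-objects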
                            
                            cast = {
                              "id": Field("titleId", int),
                              "cast": {
                                "talent_id": Field("talentId", int),
                                "name": Field("talentName", String100),
                                "role": Field("role", String100),
                              }
                            }
                            
                            release_dates = {...}  # you get the picture
                            

                            Which would result in dictionaries like:

                            films = [{"titleId": 1, "title": "Titanic", "originalCountry": null}]
                            
                            cast = [
                              {"titleId": 1, "talentId": 1, "talentName": "DeCaprio", "role": "Jack"},
                              {"titleId": 1, "talentId": 2, "talentName": "Winlet "role": "Rose"},
                            ]
                            

                            I built some plumbing around deserializing the document, passing it to a series of schemas, pulling out the record instances, and serializing each instance to its own location. If any subcomponent fails, I fail out the whole set of records to ensure the database has a logical view of the entity. Overall, it’s worked pretty well for well-organized, consistently typed JSON data. Unfortunately, there is a lot of nasty JSON data out there and it can get pretty complex.

                            I suppose this is a long way of saying that this article gives me a couple of ideas for how I might decouple some of this logic. Are you going to discuss building entire structs next time? Or are you looking at it from a per-field perspective?

                            Looking forward to the next article!

                            1. 1

                              Hey, thanks for the great feedback! Yeah, we’re going to be building entire structs – if you take a look at the previous post, at the end (“Under the Hood”) there’s a snippet that uses a Data.Constructor.struct/3 to specify parsers for the particular fields. The next installment is going to be about how to make your struct parsing more flexible: for example, if you have a big flat JSON object coming in, but want to use it to create a nested hierarchy.

                              In general, we’re taking a fractal approach of composing smaller parsers to create larger ones. The ‘struct’ parser constructor is a complex combinator with some specific semantics, but it’s fundamentally similar to the list/1 combinator. So yeah, to answer your question, we will be BOTH constructing entire structs and looking at it from a per-field perspective. It all comes together in the end.

                              1. 1

                                Awesome! Look forward to reading about it!

                            1. 2

                              For stateless validations (must be a number between 0 and 100), this is a nice approach. For stateful validations (this e-mail address has already been taken), it should probably be a two-stage process–unless we want to put filesystem/database/etc calls inside our parsers, which seems like a terrible idea.

                              1. 3

                                Yes, putting some kind of IO (service-call/db/etc) inside a parser would be terrible. I try to tackle stateful validation problems like this:

                                1. Model the syntactically-valid data type, and use a parser to “smart-construct” it. So in this case we’d have an %EmailAddress{}. This data type doesn’t tell us anything about whether the email has been claimed or not.

                                2. Down the line, when (if) we actually need to work with email addresses that are unclaimed, we have the service responsible for instantiating them expose a function typed:

                                @spec to_unclaimed_email_address(%EmailAddress{}) ::
                                        Result.t(%UnclaimedEmailAddress{}, some_error())
                                

                                This function does the necessary legwork to either create a truly unclaimed email address, or tell you that it’s not possible with the data you brought it. It still conforms to the ‘railway-oriented style’, but at another level of the architecture.

                                Of course this opens up another can of worms in terms of concurrency, but that’s state for you.

                              1. 11

                                My hard-earned response to this is: just don’t do it. There is a minefield of gotchas under Mnesia, and they will maim you and your production system. Mnesia was built for configuration management, not for OLTP. You wouldn’t suggest using Apache ZooKeeper as a production database, so why suggest Mnesia?

                                1. 4

                                  I’d be interested in some examples of gotchas, documentation references, and the like, if you have specific references handy. I know a few basic ones, like disc copies vs ram-only vs disc-only, but I haven’t used it enough to encounter the others. Most of that knowledge I have from reading books or documentation.

                                  1. 4
                                    1. Two-phase commit.

                                    2. “Since Mnesia detects deadlocks, a transaction can be restarted any number of times. This function will attempt a restart as specified in Retries. Retries must be an integer greater than 0 or the atom infinity. Default is infinity.”

                                    This is from: http://www1.erlang.org/documentation/doc-5.1/lib/mnesia-4.0/doc/html/mnesia.html
                                    In practice this means that a big transaction can be preempted in perpetuity by an onslaught of smaller transactions on (a subset of) the same data.

                                    3. When you get “Mnesia is overloaded” warnings in production. At 4 am.

                                    4. Bad performance on sync transactions -> you move to async -> then to async_dirty. At that point you could simply have been optimistically !-ing to other nodes’ ets-owning processes, without the headaches of Mnesia cluster setup.

                                    Most oldtime Erlangers have good Mnesia stories; talk to them and be amazed :)

                                  2. 2

                                    This discussion is timely for the thing I’m currently building. I had already come to the conclusion that Mnesia has too many gotchas for me to handle, but I’m still hesitating between using lbm_kv and going the riak_core+partisan route. Both options seem built on top of Mnesia; iirc riak uses a patched version of Mnesia.

                                    I’ve got thousands of long-running processes updating their state every few seconds. This will soon take too much memory (because of process heap size), and later on it will have to be distributed anyway. My idea was to store state as native Erlang terms (to preserve read/write performance and stay responsive enough), and to have drastically fewer processes than one per “state”: a pool of workers would update the store instead, each released back to the pool to move on to the next thing to do. I also think that will make it easier to go distributed later on.

                                    Do you have thoughts on this?

                                    1. 2

                                      Riak did not use Mnesia. Neither does partisan’s version of riak_core, iirc.

                                      1. 1

                                        I think the WhatsApp scaling video has some details on this. Not sure if it’s 100% relevant to you, but it’s worth a try.

                                        https://www.youtube.com/watch?v=FJQyv26tFZ8

                                        1. 2

                                          I rewatched last week and it’s not that relevant IMO but thanks anyway :)

                                          The thing is, my current challenge is going from a single node to distributed; their challenge was to overcome the practical limit on the number of nodes in a cluster (a fully connected mesh), so basically going from a ~1000-node cluster to a >10k-node cluster. What I’m working on is too specific to ever get close to 1000 nodes, but it’s unfortunately still a bit too big to stay on a single node (or, to be more precise: we could probably scale up, but one machine with loads of RAM is more expensive than a few smaller machines).

                                          Still a great talk for people interested in pushing things to the extreme!

                                          1. 2

                                            Yes, sorry I couldn’t be more useful. I think riak_pg might still be useful to you when going from 1 node to N. Or maybe you’re not trying to go that route either. Anyway, if you write a blog post about your experience solving this problem, I’d be happy to read it. I need to jump through this hoop soon with my pet project, so it’s relevant to me.

                                            https://github.com/cmeiklejohn/riak_pg

                                      2. 2

                                        This is a fascinating assertion. Thanks for chiming in, I’ll read up a bit more.

                                      1. 13

                                        I couldn’t upvote this fast enough. The now-dominant consensus and ideology of UX, whatever we’re calling it, is making astonishing progress at destroying every shred of reasonable consistency and well-tested convention across every interface surface I can think of aside from, maybe, the terminal and CLI tooling.

                                        1. 5

                                          The now-dominant consensus and ideology of UX, whatever we’re calling it,

                                          That one is easy: if people know what they’re talking about, they call it HCI. If they don’t know what they’re talking about, they call it UX. It’s a great filter word. As soon as someone starts talking about UX, you immediately know that they have no understanding of cognitive psychology, won’t be able to cite any research to back up their assertions, and are highly unlikely to have any opinions worth listening to.

                                          Good usability is hard. It’s a global optimisation problem. The worst thing that’s happened to software in the last two decades is the rise in people who think usability is an art and not a science.

                                          Anyone thinking of designing an interface (including an API, so pretty much anyone writing any software) should read The Humane Interface. Some of the things that Raskin says are a bit dated (for example, his discussion of Fitts’ Law doesn’t cover how the concepts apply to touchscreens) but most of them are still good guiding principles (especially Raskin’s First Law: a program may not harm a user’s data or, through inaction, allow a user’s data to come to harm).

                                          1. 5

                                            And any UI designer reading this will be so triggered they will raise their shields and never consider returning to the past.

                                            I doubt there’s an effective way of getting the point through that these modern UIs are garbage.

                                            1. 3

                                              CLI tooling is not exempt, either. See: a million Node.js tools that joyfully barf ANSI escape sequences or ‘reactive’ progress bars into their output, even when it’s not a TTY.

                                            1. 1

                                              I’ve had 3 Noppoo Chocs, 2 Mids and 1 Mini, and a Mid has been my daily driver at work for the last 7 years; it’s the one I’m currently using. It’s the best mechanical keyboard I have ever owned, though my experience is not that vast. I might buy my coworker’s Ducky One, because it has a numpad and he wants to get something else anyway, but maybe I’m not enough of a keyboard aficionado to care that much. I prefer MX Blacks but would like to try out Reds for a prolonged time; that’s the important thing for me. Blues are completely out and Browns are OK for gaming.

                                              This list is still up to date: https://f5n.org/blog/2018/mechanical-keyboards/, but I backed https://www.kickstarter.com/projects/keyboardio/atreus and am waiting for the summer now.

                                              1. 1

                                                I have 2 Noppoo Chocs (one brown and one blue), and while I absolutely love the layout, both have been acting up lately with double (or triple) key activations. Blowing into the depressed switches with compressed air helps for a while, but then the glitching returns. ‘E’, ‘R’, ‘T’ and the space bar are the worst offenders.

                                                There seems to be an absolute dearth of compact mechanical keyboards with a short space bar (the space bar should extend from beneath the ‘C’ key and end flush with the ‘M’ key). I would love a mechanical keyboard with the same layout as a ThinkPad or Dell XPS: a short spacebar and all F keys easily accessible without mode switches.

                                                1. 1

                                                  Haha nice, if I had a new backup keyboard I’d send you my Choc Pro for you to add to your collection.

                                                1. 4

                                                  What does the expression at the end mean?

                                                  Looking at the manual it looks like | is Max/Or. I also saw that |/ is defined as “Max-Over”?

                                                  1. 7

                                                    It’s the K implementation of the algorithm above. Find the max of a list of numbers.

                                                    1. 1

                                                      Ah, for some reason I thought it was a pun to the effect of “hah, you noobs”.

                                                    2. 6

                                                      | is max. It’s also boolean or. If you wanted the minimum, it’d be &/, because & is min/boolean and.

                                                      The APLs have teased apart lots of common operations into atomic parts that combine cleanly, sometimes unpacking them further than other languages go. The single-argument form of & (“where”) is a good example:

                                                        &1 2 3
                                                      0 1 1 2 2 2
                                                      

                                                      It counts up, repeating each successive number based on the next number in the argument.

                                                        &5 5 5
                                                      0 0 0 0 0 1 1 1 1 1 2 2 2 2 2
                                                      

                                                      Okay, so that makes the pattern clearer. But why is that useful?

                                                        & 0 0 0 1 0 1 1 0 1 0
                                                      3 5 6 8
                                                      

                                                      Ah ha – “what are the offsets of the 1s?”

                                                        x:10?!1000    / draw 10 random numbers 0 to 999
                                                        x             / print them
                                                      379 998 594 106 191 686 123 845 495 700
                                                        x < 500       / what values are less than 500
                                                      1 0 0 1 1 0 1 0 1 0
                                                        x[&x<500]     / slice x by indices where x is less than 500
                                                      379 106 191 123 495
                                                      

                                                      So it combines with a conditional to become a sort of SELECT, but it also combines with other operators in a predictable way, and the implementation is straightforward.

                                                      1. 1

                                                        Thank you! I was stumped as to what the where usage of & is for. This is a great explanation.

                                                    1. 5

                                                      I agree 100% with the author, and day-to-day I use my emacs color scheme that’s designed to emphasize the “information-dense” parts of code: function declarations (not callsites!) and comments.

                                                      One problem I see is that there are languages/methodologies out there that assume your editor will de-emphasize comments, for example. I think the case for minimalist syntax highlighting (or none whatsoever) can only be made in good faith for information-dense languages/coding styles.

                                                      1. 7
                                                        1. 4

                                                          There is something to this that the author did not point out. Typing faster is less frustrating than typing slowly when you feel the flow. Faster typing enables faster use of tools, even if the mouse is occasionally involved.

                                                          Depending on your programming language and tools, faster typing allows you to “think out loud” in code and tests. It may even drive you to make the dev cycle faster.

                                                          Pair programming is more fun and efficient when you don’t spend time on slow typing, and eventually you may even talk while writing out the code, like a singing instrumentalist.

                                                          And in a lot of work, any task really isn’t that unique. Doodle some diagrams and start prototyping by typing fast.

                                                          Contemplate old code and fix it by typing fast.

                                                          Even if the seconds saved never amount to much real time, there’s an energy to it that should not be discounted.

                                                          1. 3

                                                            I agree with all your points – there is a certain energy that abounds when someone masters their tools and can operate with fluency in the physical world. I personally don’t think it should be a subject of controversy that proficient use of the primary computer input device makes one a more proficient user of the computer – it’s easier to get into a flow state, it’s easier to use tooling, and it’s easier to communicate with others.

                                                            If you’re a faster typist, you might find it easier to spare a couple seconds writing that email to a remote colleague that might just flip their day from bad to good. You might spare a couple seconds to comment that tricky bit of code before moving on. You might spare a couple seconds typing out repetitive test cases instead of DRYing them prematurely and making others cry when they have to add functionality. If typing feels like a slog, you’re going to be more stingy with written communication in general.

                                                            But, I also have a meta-comment about topics like this. People who aren’t fast typists will be the ones piping up in the comments about how typing speed doesn’t matter, and those who type well will say that it does matter. Confirmation bias is a thing, and ultimately I think it comes down to aesthetics. I know a couple of programmers who are atrocious typists—to the point that watching them type is painful—but are excellent at programming and ultimately very productive. Maybe they just don’t feel they need the ‘energy’ that comes with mastery of physical movement?

                                                            1. 3

                                                              These are some great additions to the discussion. Having less of that barrier, as you said, may also make you more likely to communicate when you otherwise wouldn’t have, because as a slower typist the response would have felt too slow and tedious to bother with. Stuff like this is very difficult to measure because essentially it all happens in the mind, and perhaps unconsciously for many.

                                                              I also agree with the point about confirmation bias. Of course I say you should type faster because I type relatively quickly. My colleagues who type quickly also say this. My colleagues who don’t type quite so fast tend to find it less important. Either way it could be post hoc rationalization.

                                                            2. 3

                                                              This is a great point and highlights how I feel, and I didn’t include it in the article!

                                                              For example, I find it very difficult to even get into a flow state when typing on my phone, where I probably get something like 50 WPM, simply because it feels so frustrating. Even talking to people on my phone via typing feels so frustrating I often can’t be bothered.

                                                              Whereas on a real keyboard, I’m far more likely to take the time to reply to people or write more complicated responses because that barrier is so much less existent.

                                                            1. 5

                                                              I knew bookmarklets were a thing, but never really considered them.

                                                              I just replaced a browser extension with one.

                                                              1. 3

                                                                Dotepub is a bookmarklet that generates epubs from webpages, which I use often to get away from the shiny computer screen and read blogs on my kindle.

                                                                It’s one of my favorite pieces of software.

                                                                I think bookmarklets are very underappreciated.

                                                                1. 2

                                                                  One of my favorite bookmarklets is one that scans the page for text that looks like a base64 encoded image and replaces it with an img tag.

                                                                  I use this to view screenshots from build failures on CI platforms like travis which don’t support artifacts (only text logs).

                                                                1. 5

                                                                  Why advocate using rufo over RuboCop? I’ve never heard of rufo before this blog post, but RuboCop is almost a standard in Ruby lint tools at this point. In the same vein, Growl has long since been abandoned in favor of terminal-notifier, which uses macOS Notifications and doesn’t require a huge confusing paragraph telling you “NOT TO BUY GROWL (but here is the app store link anyway)”.

                                                                  Additionally, I’m a huge advocate for using your package manager the way it was supposed to be used, and to that point, disagree with pinning dependencies unless it’s a fix for a package author not knowing how to distribute with SemVer. If you’re going to pin dependencies, why not just make a script instead of using those “big and bulky” tools like Yarn and Bundler? Loop over every gem and run gem install, loop over every NPM package and run npm add. There’s literally no point to using Yarn or Bundler if you’re just going to pin every dependency anyway. That’s the whole point of package managers, taking some of that work away from you so you don’t have to constantly think about patch version upgrades during your development process.

                                                                  1. 2

                                                                    I just switched to terminal-notifier and updated the article with it, removing the Growl part. I added a note about RuboCop and plan to switch to it at some point when I feel the need. As for pinning dependencies, I added a note that it can be a touchy subject, and found a good article summarizing the pros and cons of different strategies for specifying versions: https://thoughtbot.com/blog/a-healthy-bundle.

                                                                    Your comment is linked at some point in the article, thanks again.

                                                                    1. 1

                                                                      Hey, thanks for reading and for your comments here. While I appreciate you taking the time to do so, the way you wrote the comment (tone, content) is not something I am used to. It’s not welcoming enough for me to be willing to engage in a conversation on the various items and feedback you gave.

                                                                      1. 2

                                                                        Hey, welcome to the site!

                                                                        While it’s often hard (given the low-bandwidth nature of textboxes-on-the-screen vs. in-person communication), please try to engage in conversation with an assumption of good faith. I’ve seen more than once that when both participants make the effort, the tone of the discussion turns pleasant despite an initial roughness and disagreement.

                                                                        Congrats on your first post! Keep them coming!


                                                                    1. 4

                                                                      I really admire your work, @akkartik! I feel there is some conceptual overlap between mu and urbit (a reboot of computing from the ground up, on top of purposely constrained primitives), but mu seems to be positioned firmly opposite urbit on the legibility & accessibility axis. I’m keeping my fingers crossed for its success.