1. 9
    Summary

    This video uses as a motivating example a bug this person (fasterthanlime) once ran into when the feature flag management software they were using (LaunchDarkly) read the staging-environment feature flag values in production. That caused all experimental code to be turned on in production, which led to many customers being affected by bugs in the experimental code.

    After talking about the difficulty of debugging this, fasterthanlime talks about how it’s important to prevent errors automatically and not rely on humans to configure things correctly. They say this particular issue is unusually hard to prevent automatically. They note that the error could have been avoided if the LaunchDarkly API didn’t conflate the API key and the environment, but that LaunchDarkly would probably not be willing to change the API now. They eventually conclude that the easiest way to prevent that bug is to create a new feature flag called “environment” that just duplicates the name of the environment these feature flag values are for, then assert on application startup that the application environment matches the feature flag environment.
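
    A minimal sketch of that startup assertion, in Go. The flag client interface and the “environment” flag key are hypothetical stand-ins rather than LaunchDarkly’s actual SDK API; the point is only that the application refuses to start if the flag data it loaded belongs to a different environment:

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    // flagClient is a hypothetical stand-in for a feature flag SDK client;
    // StringVariation returns a flag's string value, or a default.
    type flagClient interface {
        StringVariation(key, defaultValue string) string
    }

    // assertFlagEnvironment fails startup if the loaded flag data belongs to a
    // different environment than the one the application is running in.
    func assertFlagEnvironment(flags flagClient, appEnv string) error {
        flagEnv := flags.StringVariation("environment", "unknown")
        if flagEnv != appEnv {
            return fmt.Errorf("feature flags are for %q but the app is running in %q", flagEnv, appEnv)
        }
        return nil
    }

    // staticFlags is a toy in-memory client so the example runs on its own.
    type staticFlags map[string]string

    func (s staticFlags) StringVariation(key, def string) string {
        if v, ok := s[key]; ok {
            return v
        }
        return def
    }

    func main() {
        flags := staticFlags{"environment": "staging"}
        if err := assertFlagEnvironment(flags, os.Getenv("APP_ENV")); err != nil {
            log.Fatal(err) // refuse to come up with the wrong environment's flags
        }
    }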

    1. 24

      I’m sympathetic to the goal of making reasoning about software defects more insightful to management, but I feel that ‘technical debt’ as a concept is very problematic. Software defects don’t behave in any way like debt.

      Debt has a predictable cost. Software defects can have zero costs for decades, until a single small error or design oversight creates millions in liabilities.

      Debt can be balanced against assets. ‘Good’ software (if it exists!) doesn’t cancel out ‘Bad’ software; in fact, it often amplifies the effects of bad software. Faulty retry logic on top of a great TCP/IP stack can turn into a very damaging DoS attack.

      Additive metrics like microdefects or bugs per line of code might be useful for internal QA processes, but especially when talking to people with a financial background, I’d avoid them, and words like ‘debt’, like the plague. They need to understand software used by their organization as a collection of potential liabilities.

      1. 11

        Debt has a predictable cost. Software defects can have zero costs for decades, until a single small error or design oversight creates millions in liabilities.

        I think you’ve nailed the key flaw with the “technical debt” metaphor here. It strongly supports the “microdefect” concept, which the piece introduces explicitly by analogy to microCOVID (without mentioning that microCOVID is itself named for the micromort). The analogy works really well with your point: these issues cost very little until a sudden, potentially catastrophic failure. Maybe “microcrash” or “microoutage” would be a clearer term; I’ve seen “defect” used for pretty harmless issues like UI typos.

        The piece is a bit confusing in that it relies on the phrase ‘technical debt’ while trying to supplant it; it’d be stronger if it used the term only once or twice, to argue its limitations.

        We’ve seen papers on large-scale analyses of bugfixes on GitHub. Feels like that route of large-scale analysis could provide some empirical justification for assessing values of different microdefects.

        1. 1

          I’m very surprised by the microcovid.org website not mentioning their inspiration from the micromort.

          1. 1

            It’s quite possible they invented the term “microCOVID” independently. “micro-” is a well-known prefix in science.

          2. 1

            One thing I think focusing on defects fails to capture is the way “tech debt” can slow down development, even if it’s not actually resulting in more defects. If a developer wastes a few days flailing because they didn’t understand something crucial about a system, e.g. because it was undocumented, then that’s a cost even if it doesn’t result in them shipping bugs.

            Tangentially relatedly, the defect model also implicitly assumes a particular behavior of the system is either a bug or not a bug. Often things are either subjective or at least a question of degree; performance problems often fall into this category, as do UX issues. But I think things which cause maintenance problems (lack of docs, code that is structured in a way that is hard to reason about, etc) often work similarly, even if they don’t directly manifest in the runtime behavior of the system.

            1. 1

              Microcovids and micromorts at least work out in the aggregate; the catastrophic failure happens to the individual, i.e. there’s no joy in knowing the chance of death is one in a million if you happen to be that fatality.

              Knowing the number of code defects might give us a handle on the likelihood of one having an impact, but not on the size of its impact.

            2. 3

              Actually, upon re-reading, it seems the author defines technical debt purely in terms of code beautification. In that case the additive logic probably holds up well enough. But since beautiful code isn’t a customer-visible ‘defect’, I don’t understand how monetary value could be attached to it.

              1. 3

                I usually see “tech debt” used to describe following the “no design” line on https://www.sandimetz.com/s/012-designStaminaGraph.gif past the crossing point. The idea is that the longer you keep on this part of the curve, the harder it becomes to create or implement any design, and the ability to maintain the code slows.

                1. 1

                  I think this is the key:

                  For example, your code might violate naming conventions. This makes the code slightly harder to read and understand which increases the risk to introduce bugs or miss them during a code review.

                  Tech debt so often leads to defects that they become interchangeable.

                  1. 1

                    To me, this sounds like a case of the streetlight effect. Violated naming conventions are a lot easier to find than actual defects, so we pretend fixing one helps with the other.

                2. 3

                  I think it’s even simpler than that: All software is a liability. The more you have of it and the more critical it is to your business, the bigger the liability. As you say, it might be many years before a catastrophic error occurs that causes actual monetary damage, but a sensible management should have amortized that cost over all the preceding years.

                  1. 1

                    I think it was Dijkstra who said something like “If you want to count lines of code, at least put them on the right side of the balance sheet.”

                  2. 2

                    Debt has a predictable cost

                    Only within certain bounds. Interest rates fluctuate and the interest rate that you can actually get on any given loan depends on the amount of debt that you’re already carrying. That feels like quite a good analogy for technical debt:

                    • It has a certain cost now.
                    • That cost may unexpectedly jump to a significantly higher cost as a result of factors outside your control.
                    • The more of it you have, the more expensive the next bit is.
                    1. 1

                      especially when talking to people with a financial background, I’d avoid them, and words like ‘debt’, like the plague

                      Interesting because Ward Cunningham invented the term when he worked as a consultant for people with a financial background to explain why code needs to be cleaned up. He explicitly chose a term they knew.

                      1. 1

                        And he didn’t choose very wisely. Or maybe it worked at the time if it got people to listen to him.

                    1. 20

                      IMO the “clickbaity” original title frames the piece better than the toned down version.

                      1. 8

                        It is as though authors put thought into how they title their works…

                        1. 3

                          For reference, at the time of the above comment, the title had been changed from the original “A terrible schema from a clueless programmer” to “Normalizing a database schema” due to user suggestions.

                          I suggest adding the culture tag in addition to changing the title back, since this article is mostly about developer attitudes rather than database normalization.

                        1. 5

                          “Great research on colorschemes and contrast guidelines”, you say? #000 background and blue links?

                          1. 5

                            Above all, do no harm ;)

                            1. 6

                              The bg should be lighter, between #111 and #222 or something. And I would have chosen a warmer link color, orange or yellow - green even for that old terminal feeling. If they absolutely have to use blue, at least make it lighter.

                              1. 6

                                The research kevinc did included resources like the WCAG contrast guidelines. Do you have some you could point to on how these changes would improve readability?

                                1. 7

                                   The WCAG 2.1 contrast guidelines are about minimum contrast. A bg color of #000 and a fg color of #FFF would get a full score in that respect. My point is that the contrast is too high and that dark mode is about making it easy on the eyes.

                                   I understand that I’m being a bit picky here, and I do see that you have put a lot of work into this, but popular dark mode color themes and applications that default to dark mode usually have a contrast ratio of around 10-12, not 16.
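
                                   For concreteness, here is a rough Go sketch of the WCAG 2.x relative-luminance and contrast-ratio calculation; the hex values are only illustrative, but they show why #FFF on #000 lands near 21:1 while a softer pairing like #DDD on #222 sits in that 10-12 range:

                                   package main

                                   import (
                                       "fmt"
                                       "math"
                                   )

                                   // linearize converts an 8-bit sRGB channel to linear light, per WCAG 2.x.
                                   func linearize(c float64) float64 {
                                       c /= 255
                                       if c <= 0.03928 {
                                           return c / 12.92
                                       }
                                       return math.Pow((c+0.055)/1.055, 2.4)
                                   }

                                   // luminance is the WCAG relative luminance of an sRGB color.
                                   func luminance(r, g, b float64) float64 {
                                       return 0.2126*linearize(r) + 0.7152*linearize(g) + 0.0722*linearize(b)
                                   }

                                   // contrast is the WCAG contrast ratio between two relative luminances.
                                   func contrast(a, b float64) float64 {
                                       hi, lo := math.Max(a, b), math.Min(a, b)
                                       return (hi + 0.05) / (lo + 0.05)
                                   }

                                   func main() {
                                       white := luminance(0xFF, 0xFF, 0xFF)
                                       offWhite := luminance(0xDD, 0xDD, 0xDD)
                                       fmt.Printf("#FFF on #000: %.1f:1\n", contrast(white, luminance(0x00, 0x00, 0x00)))    // ~21:1
                                       fmt.Printf("#FFF on #222: %.1f:1\n", contrast(white, luminance(0x22, 0x22, 0x22)))    // ~16:1
                                       fmt.Printf("#DDD on #222: %.1f:1\n", contrast(offWhite, luminance(0x22, 0x22, 0x22))) // ~12:1
                                   }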

                                  And using very saturated colors for text against a dark background will make the colors vibrate and make the text hard to read. This is kind of dark mode 101 and the reason for my initial snarky comment - which I apologize for :) An easy fix is to desaturate the color and make it lighter.

                                  That being said, I do appreciate you taking the time to implement this!

                                  1. 5

                                     Thanks for that. I took the tactic of keeping the same contrast ratios for different cases of black and white text as in light mode. WCAG AA had to do with the least visible text; for instance, the footer links and upvote counts were darkened in light mode to hit 4.5:1. Colored text did become paler for dark mode, just not by a lot.

                                    By now many people have cited body text contrast being too high, and I’m inclined to agree that it doesn’t need to be as high in dark mode as in light mode. Whether to lighten the background has more to do with fitting into the OS environment alongside other dark mode apps and sites. Up to now that was a non-goal; I put the phone dark mode experience first. But I should not be surprised that so many Lobste.rs users prefer dark mode on their desktops in the daytime. :)

                                  2. 3

                                     Most people’s physiology perceives lower contrast between black and blue than between black and other primary colors. I do appreciate the effort you and kevinc made, and I get that design by committee is super frustrating and generally not a recipe for success. But it does take tweaking to get a color palette to fit a perceptual model better than RGB.

                                    Sources:

                                  3. 3

                                    My goal for this PR was not to invent a new color palette but to choose dark-background appropriate variants of the existing palette. Entirely new colors were just out of scope this time.

                                    Lightening the background is not an unexpected request, but we will want reasons. For examples of what we’ve thought through, check the pre-merge discussion in the PR.

                                    1. 8

                                      “A dark theme uses dark grey, rather than black, as the primary surface color for components. Dark grey surfaces can express a wider range of color, elevation, and depth, because it’s easier to see shadows on grey (instead of black).

                                      Dark grey surfaces also reduce eye strain, as light text on a dark grey surface has less contrast than light text on a black surface.”

                                      Material guidelines

                                      I find it hard to read white on black, as it looks like headlights on a pitch black night to me, and I can’t see the text clearly, but I know it’s also the case that others need more contrast. When I’m reading dark-on-light, I need more contrast.

                                      With a ‘softer’ look than black + white, the user should theoretically be able to set higher contrast as an option in their OS, but I have no idea how widely this is supported. I’ve just tried it with duckduckgo in Safari on MacOS and it did seem to work - though I’m not sure the page did anything itself.

                                      “Prefer the system background colors. Dark Mode is dynamic, which means that the background color automatically changes from base to elevated when an interface is in the foreground, such as a popover or modal sheet. The system also uses the elevated background color to provide visual separation between apps in a multitasking environment and between windows in a multiple-window context. Using a custom background color can make it harder for people to perceive these system-provided visual distinctions.”

                                      Apple guidelines

                                      I’m not sure you can find/use the system colours on the web.

                                      Here’s a desktop screenshot with lobste.rs visible - notice that it’s the only black background on the screen.

                                      1. 2

                                        Thanks for these details. In particular that screenshot is helpful.

                                        It’s true that on your phone at night, the site may be your only light source. A goal of mine was that if any site is suddenly too bright for you, it shouldn’t be this one. But on your desktop, the site shares a lit environment with your other windows. The most common background color is perhaps a bit driven by fashion, but that’s a fact of life, so let’s deal with it. It is probably worthwhile to get along with the other windows on our screens.

                                        Given that we already theme mobile CSS with a media query, what do you think about the phone use case?

                                        1. 5

                                          I don’t think you should rely on detecting mobile to target OLED screens. OLED screens are gradually becoming available on tablets and larger screens, and not all mobile screens are OLED. I’ve been trying to figure out a way to target OLED for web design and I don’t know of a good way to do it.

                                          It’s a shame that the prefers-color-scheme options are just light and dark, rather than e.g. light, dark, and black. It seems like some people want pure OLED black and some don’t, and I’m not sure you’ll ever get them to agree.

                                          Personally I’ve decided to err on the side of assuming everyone has OLED, because I’d like OLED screens to get the energy savings when possible, and I just personally like the aesthetic. If it’s good enough for https://thebestmotherfucking.website/ it’s good enough for me.

                                          1. 2

                                            Very dark gray is also efficient on OLED. It’s not a binary like #000 saves power and #222 is suddenly max power. I think power is just proportional to brightness?

                                            1. 2

                                              Interesting; I hadn’t known that about OLEDs. Your statement is correct according to this 2009 Ars Technica article citing a 2008 presentation about OLEDs:

                                              power draw varies pretty linearly with mean gray levels

                                              The article also has a table showing the power used by an OLED screen to display five example images of varying brightness. For mostly-white screens, OLEDs are actually less power-efficient than LEDs.

                                          2. 2

                                            I don’t usually have dark mode set on my phone but I’ve just tried it and… it’s surprising.

                                            The black background doesn’t have the same problem here. I can read text without it looking like headlights at night in the rain!

                                            Maybe it’s due to OLED, or maybe something to do with the phone seeming to dynamically adjust brightness? No idea but it’s fine.

                                        2. 2

                                          Thanks for the link. I’ve read through the discussion and I understand now that you have put some thoughts into this. That’s good enough for me :)

                                  1. 7

                                    I suggest replacing the programming tag with the compsci tag.

                                    This article doesn’t explain what it means by “Busy Beaver programs”. I had heard before of some busy beaver function that took a long but finite time to run, and I was confused why the article showed only control flow graphs and not code. It turns out that formally, in the busy beaver game, “a player should conceive a transition table aiming for the longest output of 1s on the tape while making sure the machine will halt eventually.” Busy beaver programs are a class of programs defined by their transitions; there is not just one, and they are not written in traditional programming languages.
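
                                     To make the “transition table” framing concrete, here is a small Go sketch (illustrative only) that simulates the standard 2-state, 2-symbol busy beaver; the table is the entire “program”, and this particular one halts after six steps with four 1s on the tape:

                                     package main

                                     import "fmt"

                                     // transition: what to write, which way to move the head, and the next state.
                                     type transition struct {
                                         write int    // symbol to write (0 or 1)
                                         move  int    // +1 = right, -1 = left
                                         next  string // next state; "H" means halt
                                     }

                                     func main() {
                                         // The standard 2-state, 2-symbol busy beaver, indexed by [state][symbol read].
                                         table := map[string][2]transition{
                                             "A": {{1, +1, "B"}, {1, -1, "B"}}, // on 0: write 1, right, to B; on 1: write 1, left, to B
                                             "B": {{1, -1, "A"}, {1, +1, "H"}}, // on 0: write 1, left, to A; on 1: write 1, right, halt
                                         }

                                         tape := map[int]int{} // unbounded tape of 0s and 1s; missing cells read as 0
                                         head, state, steps := 0, "A", 0

                                         for state != "H" {
                                             t := table[state][tape[head]]
                                             tape[head] = t.write
                                             head += t.move
                                             state = t.next
                                             steps++
                                         }

                                         ones := 0
                                         for _, symbol := range tape {
                                             ones += symbol
                                         }
                                         fmt.Printf("halted after %d steps with %d ones on the tape\n", steps, ones)
                                     }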

                                    1. 3

                                      I suggested the tag historical. Can someone second that?

                                      1. 2

                                        I’m glad the tag was added given it is a very old argument. Nonetheless, I read the article hoping that it would be something novel like repurposing the concept keyword.

                                      1. 2

                                        It seems like the idea of putting a version number in a header was already well-established when IP was first defined. I wonder who was the first to come up with the idea of versioning?

                                        1. 3

                                          I imagine versioning 0.0.1 probably revolved around file.txt, file_2.txt, file_FINISHED.txt, file_FINISHED(1).txt, file_FINAL_REAL.txt ad infinitum…

                                          1. 9

                                            It’s quite possible they wrote the specifications on a DEC system with automatic file versioning. Probably more like DOC.TXT;69 incrementing as they make new versions.

                                            1. 2

                                              More information on DEC automatic file versioning: Retrocomputing Stack Exchange – Filesystems with versioning – answer

                                              1. 1

                                                Fun! I didn’t realize that was a thing!

                                          1. 4

                                            I posted this partially because of the new Unicode version, and partially as an answer to people who ask why Zig doesn’t have a built-in Unicode string type.

                                            My argument is that if you want to support Unicode, you have to do so knowingly.
                                            No built-in type can exempt you from that.

                                            1. 2

                                              My argument is that if you want to support Unicode, you have to do so knowingly. No built-in type can exempt you from that.

                                              From this comment, it sounds like Swift successfully exempts developers from thinking about Unicode – if they work on non-performance-sensitive programs. Swift’s abstraction over Unicode strings could lead to unexpectedly slow operations on certain strings, so I understand why Zig wouldn’t want that.

                                              To avoid the impression that Zig doesn’t support Unicode at all, I’ll note that though the Zig language doesn’t have a Unicode type, the Zig standard library has a std.unicode struct with functions that perform Unicode operations on arrays of bytes.

                                              Do you know if there are any plans to update std.unicode given the issues raised by the author of Ziglyph in this comment – that graphemes would be a better base unit than codepoints? I only just started trying to write Unicode-aware code in Zig, but after reading about the available libraries, I wish for Zigstr or something like it to replace std.unicode in the standard library. Otherwise, I worry about developers finding std.unicode in the standard library, using it to read strings a codepoint at a time, and thinking they’ve handled everything they need to.

                                              The comments I linked to were left on an issue that was closed because no Zig language changes were needed. Would it be well-received if I opened a new issue about updating the Zig standard library as I described above?

                                              1. 2

                                                 I think that followup comment isn’t quite correct. Swift’s strings cannot be indexed using a plain numeric index as in other languages. Instead, they are indexed using the String.Index type, which must be constructed using the String instance in question, then advanced and manipulated using String index methods. All this ceremony makes it rather obvious that it’s not an O(1) operation.

                                                1. 2

                                                  I added a comment to one of the threads you linked just now, and I will reproduce it here:

                                                  @jecolon thank you for your comments. Before tagging 1.0, I will be personally auditing std.unicode (and the rest of std) while inspecting ziglyph carefully for inspiration. If you’re available during that release cycle I would love to get you involved and work with you an achieving a reasonable std lib API.

                                                  In fact, if you wanted to make some sweeping, breaking changes to std.unicode right now, upstream, I would be amenable to that. The only limitation is that we won’t have access to the Unicode data for the std lib. If you want to make a case that we should add that as a dependency of zig std lib, I’m willing to hear that out, but for status quo, that is a limitation because of not wanting to take on that dependency.

                                                  In summary, std.unicode as it exists today is mainly used to serve other APIs such as the file system on Windows. It is one of the APIs that I think is far from its final form when 1.0 is tagged, and someone who has put in the work to make ziglyph is welcome to go in and make some breaking changes in the meantime.

                                              1. 6

                                                 A slightly related Go nit: the case of structure members determines whether they’re exported or not. It’s crazy; why not explicitly add a private keyword or something?

                                                1. 19

                                                  why not explicitly add a private keyword or something?

                                                  Because capitalization does the same thing with less ceremony. It’s not crazy. It’s just a design decision.
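
                                                   For anyone unfamiliar with the rule, a tiny illustrative sketch (names made up): the first letter of each identifier is the entire visibility declaration.

                                                   package user

                                                   // User is visible to other packages because the type name is capitalized.
                                                   type User struct {
                                                       Name     string // exported: other packages can read and set it
                                                       password string // unexported: only code in package user can touch it
                                                   }

                                                   // Greet is exported; hash below is package-private.
                                                   func (u User) Greet() string { return "hello, " + u.Name }

                                                   func (u User) hash() int { return len(u.password) }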

                                                  1. 4

                                                     And limiting variable names to just “x”, “y” and “z” is also simpler and much less ceremony than typing out full variable names

                                                    1. 1

                                                      I’m not sure how this relates. Is your claim that the loss of semantic information that comes with terse identifiers is comparable to the difference between type Foo struct and e.g. type public foo struct?

                                                      1. 1

                                                        That is actually a Go convention, too. Two-letter or three-letter variable names like cs instead of customerService.

                                                    2. 6

                                                      This would be a more substantive comment chain if you can express why it’s crazy, not just calling it crazy. Why is it important that it should be a private keyword “or something”? In Go, the “or something” is literally the case sensitive member name…which is an explicit way of expressing whether it’s exported or not. How much more explicit can you get than a phenotypical designation? You can look at the member name and know then and there whether it’s exported. An implicit export would require the reader to look at the member name and at least one other source to figure out if it’s exported.

                                                      1. 7

                                                        It’s bad because changing the visibility of a member requires renaming it, which requires finding and updating every caller. This is an annoying manual task if your editor doesn’t do automatic refactoring, and it pollutes patches with many tiny one-character diffs.

                                                         It reminds me of old versions of Fortran where variables that started with I, J, K, L or M were automatically integers and the rest were real. 🙄

                                                        1. 5

                                                          M-x lsp-rename

                                                          I don’t think of those changes as patch pollution — I think of them as opportunities to see where something formerly private is now exposed. E.g. when a var was unexported I knew that my package controlled it, but if I export it now it is mutable outside my control — it is good to see that in the diff.

                                                          1. 2

                                                            I guess I don’t consider changing the capitalization of a letter as renaming the variable

                                                            1. 2

                                                              That’s not the point. The point is you have to edit every place that variable/function appears in the source.

                                                              1. 3

                                                                I was going to suggest that gofmt‘s pattern rewriting would help here but it seems you can’t limit it to a type (although gofmt -r 'oldname -> Oldname' works if the fieldname is unique enough.) Then I was going to suggest gorename which can limit to struct fields but apparently hasn’t been updated to work with modules. Apparently gopls is the new hotness but testing that, despite the “it’ll rename throughout a package”, when I tested it, specifying main.go:9:9 Oldname only fixed it (correctly!) in main.go, not the other files in the main package.

                                                                In summary, this is all a bit of a mess from the Go camp.

                                                                1. 1

                                                                  It looks like rsc’s experimental “refactor” can do this - successfully renamed a field in multiple files for me with rf 'mv Fish.name Fish.Name'.

                                                          2. 5

                                                            The author of the submitted article wrote a sequel article, Go’ing Insane Part Two: Partial Privacy. It includes a section Privacy via Capitalisation that details what they find frustrating about the feature.

                                                          3. 4

                                                            A slightly related not-Go nit, the private keyword determines whether struct fields are exported or not. It’s crazy, why not just use the case of the field names saving everyone some keypresses?

                                                            1. 2

                                                               I really appreciate it, and find myself missing it in every other language. To be honest, I have difficulty understanding why folks would want anything else.

                                                              1. 2

                                                                On the contrary, I rather like that it’s obvious in all cases whether something is exported or not without having to find the actual definition.

                                                              1. 4

                                                                Totally pointless. Cmd + Ctrl + Space is the default key combo to bring up the character palette, you can then search for whatever character you want.

                                                                1. 2

                                                                  Oh, wow, I hadn’t realized the Character Viewer (the window opened by that shortcut) had gotten so much easier to use in more recent macOS versions – or did I never notice that top-right button that converts the floating palette into a popup near the cursor?

                                                                  Opening the Character Viewer as a palette window keeps keyboard focus in the text field you are in, so it requires a lot of mouse usage to search for and to insert the character you want. I see that after toggling the Character Viewer to the popup mode (the default mode on macOS 10.15), the search field is focused after pressing ⌃⌘Space, the arrow keys select a character, and I can insert the selected character with Return. That’s much more convenient.

                                                                  1. 2

                                                                    On newer Macs (at least the legend for it, it’s prob available or configurable), it’s just straight up bound to pressing fn.

                                                                    1. 2

                                                                      Thanks for the comment. I am going to update the article with this information.

                                                                      1. 2

                                                                        Sounds good. Sorry if I came across as harsh.

                                                                    1. 5

                                                                      Admit it. If you browse around you will realize that the best documented projects you find never provide that docs generated directly from code.

                                                                      Is this saying that you shouldn’t use Javadoc or pydoc or “cargo doc”, where the documentation is located in the source files? So, from the previous point, it’s essential that docs live in the same repo as the code, but not the same files as the code? Seems like a pretty extreme position relative to the justification.

                                                                      1. 18

                                                                        As a concrete example, Python’s official documentation is built using the Sphinx tool, and Sphinx supports extracting documentation from Python source files, but Python’s standard library documentation does not use it - the standard library does include docstrings, but they’re not the documentation displayed in the Standard Library Reference. Partially that’s because Python had standard library documentation before such automatic-documentation tools existed, but it’s also because the best way to organise a codebase is not necessarily the best way to explain it to a human.

                                                                        As another example in the other direction: Rust libraries sometimes include dummy modules containing no code, just to have a place to put documentation that’s not strictly bound to the organisation of the code, since cargo doc can only generate documentation from code.

                                                                        There’s definitely a place for documentation extracted from code, in manpage-style terse reference material, but good documentation is not just the concatenation of small documentation chunks.

                                                                        1. 1

                                                                          Ah, I was thinking of smaller libraries, where you can reasonably fit everything but the reference part of the documentation on one (possibly big) page. Agreed that docs-from-code tools aren’t appropriate for big projects, where you need many separate pages of non-reference docs.

                                                                        2. 10

                                                                          There’s definitely a place for documentation extracted from code, in manpage-style terse reference material, but good documentation is not just the concatenation of small documentation chunks.

                                                                          Can’t agree enough with this. Just to attempt to paint the picture a bit more for people reading this and disagreeing. Make sure you are thinking about the complete and exhaustive definition of ‘docs’. Surely you can get the basic API or stdlib with method arity and expected types and such, but for howtos and walkthroughs and the whole gamut it’s going to take some effort. And that effort is going to take good old fashioned work by technical folks who also write well.

                                                                          It’s taken me a long time to properly understand Go given that ‘the docs’ were for a long time just this and lacked any sort of tutorials or other guides. There’s been so much amazing improvement here and bravo to everyone who has contributed.

                                                                          On a personal note, the Stripe docs are also a great example of this. I cannot possibly explain the amount of effort or care that goes into them. Having written a handful of them myself, it’s very much “a lot of effort went into making this effortless” sort of work.

                                                                          1. 8

                                                                            Yeah I hard disagree with that. The elixir ecosystem has amazing docs and docs are colocated with source by default for all projects, and use the same documentation system as the language.

                                                                            1. 2

                                                                              Relevant links:

                                                                            2. 5

                                                                              The entire D standard library documentation is generated from source code. Unittests are automatically included as examples. It’s searchable, cross-linked and generally nice to use. So yeah, I think this is just an instance of having seen too many bad examples of code-based docs and not enough good ones.

                                                                              When documentation is extracted from code in a language where that is supported well, it doesn’t look like “documentation extracted from code”, it just looks like documentation.

                                                                              1. 4

                                                                                Check out Four Kinds of Documentation. Generated documentation from code comments is great for reference docs, but usually isn’t a great way to put together tutorials or explain broader concepts.

                                                                                It’s not that documentation generation is bad, just that it’s insufficient.

                                                                                1. 2

                                                                                  Maybe the author is thinking about documentation which has no real input from the developer. Like an automated list of functions and arguments needed with no other contextual text.

                                                                                1. 3

                                                                                  A corrected link to the announcement post: Announcing the Wheel Reinvention Jam!

                                                                                    1. 16

                                                                                      Well now, I had to do a double take after blindly opening Lobsters and seeing my own blog post on the front page!

                                                                                      Hopefully others can get some use out of this feature since I find it pretty nifty :)

                                                                                      1. 3

                                                                                        Is there any way to use environment variables in the condition? I keep my global git config in a git repo and I’d love to have a mechanism for conditionally including machine-specific overrides to some of the settings.

                                                                                        1. 5

                                                                                          It doesn’t look like Git’s conditional includes feature supports reading environment variables – it only supports these keywords:

                                                                                          • gitdir
                                                                                          • gitdir/i
                                                                                          • onbranch

                                                                                          However, the Environment section of the configuration docs lists some environment variables that you could set on specific machines to change Git’s configuration.

                                                                                          Per-machine configuration using GIT_CONFIG_GLOBAL

                                                                                          Of those environment variables, GIT_CONFIG_GLOBAL seems the easiest to use. You could use it by putting these files in your config repo:

                                                                                          • shared_config
                                                                                          • config_for_machine_1
                                                                                          • config_for_machine_2

                                                                                          Within each machine-specific config, use a regular (non-conditional) include to include shared_config:

                                                                                          [include]
                                                                                          	path = shared_config
                                                                                          ; Then write machine-specific configuration:
                                                                                          [user]
                                                                                          	email = custom_email_for_this_machine@example.com
                                                                                          

                                                                                          Finally, on each of your machines, set the environment variable GIT_CONFIG_GLOBAL to that machine’s config file within your config repo.

                                                                                          Setting a default config for other machines

                                                                                          If you want some machines to just use shared_config without further configuration, name that file config instead and make sure your config repo is located at ~/.config/git/. On those machines, you don’t need to set GIT_CONFIG_GLOBAL. This will work because $XDG_CONFIG_HOME/git/config is one of Git’s default config paths.

                                                                                          1. 1

                                                                                            Hmm, not that I know of off the top of my head but I’ve never actually sat down and read the Git documentation so I’d be surprised. You could perhaps look into templating your gitconfig using something like chezmoi? There’s always nix which comes up too but that’s quite a bit overkill just for fiddling with some dotfiles of course.

                                                                                            1. 1

                                                                                              I can work around it now by generating the file locally, it’s just been a mild source of annoyance for me that I need external support for this.

                                                                                          2. 1

                                                                                            Ah, a note for anyone trying this. The original version was missing a trailing slash on the end of the includeIf directive. I’ve just pushed a fix for the typo but if you copied it earlier and were having trouble, just a heads up.
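
                                                                                             For anyone copying by hand, the corrected directive looks roughly like this (the paths are just placeholders); with gitdir the trailing slash matters because a pattern ending in / matches that directory and everything under it:

                                                                                             [includeIf "gitdir:~/work/"]
                                                                                             	path = ~/work/.gitconfig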

                                                                                          1. 14

                                                                                            An interesting counter-example: batch compilers rarely evolve to IDEs and are typically re-written. Examples:

                                                                                            • Roslyn was a re-write of the original C# compiler
                                                                                             • Visual Studio, I think, uses different compilers for building/IntelliSense for C++
                                                                                            • Dart used to have separate compilers for Dart analyzer and Dart runtime
                                                                                            • Rust is in a similar situation with rustc&rust-analyzer

                                                                                            Counter examples:

                                                                                             • clangd (C++) and Merlin (OCaml) are evolutions of batch compilers. My hypothesis is that for languages with forward declarations & header files you actually can more or less re-use a batch compiler.

                                                                                            Non-counter examples:

                                                                                             • Kotlin and TypeScript started IDE-first.

                                                                                            If I try to generalize from this observation, I get the following. Large systems are organized according to a particular “core architecture” — an informal notion about the data that the system deals with, and specific flows and transformation of the data. This core architecture is reified by a big pile of code which gets written over a long time.

                                                                                             You may find that the code works badly (bugs, adding a feature takes ages, some things seem impossible to do, etc.) for two different reasons:

                                                                                            • either the code is just bad
                                                                                            • or the core architecture is wrong

                                                                                             The first case is usually amenable to incremental refactoring (triage issues, add tests, loop { de-abstract, tease-apart, deduplicate }). The second case I think often necessitates a rewrite. The rewrite ideally should be able to re-use components between the two systems, but, sadly, the nature of core architecture is that its assumptions permeate all components.

                                                                                            For compiler, you typically start with “static world, compilation unit-at-a-time, dependencies are pre-compiled, output is a single artifact, primary metric is throughput” (zig is different a bit I believe ;) ), but for ide you want “dynamic, changing world, all CUs together, deps are analyzed on-demand, bits of output are queried on demand, primary metric is latency”.

                                                                                             It does seem that “bad code” is a much more common cause of grief than “ill-fit core architecture” though.

                                                                                            1. 4

                                                                                              As I heard it, Clang started out because Apple’s Xcode team had reached the limits of being able to use GCC in an IDE, and they wanted a new C/C++ compiler that was more amenable to their needs. (They couldn’t have brought any of GCC’s code into the Xcode process because of GPL contagion.) So while Clang may run as a process, the code behind it (all that LLVM stuff) can be used in-process by an IDE for interactive use.

                                                                                              1. 2

                                                                                                What do you mean by “GPL contagion”?

                                                                                                1. 1

                                                                                                  The GPL is a viral license.

                                                                                                  1. 1

                                                                                                    Oh wow, yikes. Thanks.

                                                                                                  2. 1

                                                                                                    That if they had linked or loaded any part of GCC into Xcode, the license would have infected their own code and they would have had to release Xcode under the GPL.

                                                                                                  3. 2

                                                                                                    It’s interesting that the clang tooling has evolved in a direction that would avoid these problems even if clang were GPL’d. The libclang interfaces are linked directly to clang’s AST and so when you link against libclang you pull in a load of clang and LLVM libraries and must comply with their license. In contrast, clangd is a separate process and talks via a well-documented interface to the IDE and so even if it were AGPLv3, it would have no impact on the license of XCode.

                                                                                                  4. 3

                                                                                                    Thanks for the counter examples, those are really interesting!

                                                                                                    I’ve been able to make changes to the core architecture of my engine incrementally on a few occasions. Some examples:

                                                                                                    • I started out with a 2D renderer, but replaced it with a 3D renderer
                                                                                                       • (adding a dimension sounds easy in theory, but the way 3d renderers are designed is very different from how 2d renderers are designed! lots of data structures had to change and tradeoffs had to be adjusted.)
                                                                                                    • I started out with hard coded character motion, but transitioned to a physics engine
                                                                                                     • I started out with a flat entity hierarchy, and transitioned to a tree structure
                                                                                                    • I started out with bespoke entities, and transitioned to an entity system

                                                                                                    These were definitely challenging transitions to make as my existing code base had a lot of hidden assumptions about things working the way they originally did, so to make it easier I broke the transitions into steps. I’m not sure I was 100% disciplined about this every time, but this was roughly my approach:

                                                                                                    1. Consider what I would have originally built if I was planning on eventually making this transition
                                                                                                    2. Transition my current implementation to that
                                                                                                    3. Make the smallest transition possible from that to something that just barely starts to satisfy the constraints of the new architecture I’m trying to adopt
                                                                                                    4. Actually polish the new thing/get it to the state I want it to be in

                                                                                                     It would be interesting to see retrospectives on projects where people concluded that this approach wasn’t possible or worthwhile and why. There could be more subtleties I’m not currently identifying that differentiate the above transitions from, e.g., what motivated Roslyn.

                                                                                                    1. 2

                                                                                                      This is super interesting, do you know of any materials on how to design an IDE first compiler?

                                                                                                      1. 3

                                                                                                        The canonical video is https://channel9.msdn.com/Blogs/Seth-Juarez/Anders-Hejlsberg-on-Modern-Compiler-Construction, but, IIRC, it doesn’t actually discuss how you’d do it.

                                                                                                        This post of mine (and a bunch of links in the end) is probably the best starting point to learn about overall architecture specifics:

                                                                                                        https://rust-analyzer.github.io/blog/2020/07/20/three-architectures-for-responsive-ide.html

                                                                                                      2. 1

                                                                                                        Did TypeScript start out “IDE-first”? I remember the tsc batch compiler being introduced at the same time as IDE support.

                                                                                                      1. 6

                                                                                                        The solution is widespread legislation that makes using people’s personal data for targeted advertising illegal or very expensive. (This is not limited to Gemini. A great many influential Internet people are convinced politics is utterly broken, so “technical solutions” are all that’s left).

                                                                                                        I don’t disagree with this, but I don’t really see Gemini as a “solution” to any political problem, nor do I have any reason to believe it was conceived as such. Rather, it is a space in which one can choose to go to opt out of the modern web – it’s a subculture, a niche, not trying to take over anything. Using Gemini in 202X is very different from using the web / gopher in 199X, the social and technological conditions are completely different. To use Gemini is to consciously reject the modern web and its trajectory – there is no money in it, no power, no real reason to use it except out of curiosity and interest. So much of social media is about metrics, engagement, advancing your career, etc – here is a space where you can explicitly reject that.

                                                                                                        https://alex.flounder.online/gemlog/2021-01-08-useless.gmi

                                                                                                        Gemini’s killer feature is that its extreme simplicity means that you can do these things with complete independence, knowing that every piece of software (client, server) is free software, easily replaceable, and created to serve the interests of the community, with no ulterior profit motive. 1 person working alone could write a basic client and/or server in a weekend, which means that production of the software ecosystem doesn’t have to be centralized. Again, I want to clarify – this isn’t to say this is the only way that software should be written, but it is allowing a space to exist that is genuinely novel and interesting.

                                                                                                        Many Gemini users are CS students in college, who are young enough to not directly experience the web as it was before “web 2.0”. Gemini is not a return to web 1.0, but a revitalization of something that was lost in the web 1->2 transition.

                                                                                                        proportionally even more white dudes.

                                                                                                        I run https://flounder.online (gemini://flounder.online), and I haven’t done a demographic survey, but from reading people’s self-descriptions on their pages, I have no reason to believe it is less diverse than tech spaces like this forum, GitHub, etc.

                                                                                                        1. 7

                                                                                                          I came here to write this. I wrote Gemini off for many of the reasons @gerikson did at first, but after actually using it, I came to realise the value wasn’t necessarily in the protocol or markup tradeoffs (which I have mixed feelings about, as a user and implementer), but in the subculture that’s developed there. I use Gemini every day, for a few different reasons, and what’s there is lovely to me.

                                                                                                          1. 3

                                                                                                            Note that the article used “proportionally even more white dudes” to describe “the halcyon days of the Internet” as compared to “today’s internet”. It wasn’t saying the Gemini community has proportionally more white dudes.

                                                                                                            1. 2

                                                                                                              My bad, I slightly misread that paragraph.

                                                                                                          1. 2

                                                                                                            The ‘BTDT’ in the title stands for Been There, Done That. (Took me a minute.)

                                                                                                            1. 11

                                                                                                              The recommendation in the article’s conclusion depends on a certain assumption, but I’d like to note that this assumption may not be necessary. The conclusion:

                                                                                                              If you want to use an online password manager, I would recommend using the one already built into your browser.

                                                                                                              Why would you want to use an online password manager – that is, one built into your browser – in the first place? By using a non-browser-based password manager, you could gain features such as storage of non-website passwords and free-form notes with each entry, while sacrificing very little convenience.

                                                                                                              The article’s introduction mentions some offline password managers such as KeePass, KeePassX, and pass. On macOS, I personally prefer KeePassXC, a successor to KeePassX.

                                                                                                              With KeePassXC, my password is not auto-filled when I visit a login page. (Perhaps KeePassXC’s browser extension has this feature, but I avoided installing it due to the principle of least privilege.) However, I can still use KeePassXC’s “auto-type” feature to simulate keyboard entry of my username and password in one go. I can also copy the username and password to the clipboard individually for pasting. The one downside to these methods is that the password manager will not warn me if I am trying to type the password into a phishing site – I have to be sure to first visit the site through a trusted bookmark or the link in the password entry.

                                                                                                              Note that “non-browser-based” doesn’t mean you will be forced to rely on one device to look up your passwords. You can use an online file sync service – a proprietary one like Dropbox or Google Drive, or an open-source one like Syncthing or ownCloud – to make your password database available on multiple devices. I use this strategy to access my KeePassXC password database on both my laptop and my phone.
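                                                                                                              Since the database is a single encrypted .kdbx file, anything that can sync a file works. As a small illustration, here is how a program could read an entry out of a database sitting in a synced folder, using the third-party pykeepass library (the folder path, file name, and entry title are made up for the example):

                                                                                                                  from pathlib import Path
                                                                                                                  from pykeepass import PyKeePass  # pip install pykeepass

                                                                                                                  # Hypothetical location of the database inside a Dropbox/Syncthing-synced folder.
                                                                                                                  db_path = Path.home() / "Sync" / "Passwords.kdbx"
                                                                                                                  kp = PyKeePass(str(db_path), password="correct horse battery staple")

                                                                                                                  entry = kp.find_entries(title="example.com", first=True)
                                                                                                                  if entry is not None:
                                                                                                                      print(entry.username)
                                                                                                                      # entry.password is available too. The file itself stays encrypted at rest,
                                                                                                                      # so the sync service only ever sees ciphertext.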

                                                                                                              As your password database is encrypted at rest, online syncing requires only trusting your file sync service not to leak your files to anyone who would spend time brute-forcing your password. I find that trust easier to give than trusting a browser-based password management company both not to leak my encrypted password database to their many attackers and not to serve me a version of the software with encryption disabled.

                                                                                                              If you use a password manager to share credentials among multiple users, you could still use a non-browser-based password manager plus a file sync service, but it’s less suited to that use case. If multiple users add a password to the database at the same time, one of them will have to manually resolve the conflict.

                                                                                                              1. 2

                                                                                                                This sounds like a decent middle ground between comfort and security. You might also consider hosting your password manager yourself. Bitwarden, which I use, is open source and has multiple server implementations. And because of the way Bitwarden’s client-server communication protocol works, I don’t have to trust my hosting provider not to read my data.
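                                                                                                                For anyone unfamiliar with why that works: the vault is encrypted and decrypted on the client with a key derived from the master password, so the server only ever stores ciphertext. A rough sketch of that principle in Python (this is not Bitwarden’s actual protocol; the cryptography library, KDF parameters, and toy vault contents are my own choices for illustration):

                                                                                                                    import base64
                                                                                                                    import os
                                                                                                                    from cryptography.fernet import Fernet  # pip install cryptography
                                                                                                                    from cryptography.hazmat.primitives.hashes import SHA256
                                                                                                                    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

                                                                                                                    def key_from_password(master_password: str, salt: bytes) -> bytes:
                                                                                                                        # Derive a symmetric key from the master password on the client.
                                                                                                                        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
                                                                                                                        return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

                                                                                                                    salt = os.urandom(16)                      # stored next to the ciphertext
                                                                                                                    key = key_from_password("hunter2", salt)   # never leaves the client
                                                                                                                    blob = Fernet(key).encrypt(b'{"example.com": "s3cret"}')

                                                                                                                    # `salt` and `blob` are all the hosting provider ever sees; without the
                                                                                                                    # master password it cannot decrypt the vault.
                                                                                                                    print(len(blob), "bytes of ciphertext")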

                                                                                                              1. 2

                                                                                                                Warning: the title is somewhat misleading. This post describes a single example of a leaky abstraction: on Windows, cutting and pasting files from a ZIP file is much slower than copying and pasting those files. The post goes into detail about why Windows’s implementation of this ZIP file operation might be slow. However, the post does not discuss leaky abstractions in general.

                                                                                                                1. 5

                                                                                                                  Has anyone else seen that first screenshot before?!

                                                                                                                  On the one hand, it’s utterly obnoxious behavior from Apple.

                                                                                                                  On the other hand, it’s a pretty niche corner-case to justify “throw it all away and start afresh.”

                                                                                                                  I’m also now noticing the irony of company A, which uses its status to get people to do things (register), commenting on company B, which uses its status to get people to do things (delete their own files). The desktop metaphor doesn’t feel like the biggest problem in this picture.

                                                                                                                  1. 4

                                                                                                                    It’s very easy to fix that — go into System Preferences, click General, and turn on the setting for reopening documents. (I’m not saying this is obvious, or that it’s the right UX, just that a solution exists.)

                                                                                                                    1. 2

                                                                                                                      Specifically, these are the related settings in System Preferences > General (in macOS 10.14):

                                                                                                                      • [ ] Ask to keep changes when closing documents
                                                                                                                      • [ ] Close windows when quitting an app
                                                                                                                        • When selected, open documents and windows will not be restored when you re-open an app.