1. 15

    Does “master” in the context of git mean master like in “master and slave”, like “master bedroom” or like “come here young master!”? I’m not a native English speaker, but it’s a word with multiple meanings, right?

    1. 12

      It is indeed a word with multiple meanings. But the people who initially developed git had previously used a tool called bitkeeper, and were inspired by its workflow. It used the term master like in “master and slave”.

      https://github.com/bitkeeper-scm/bitkeeper/blob/master/doc/HOWTO.ask#L223

      So the most benign explanation is that git used the term in the same sense.

      1. 17

        And until people started talking about it recently, if you asked anyone what the “master” branch meant, they didn’t give that answer – they thought it meant like “master copy”.

        So in a very real sense, the people promoting this explanation are actually creating an association that did not exist in people’s minds before, and incurring stereotype threat that did not need to exist.

        I understand the sentiment, but I think this has negative utility overall.

        1. 2

          I don’t think any of the meanings commonly assigned to “master” really fit git’s usage. The only explanation I have heard that I find particularly satisfying is that people accustomed to bitkeeper adopted a familiar term.

          It’s not really like a “master copy” or a “master key”. Nor does it control anything, which is usually the sense for “master/slave”. I expect if they had been working without the context of BK, it’d have been called “primary”, “default”, or “main” in all likelihood. I think giving it a clearer name rather than continuing to overload the poorly chosen term “master” has some small utility, as long as it doesn’t break too much tooling too badly.

          And I think a much more interesting question about git is whether Larry McVoy still thinks it was a good move to spawn its creation by revoking the kernel developers’ license to use it on account of Tridge’s reverse engineering efforts.

          1. 6

            I think it means master copy in that all branches come back to it when finished. So at any given point in time it has the most finished, most production-ready, most merged copy.

            Like if you are mixing a song and you put all the tracks together into a master copy. That’s like bringing all the branches together and tagging a release on master.

            If anything, git branches are in no way “slave” or “secondary,” just works in progress that will eventually make it into master, if they are good enough.

            That’s at least how I understood it.

            1. 1

              I certainly would have no argument with using main, default, or primary if creating a new system. It would be a little more descriptive, which is good. I don’t think it’s better enough to upset a convention, though.

              (One argument against master/slave in disk and database terminology, besides the obvious and very valid societal one, is that it can be terribly misleading and isn’t a good description.)

          2. 9

            But git never adopted the concept of master as in the meaning “master/slave” only in the meaning “master branch” (like “master key”), right?

            1. 3

              I thought it was more in reference to “master copy” akin to an audio recording.

              1. 1

                I don’t recall any usage of the word slave in git.

                I find it impossible to say, though, because none of the meanings you listed are really a good fit for git’s usage of the term. I think the only answer is that it was familiar from BK.

                Something like “main” or “primary” would better match the way it gets used in git.

                1. 2

                  Or something like “tip” or “trunk”…

                  yep I’m using trunk in new projects now, as an SVN reference :D

                  1. 1

                    Heh, I’m tempted to use attic to be even more contrarian then.

                    1. 1

                      I would encourage main simply because it autocompletes the same for the first two characters. :-)

                      1. 1

                        The “headhoncho” branch it is.

                2. 10

                  There is some further discussion about this on the GNOME desktop-devel mailing list. Petr Baudis, who was the first to use “master” in the git context, had intended it in the “master recording” sense.

                  Edit: Added additional link and removed “Apparently”

                  1. 17

                    Yes, for example Master’s Degree

                    1. 2

                      But it’s the dumbest branch. It knows less than other active branches; it only eventually collects the products of the work on other branches.

                      Of all the meanings of master, I can only think of one where this analogy applies.

                      It also doesn’t “do everything” like a master key, it does the same thing as all the active branches, or one thing less if a feature is completed on that branch. Code in the master branch should be the most active, so it’s not a bedroom. It’s the parent of all the others so it’s not a young master.

                      It’s a boss, a leader, a main, or indeed a slave master. Any of these analogies would fit.

                      1. 9

                        Master in git doesn’t mean master like any of those things. It means finished product, master. The exact same way it’s used in media, for example, when a “remastered” song is released.

                        1. 6

                          Gold master.

                      2. 6

                        Arguably language changes with usage, but in this case if you look at where the word came from, Git is based in many ways on Bitkeeper, which had master and slave both, so it would fall into the first category.

                        1. 15

                          But surely git isn’t using the word with that meaning, since there are no slave branches? Or?

                          1. 8

                            That’s how I feel about it, but apparently others disagree.

                            1. -1

                              Git was made by Linus Torvalds. If you know anything about the guy, you’d know that the only human aspect he takes into consideration is efficiency of tool use. Having named slaves is more useful, and once they have names there’s no reason to call them that anymore.

                              1. 8

                                Linus didn’t introduce the master branch concept, that was a dude named Petr ‘Pasky’ Baudis. He recently clarified that he intended to use it in the sense of ‘master copy’, but he may have been influenced by ‘master-slave’ terminology.

                            2. 3

                              Thinking in terms of what the authors themselves meant at the time, and whether or not the word “slave” is explicitly stated is a pretty limiting framing of the issue IMO. In reality, people react negatively to using metaphors of human dominance to describe everyday tools.

                              1. 3

                                In reality, git’s use of master has not resulted in a preponderance of negative reactions.

                                It’s used millions (billions?) of times a day with neutral to positive reactions, I expect.

                                I would like to see this empirically validated, but I think “In reality, people react negatively to using metaphors of human dominance to describe everyday tools.” is unverified at best and probably false.

                                1. 1

                                  You can either argue the need for some sort of empirical sociological analysis of the quantity of people bothered vs not bothered by the word “master” to gauge the importance of the topic, or you can make your own anecdotal assertion as to how big or important the controversy is, but it’s not terribly consistent to advocate for both IMO.

                                  I make no claim as to the number of users bothered by “master”, and I certainly wouldn’t say it’s a “preponderance” of the userbase. But again IMO you’re further missing the point if you think that the broader issue has anything to do with the particular size of the anti-“master” crowd. The fact is, if you’ve followed recent online discussion on the topic, you’ll have noticed that there’s clearly some number of users that would prefer for their main branch not to be named “master”. Does it bother you if they choose an alternative name that doesn’t draw a metaphor - intentionally or not - to systems of human hierarchy and control?

                                  1. 2

                                    I certainly haven’t read all the discussion, but I feel I’ve read a decent amount and while there are some people, it doesn’t seem that large.

                                    For me, the issue seems to be whether there is any intended malice in the term. If not, then the individuals who are offended may want to reconsider being offended.

                                    I say this because it seems like a small amount. While I would like to know the actual level, I don’t think it’s reasonable for many people using a common, non-racist connotation of the term “master” to change because some people think that there’s a metaphor to human systems of hierarchy and control that wasn’t intended by the author and isn’t interpreted as such by the vast majority.

                                    Potential offense and misinterpretation don’t seem worth the level of effort, mainly because people can be offended by all sorts of stuff. The three-tabbers are upset with the two- and four-tabbers; should we change to prevent offense?

                                    I would feel very differently if this was a racist term or the number of people offended was very large.

                                    Also, if someone chooses to make the change on their own project, it wouldn’t bother me at all. It’s their project, they can name master whatever they feel like.

                                    1. 1

                                      For me, the issue seems to be whether there is any intended malice in the term. If not, then the individuals who are offended may want to reconsider being offended.

                                      That’s highly unlikely to happen, I’m afraid.

                                      The parallel to draw is the re-branding of Uncle Ben’s and Aunt Jemima. Despite Aunt Jemima’s old marketing material looking quaint and “racist”, they were laudations of excellence in times when racism was much more rampant. Uncle Ben was a competent farmer, and no one knew as much about cake as your (likely black female) housekeeper.

                                      Now with all that’s going on, those companies have not stated that case, but instead issued statements bending a knee to the masses. This is counter-productive to any minority cause, because it literally kills off appreciation if it happens.

                                      On the coder side of the fence, where master isn’t even misinterpreted marketing material from the late 1800s but a technical detail, this will swiftly blow over, like I believe the master/slave vocabulary pretty much blew over in networking and other contexts as well.

                                      Which is not to say another word than “slave” is inherently worse, but the effort put into churning a codebase to get rid of something that’s essentially a homonym combined with a neologism is simply wasteful.

                                      It’s marginally sad, in the sense it will incur some technical difficulties, that you can’t rely on the name of the master copy of the code in the Git repository. You could before.

                                      It’s also sad that there’s very little, or nothing I know, in common code vernacular that elevates minority achievements, but I un-ironically believe that such vernacular would be at risk of being labeled racist as well :(

                                      Wouldn’t expect too many “Oh yeah, I can see that!” type responses, because people tend to “Raise shields! Go to red alert!” (In Sir Patrick Stewart’s voice) when their view is challenged, and instead give some Off-topic or Troll downvotes, maybe a defensive reply, whenever this point of view is brought up.

                                      1.  

                                        it doesn’t seem that large

                                        “Large” is a pretty relative term, and I’m frankly not sure what the point of continually attempting to size up the number of people who don’t like to call their main branch “master” is, aside from trivializing their belief.

                                        For me, the issue seems to be whether there is any intended malice in the term. If not, then the individuals who are offended may want to reconsider being offended.

                                        That may be a framing you find useful in defending its use, but from what I’ve seen, this really isn’t how the anti-“master” crowd sees it. This suggests to me that you’ve either misunderstood objections to “master” terminology as somehow connected to intent, or that you wish to make it about intent in order to win the argument, since you know that the original namer was unlikely to have been motivated by racist sentiment. By making it about intent, one makes it personal in that it’s now about the original namer vs the objector, and we’re quickly debating hot topics like cancel culture and the carceral logic of punishment and shaming. I don’t believe we have to go down that path.

                                        Along the same line, I think it’s a mistake to frame this as “offense” because “offense” implies a negative reaction to what the offended party perceives as a malicious act of intent. For example, I’m offended if I show up to a family meal and someone maliciously tries to feed me something I’m allergic to despite knowing about it, whereas I’m simply upset if I accidentally eat a peanut or something along those lines.

                                        I think a more positive framing, and one that feels more productive and truer to my reading of the anti-“master” view, is that they see the term as unnecessarily violent. Most of them are not here to inflict shame or make anyone feel bad; they’d simply prefer terminology that doesn’t fire off negative associations. One can reasonably hold a different association with the term, but notice how the debate is no longer so personal.

                                        1.  

                                          I think it’s part about intent and part about trivializing.

                                          It’s hard to trivialize incorporeal, nonspecific complaints. Perhaps if I knew the individuals, I would understand better. I’ve read the explanations and the complaints do not seem significant to me. That’s not quite trivializing, but it is assigning it a low priority.

                                          Aside from that, I think intent is very important because it’s important to understand the entire context of a situation before I pass judgement. Trying to assume someone’s intent is a recipe for grief and self-pain because it’s so hard to do that effectively.

                                          It’s much easier for me to ignore as much as possible that isn’t important. Just because someone else thinks it is important, doesn’t mean I have to acknowledge it and make it important to me.

                                          In this case, I cannot comprehend how someone could think master, as git uses it, is unnecessarily violent without laying out a cogent argument. As such, I’d rather just move on to better things than try to understand the line of reasoning where someone thinks that language is violent, as I fear it would also lead to many other expressions being deemed violent for tenuous reasons.

                                          I hope peace for people who are offended by appropriate uses of “master.” I hope that whatever trauma they experience can be healed, individually, in a way that doesn’t externalize the pain onto others.

                                          1.  

                                            Whether people use master or main doesn’t affect me in the slightest, but clearly there are plenty of people who do care quite deeply. I’m perfectly happy to let them figure it out while I’m doing the things I care about.

                                            I’d rather just move on to better things than try to understand the line of reasoning where someone thinks that language is violent

                                            I know I have a tendency to get sucked into discussions I don’t need or want to be in. If you are like me in that regard, it’s worth reflecting on how deep this discussion thread has gotten and whether that’s a good use of energy.

                                  2. 1

                                    Just calling out that I have “-1 incorrect” and “-1 troll” on this reply. I wrote two sentences, the first qualified as opinion and the second is a good faith summary of the anti-“master” branch opinion. Please tell me how I’m trolling and what about this reply is incorrect.

                                2. 3

                                  Do words become taboo only due to their original meaning, or to their current meaning? Or both? What if I make up a false etymology to make a word sound bad, do you then have an obligation to stop using it?

                              1. 3

                                I’m not familiar with client-side web dev; can anyone explain why the author claims that .sql cannot be synchronous? Would it not just hang the page until the value is returned, or is there something more complex going on?

                                1. 2

                                  Synchronous HTTP requests are deprecated … well, pretty much shunned … as they play havoc with the browser’s behaviour. It’s pretty tricky to wrap an asynchronous request up into a synchronous API, so the assumption here is that the synchronous HTTP option must be what’s being used.

                                  See: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests#Synchronous_request

                                  1. 1

                                    Browsers simply do not expose blocking APIs to javascript code. There isn’t a syscall (or equivalent) you can make that will block.

                                  1. 2

                                    So, why do you require a screen? Why not VR? I kind of remember that Steam had an application that let you access the terminal in VR. Is that not available?

                                    1. 6

                                      Personally a VR headset while lying down would give me a terrible neck ache - moving the neck while lying down is very hard work compared to being upright.

                                      Also: do not do this if you can avoid it. Many bodily systems (inc digestion) do not function well while lying down, and your muscles will start to waste within days. Lying down for 16 hours a day will cause you lasting harm within a couple of weeks.

                                      1.  

                                        ↑ This. Aside from dealing with an injury (like the author), I don’t see why anyone would completely rid their body of physical exercise. Sitting is bad as we all know but there are better ways to deal with that (a convertible sitting/standing desk + a pomodoro timer reminding me to take a break work wonders for me).

                                    1. 3

                                      My first thought is: People still use Rails?

                                      Of course, when I think about it, there is that famous curve where you have a huge uptick in coverage of a thing while there are relatively few users in the grand scheme of things (though it appears like everyone is using it, depending on what news bubble you are in). That was quite a few years ago now, so it is probably fair to say that Rails is quite mature at this point.

                                      The only thing I don’t know: is this the mature phase where usage increases slowly, or the one where usage decreases until it eventually disappears?

                                      As an aside, with the link to the Rails Doctrine, I find it interesting to note that when it talks about the “Rails migration of 2.x to 3” being painful, leaving a lot of people behind, and souring others – it has a version-number story in line with Python’s…

                                      This article should be written for every framework or library :-).

                                      1. 9

                                        Notably, this site is written in Rails.

                                        1. 1

                                          That is interesting, actually. I wouldn’t have expected that.

                                        2. 5

                                          I’m still finding rails astonishingly productive if your team can handle the (significant!) ops overhead.

                                          So many little details that work well.

                                          1. 1

                                            That is probably one of the things that kept me from ever bothering with it TBH. I find stuff like asp.net more straightforward.

                                            1. 1

                                              It really depends on how much programming work is going to be required. If you’re planning to spend 6 weeks building and 6 years operating a site, rails doesn’t really make sense. If you expect to continually have multiple programmers work on the site, the operational overhead is negligible compared to the productivity advantages.

                                              1. 1

                                                That’s a very strong claim to make.

                                                Maybe in 2005 when RoR was considered cutting edge. But today?

                                                1. 1

                                                  I’m honestly unclear which direction you think it’s strong in (I have heard much stronger opinions in both directions).

                                                  For context, I have used rails for paid work during every calendar year since 2008, though some years only had a little bit of it. In that time, I’ve held various roles including the primary on-call, lead developer, devops setup person, and also run several sites using various tech stacks.

                                                  Rails sites require a very different level of devops work to, say, golang. It’s typical for a rails app to require:

                                                  • Multiple server roles (web, worker, periodic task trigger)
                                                  • Multiple key/value stores (redis for sidekiq, memcached for caching)
                                                  • A full webpack configuration
                                                  • A relational database
                                                  • Multiple third party APIs which don’t come with backoff configured (eg mailer, captcha)
                                        1. 7

                                          Having worked on a lot of Rails codebases and teams, big and small, I think this article is pretty spot on. Once your codebase or team is big enough that you find yourself getting surprised by things like

                                          counters = %i(one two three four five)
                                          counters.each do |counter|
                                            define_method("do_#{counter}_things") do |*args|
                                              # whatever horrible things happen here
                                            end
                                          end
                                          

                                          etc… you’ve outgrown rails.

                                          1. 7

                                            This is my litmus test for “has this person had to maintain code they wrote years ago”.

                                            I don’t think I’ve yet worked with anyone who can answer yes but also wants me to maintain code that can’t be found via grep.

                                            1. 3

                                              What unholy beast is that. I mean. Seriously. Wtf is that?

                                              1. 4

                                                It’s gruesome, and I’ve seen a version of it (using define_method in a loop interpolating symbols into strings for completely ungreppable results) in at least 3 different large old codebases, where “large” is “50KLOC+” and “old” is “5+ years production”

                                                There are a lot of ways to write unmaintainable code in a lot of languages/frameworks, but if I ever were to take a Rails job again, I would specifically ask to grep the codebase for define_method and look for this prior to taking the job. It’s such a smell.

                                                1. 2

                                                  I don’t understand why it’s so normalized in Rails to define methods on the fly. You could do that monstrosity easily in most interpreted languages, but there’s a reason people don’t! In Rails, it’s just normal.

                                                  1. 4

                                                    It’s been a long time since I’ve read the ActiveRecord source so it may no longer be this way, but there was a lot of this style of metaprogramming in the early versions (Rails 1.x and 2.x) of the Rails source, ActiveRecord in particular, and I think it influenced a lot of the early adopters and sort of became idiomatic.

                                              2. 1

                                                Who the fuck writes code like this?

                                                shudders

                                                1. 1

                                                  The time between “just discovered ruby lets you do that” and “just realized why I shouldn’t” varies from person to person; I’ve seen it last anywhere between a week and a year.

                                              1. 3

                                                Stop supporting and embracing Electron apps, please.

                                                1. 4

                                                  Serious question: what’s wrong with Electron apps?

                                                  1. 15

                                                    As someone who just spent a little time attempting a port of an Electron app to FreeBSD, only to quit in disgust, I have a few opinions.

                                                    1. Electron apps are huge. Really, really, really big with a gigantic web of dependencies. Think an 18,408 line Yarn lockfile.

                                                    2. Those dependencies are JavaScript libraries. To put it mildly, there is not a large intersection between the JavaScript community and users of non-mainstream OSs (e.g. FreeBSD). And those libraries tend not to be written in a portable fashion. This example (admittedly from a few years ago now) of a library disregarding $PATH is just one.

                                                    3. Platform support in Electron is a gigantic steaming pile of bogosity based upon the wrong set of abstractions. Instead of learning from the autotools people who were doing this decades ago, they detect platforms, not features. So when a new platform comes along (say, FreeBSD) you can’t just specify which features it has and let it compile. No, you have to create a gigantic patch that touches a bazillion files, everywhere those files check for which platform it’s compiling on.

                                                    4. Once compiled and running, they’re still huge (up to 1GiB of RAM for an IM client!). And - although perhaps this is a reflection of the apps themselves, not the framework - many are sluggish as hell. Neither is an attractive prospect for resource-limited Linux machines, like PinePhones.

                                                    I had thought, prior to attempting a port of an Electron app, that people were unfairly criticizing it. Now having peeked under the covers, I don’t think people are criticizing it enough.

                                                    1. 6

                                                      As someone who isn’t an Electron hater: Electron apps are slow to load and memory hogs, which is something you might live with if you are talking about your IDE or Slack, but it starts getting really old when it’s a utility application that should load quickly or spends most of its time in your icon tray. Worse yet: poorly written Electron apps can become CPU hogs as well, but I guess the same goes for all software.

                                                      1. 3

                                                        I agree that lots of Electron apps have issues with poor performance and high memory usage. That said, a well written Electron app can perform well. For example, I’m a heavy user of the Joplin desktop application and in my experience it performs well and has fairly low memory usage (currently under about 200MB) and doesn’t seem to have the issues that plague the Slack client. Admittedly the Slack client is a lot more complex…

                                                        1. 2

                                                          Oh, I agree that there are great, performant Electron apps. VSCode is one of my favorite examples of that. Spotify is another one.

                                                          One of my biggest gripes with Electron is that - because of the nature of how it’s embedded in binaries - you usually end up with several full copies of the whole framework in memory. If you are using KDE or Gnome, most of the processes in your desktop are sharing a significant amount of memory in the form of shared libraries. This tends to be fine in systems with 16GB+ of memory and a fast CPU, but for people with more meager resources… it’s a drag.

                                                        2. 2

                                                          I’m sure performance issues will be addressed in time.

                                                          1. 13

                                                            Electron has been around since 2013, and typing in Slack still has a noticeable latency (which drives me crazy). I also still have to restart it once a day or so to keep it from becoming more and more laggy.

                                                            In the meanwhile, ripcord was developed by a single indie developer in Qt. Has most of Slack’s functionality, only uses a fraction of the memory, and is lightning fast. Oh, and it is multi-platform.

                                                            People (not you) claim that it is only possible to write cross-platform applications in Electron. Nothing could be further from the truth; people have been writing cross-platform apps in Qt literally for decades. (And it’s not hard either.)

                                                            1. 2

                                                              I’m not sure that I would consider Slack a stellar example of an Electron app. Slack is slow even by Electron standards. VS Code’s latency is indistinguishable from typing a Lobsters comment in Chromium on my middle-of-the-road desktop machine. Discord is a much better Electron-based chat app from a performance standpoint, in my experience.

                                                              People (not you) claim that it is only possible to write cross-platform applications in Electron. Nothing could be further from the truth; people have been writing cross-platform apps in Qt literally for decades. (And it’s not hard either.)

                                                              For commercial software, the more important part is not whether it’s possible (or “hard”), but whether it’s commercially viable. Without any hard data one way or another, I’d say that writing Electron apps is much less expensive than writing native Qt apps for most companies (especially since web technology experience is much easier to come by).

                                                              1. 1

                                                                I don’t mind electron, but even VS code drops 1-2 frames on keypress on my threadripper desktop (and Chrome/Firefox do not). So far I’m putting up with it for the language server integration.

                                                            2. 4

                                                              I’m sure once the performance issues are addressed the complaints about performance issues will subside.

                                                              1. 1

                                                                I’m looking forward to the day that systems like Electron will compile everything to WebAssembly as a build step. In a way, I think Gary Bernhardt might have been more correct than I gave him credit for in his famous The Birth & Death of JavaScript presentation.

                                                            3. 3

                                                              There are the utilitarian critiques (they are big and slow) and there’s also the sort of Mac critique (they are not in any way native) and there’s my weird “I HATE THE WEB” critique that is probably not widely shared. I have a couple of them that I use daily, but I really, really, really wish I didn’t.

                                                          1. 4

                                                            That’s rather sad. Google giveth, Google taketh.

                                                            On the other hand, maybe it’s time to host a community service.

                                                            Or just have each hosting service run their own godoc. The question then would be: how do you learn which godoc instance belongs to a specific source code host, so that you can traverse the dependency graph?

                                                            1. 6

                                                              Back when I was using “clone & check your dependencies in per-project” for package management (like much else in go-land, it was a hassle but made many things very simple & reliable), running a local godoc was amazing. You got docs for the exact versions of every package you had installed, and nothing else, and every page rendered instantly.

                                                              1. 1

                                                                You got docs for the exact versions of every package you had installed, and nothing else, and every page rendered instantly.

                                                                I think this did not change; just run godoc inside a Go modules project and you will see the exact same thing.

                                                                Back when I was using “clone & check your dependencies in per-project” for package management..

                                                                This got easier with Go modules as well: just run go mod vendor to set up the /vendor directory.

                                                                1. 1

                                                                  Godoc may well have learned to respect go modules.

                                                                  It did not support it at the time go modules became the official recommendation (nor for many months afterwards), which I found to be a bothersome oversight.

                                                              2. 5

                                                                On the other hand, maybe it’s time to host a community service.

                                                                godoc.org actually started out as a community project, and was moved to the golang organisation in 2014.

                                                              1. 12

                                                                I’d be really hesitant to link a Slack thread in a commit message. Even permissions aside, Slack limits the history access on some plans so this link may soon become invalid.

                                                                1. 4

                                                                  I prefer analysis and notes go in a bug tracker for a variety of reasons (e.g., you can amend it later if URLs change or to make it clear the analysis was somehow incorrect) – but, yes, whether in bug notes or in the commit message itself, it definitely feels better to paraphrase, summarise, or even just copy and paste the salient details rather than just link to a discussion in some other potentially ephemeral mechanism or something volatile like a gist.

                                                                  1. 1

                                                                    Agreed, IMHO details and discussion should all be in the bug tracker - not in Slack or in commit messages. The commit message should have a short ”what does this change do” explanation and a reference to the bug tracker where more details can be found if needed. I don’t agree with the need to put an entire essay in the commit message that seems to be popular on the Internet.

                                                                    1. 16

                                                                      My 12-year-old codebase is on its fourth bug tracker. None of the links in commit messages still work, but the repository kept history when it got converted to git, so 12-year-old commit messages still carry useful context.

                                                                      1. 2

                                                                        As other comments mention, commit history has a longer lifetime than bug trackers. Of course, you can port your tickets when you change trackers, but will you?

                                                                        1. 2

                                                                          Yes, of course. Not migrating your hard earned knowledge would be an incredible destruction of time and money, especially in a commercial setting.

                                                                          1. 1

                                                                            Commits are for non-commercial settings as well, and sometimes migration can’t be done, e.g. when you rehome but don’t move a GitHub repository (if, for example, the original access controls were vested in single humans who then apparently vanished).

                                                                            Keeping content in an issue tracker is nice, but it’s always worth duplicating knowledge into the git commit.

                                                                            Catastrophic data loss can always happen, even in commercial settings.

                                                                      2. 1

                                                                        I mostly agree here. But I do think that links can belong in a commit message, like when referencing a design document. Since design docs are usually a snapshot of a feature’s design, they often carry context on the motivations behind decisions, as well as the surrounding discussions, that would be too long to type out in a commit message.

                                                                      3. 1

                                                                        Interesting point. It’s a trade-off between providing context easily and future proofing. I was assuming a paid plan, which has no historical limits. I don’t think free slack is a good fit for this, because of the memory hole.

                                                                        1. 2

                                                                          I’m fine with linking to a fully fledged bug tracker we know we’ll keep using (yeah…), but something like Slack feels far too flimsy to me. It’s not clear where the context begins and where it ends, as the discussion often evolves and drifts away to other related topics. A chat platform just isn’t a fit here in my opinion.

                                                                      1. 8

                                                                        I first ran across this concept around two years ago in Hasura’s ACL* docs; Hasura uses table- and row-based ACLs. At the time I was working on a Rails app and I was surprised by how much we were doing in the application code that could actually be pushed to the database level.

                                                                        I recently interviewed at quite a few companies and only one of them had any interest in discussing using DB ACL* features. There’s no bias against it, just not a lot of knowledge about how to set it up and where it becomes difficult.

                                                                        After spending some time reading about setting up database level ACLs and thinking about it, I think the issue that holds back a lot of these use cases is the opacity of seeing which options are set. Unlike with source-code where you can read the code without running it, you need to access an up-to-date instance of the database in order to find out how it’s configured.

                                                                        This is just a bit too different from how we’re used to working and that small change probably makes people investing in building startups reluctant to embrace these changes, because they’re perceived as an eventual bottleneck to scaling.

                                                                        For FAANG and Fortune500 companies, I think the main reason they don’t implement controls at the DB level is that they’ve already solved them in the application code and there’s a natural level of inertia to re-writing a solved problem.

                                                                        * ACL stands for Access control list

                                                                        1. 4

                                                                          After spending some time reading about setting up database level ACLs and thinking about it, I think the issue that holds back a lot of these use cases is the opacity of seeing which options are set. Unlike with source-code where you can read the code without running it, you need to access an up-to-date instance of the database in order to find out how it’s configured.

                                                                          Do you think it would help if the DB forced changes to these options to go through the config file? It would be like nginx: to change a setting at runtime, you edit the config and reload.

                                                                          I wonder how far you could take this idea. Would it make sense for the DB to require all DDL to go through the config file?

                                                                          1. 1

                                                                            That’s a great idea! I think it fits well with the broader movement to “infrastructure-as-code” and for me at least, the database is much more infrastructure than code. I think this is a bit how Prisma’s data modeling works.

                                                                            One of the problems I’ve had reasoning about databases in the past is that often I find myself reading through migration logs, which is the equivalent of reading through git .patch files to find the current state of your codebase.

                                                                            One of the biggest benefits I found from using Hasura for a couple projects was that it gave me a UI to view my current database schema at a glance.

                                                                            1. 1

                                                                              “Developers need to know the DB schema of production” is not a super hard problem to solve in a secure way, but most solutions in use are either insecure or subject to drift.

                                                                              A nightly job that dumps the schema somewhere accessible might fail a particularly tight security audit, but I suspect most would let it pass after giving it a close look.
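
                                                                              As a rough sketch of the kind of nightly job meant here (pg_dump --schema-only would also do), the query below only reads information_schema, so it works against whatever schema production actually has:

                                                                              -- One way to snapshot "what does production look like" on a schedule:
                                                                              -- dump column definitions somewhere developers can read them.
                                                                              SELECT table_name, column_name, data_type, is_nullable
                                                                                FROM information_schema.columns
                                                                               WHERE table_schema = 'public'
                                                                               ORDER BY table_name, ordinal_position;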

                                                                          1. 22

                                                                            I agree lots of people don’t, because they never even bother to learn anything past GRANT ALL…

                                                                            So, who out there has used these features?

                                                                            We use PG’s row-level security exclusively. It is 100% worth the introduction pain. Every user of our software has their own PG DB login, and that’s how they login to the application.

                                                                            How did that impact your application/environment?

                                                                            The application does only one thing with access control: what shows up on the menus (well, and logging in). 100% of the rest of the app is done via PG RLS, and the app code is a bunch of select * from employees; kind of thing.

                                                                            What have you used them for?

                                                                            Everything, always! :) lol. (also see next answer)

                                                                            Do they provide an expected benefit or are they more trouble than they’re worth?

                                                                            When we got a request to do a bunch of reporting stuff, we connected Excel to PG, had them log in with their user/password to the PG DB, and they were off and running. If the user knows SQL, we just hand them the host name of the PG server and let them go to town; they can’t see anything more than the application gives them anyway.

                                                                            When we added Metabase, for even more reporting, we had to work hard: we added a reporting schema, then created some views, and Metabase handles the authorization, which sucks. Metabase overall is great, but it’s really sad there isn’t anything in reporting land that will take advantage of RLS.

                                                                            How did you decide to use them?

                                                                            When we were designing the application, PG was just getting RLS; we tried it out and were like, holy cow… why try to create our own when PG did all the work for us!

                                                                            Trying to get access control right in an application is miserable.

                                                                            Put permissions with the data, you won’t be sorry.
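
                                                                            For anyone who hasn’t seen RLS before, a minimal sketch of the kind of policy this setup relies on (the table, column, and role names here are made up, not the parent poster’s actual schema):

                                                                            -- Each application user is a real PG login; the policy keys off current_user.
                                                                            CREATE TABLE employees (
                                                                                id        serial PRIMARY KEY,
                                                                                name      text NOT NULL,
                                                                                db_login  name NOT NULL   -- the PG role that owns this row
                                                                            );

                                                                            ALTER TABLE employees ENABLE ROW LEVEL SECURITY;

                                                                            -- "select * from employees" now only returns rows the connected role may see.
                                                                            CREATE POLICY employees_own_rows ON employees
                                                                                FOR SELECT
                                                                                USING (db_login = current_user);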

                                                                            1. 6

                                                                              Doesn’t this require a lot of open connections? IME, postgres starts to struggle past a couple of hundred open connections. Have you run into that at all?

                                                                              1. 5

                                                                                If you run everything inside of transactions you can do some cleverness to set variables that the RLS checks can refer to, emulating lots of users but without requiring more connections.
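
                                                                                A sketch of that trick, assuming a made-up app.current_user_id setting and a widgets table with an owner_id column:

                                                                                BEGIN;
                                                                                -- SET LOCAL only lasts until COMMIT/ROLLBACK, so the pooled connection stays clean.
                                                                                SET LOCAL app.current_user_id = '42';

                                                                                -- A policy can then read the setting instead of relying on the PG login, e.g.:
                                                                                -- CREATE POLICY widgets_owner ON widgets
                                                                                --     USING (owner_id = current_setting('app.current_user_id')::int);

                                                                                SELECT * FROM widgets;
                                                                                COMMIT;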

                                                                                1. 2

                                                                                  See my other comment, but you don’t have to work quite that hard, PG has a way now to become another user.

                                                                                  see: https://www.postgresql.org/docs/12/sql-set-session-authorization.html

                                                                                  1. 2

                                                                                    How many users does this support? I might be being overly cautious, but we have been told to look at user counts in the millions.

                                                                                    1. 2

                                                                                      We are an internal staff application; we max out around 100 live open DB connections, so several hundred users. This runs in a stable VM with 32GB RAM and < 1TB of data. We will never be a Google or a Facebook.

                                                                                      One can get really far by throwing hardware at the problem, and PG can run on pretty big hardware, but even then, there is a max point. Generally I recommend not optimizing much at all for being Google size, until you start running into Google sized problems.

                                                                                      Getting millions of active users out of a single PG node would be hard to do, regardless of anything else.

                                                                                2. 2

                                                                                In our experience, the struggle is around memory: PG connections each take up some memory, and you have to account for that. I don’t remember the exact amount per connection, but that’s the constraint I remember.

                                                                                It’s not entirely trivial, but you can re-use connections. You authenticate as a superuser (or equivalent) and send AUTH or something like that after you connect; too lazy to go look up the details.

                                                                                  We don’t currently go over about 100 or so open active connections and have no issues, but we do use pgbouncer for the web version of our application, where most users live.

                                                                                  EDIT: it’s not AUTH but almost as easy, see: https://www.postgresql.org/docs/12/sql-set-session-authorization.html
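
                                                                                Roughly what that looks like with SET SESSION AUTHORIZATION (the role name is made up); the pooled connection authenticates once as a privileged login and then switches identity per request:

                                                                                SET SESSION AUTHORIZATION 'alice';   -- GRANTs and RLS now apply as if alice had logged in
                                                                                SELECT * FROM employees;             -- returns only what alice is allowed to see
                                                                                RESET SESSION AUTHORIZATION;         -- hand the connection back as the privileged login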

                                                                                3. 3

                                                                                  How do RLS policies impact performance? The Postgres manual describes policies as queries that are evaluated on every returned row. In practice, does that impact performance noticeably? Were there gotchas that you discovered and had to work around?

                                                                                  1. 3

                                                                                    Heavily.

                                                                                    It is important to keep your policies as simple as possible. E.g. if you mark your is_admin() as VOLATILE instead of STABLE, PG is going to happily call it for every single row, completely destroying performance. EXPLAIN is your best friend.
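
                                                                                  As a sketch of the difference (the is_admin() helper and the admins role here are hypothetical): declaring the function STABLE tells the planner its result won’t change within a statement, so it doesn’t have to be re-evaluated for every row.

                                                                                  -- Hypothetical policy helper; STABLE (rather than the default VOLATILE)
                                                                                  -- lets PG avoid calling it once per returned row.
                                                                                  CREATE FUNCTION is_admin() RETURNS boolean
                                                                                      LANGUAGE sql STABLE
                                                                                  AS $$ SELECT pg_has_role(current_user, 'admins', 'MEMBER') $$;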

                                                                                    But even then, some queries are performed needlessly. Imagine you use transitive ownership. For example Users own Boxes, Boxes contain Widgets. When you want to determine what Widgets can User manipulate, you usually cache the User-Boxes set at the application server level and query “downwards”. With RLS, you need to establish a link between the Widget and User, joining over Boxes “upwards” as there is no cache.

                                                                                    The real problem here is that with sufficiently large schema, the tools are lacking. It’s really inconvenient to develop within pgAdmin4, away from git, basically within a “live” system with its object dependencies and so on.

                                                                                    1. 2

                                                                                    It can. As I mentioned in my other comment in this thread, we have only run into a few instances where performance was an issue we had to do something about.

                                                                                    As for tools, we use liquibase[0], and our schemas are in git, just like everything else.

                                                                                      0: https://www.liquibase.org/

                                                                                      1. 1

                                                                                        I’ll check it out.

                                                                                        1. 1

                                                                                          How does the Liquibase experience compare to something like Alembic or Django migrations?

                                                                                          The main difference I see is whether your migrations tool is more tightly coupled to your app layer or persistence layer.

                                                                                          With Alembic you write migration modules as imperative Python code using the SQL Alchemy API. It can suggest migrations by inspecting your app’s SQL Alchemy metadata and comparing it to the database state, but these suggestions generally need refinement. Liquibase appears to use imperative changesets that do basically the same thing, but in a variety of file formats.

                                                                                          1. 2

                                                                                          I’m not very familiar with alembic or django migrations. Liquibase (LB) has been around a long time; it was pretty much the only thing doing schema in VCS back when we started using it.

                                                                                            Your overview tracks with my understanding of those. I agree LB doesn’t really care about the file format, you can pick whatever suits you best.

                                                                                            The LB workflow is pretty much:

                                                                                            • Figure out the structure of the change you want in your brain, or via messing around with a development DB.

                                                                                            • Open your favourite editor and type that structure change into your preferred file format (e.g. a SQL-formatted changeset; a sketch follows the list).

                                                                                            • Run LB against a test DB to ensure it’s all good, and you didn’t mess anything up.

                                                                                            • Run LB against your prod DB.

                                                                                            • Go back to doing whatever you were doing.
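
                                                                                            For anyone who hasn’t seen one, a SQL-formatted changeset is just a SQL file with a few magic comments; something like this (the author, id and the change itself are invented here):

                                                                                                --liquibase formatted sql

                                                                                                --changeset alice:add-widget-colour
                                                                                                -- Hypothetical change: add a column and backfill a default.
                                                                                                ALTER TABLE widgets ADD COLUMN colour text;
                                                                                                UPDATE widgets SET colour = 'grey' WHERE colour IS NULL;
                                                                                                --rollback ALTER TABLE widgets DROP COLUMN colour;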

                                                                                      2. 1

                                                                                        We actually use an OSS extension, veil[0], and while performance can be an issue, like @mordae mentions, if you are careful about your use it’s not too bad. We have only had a few performance issues here and there, but with EXPLAIN and some thinking we have always managed to work around them without much hassle. You absolutely want indexes on the columns you are using for permission checks.
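
                                                                                        For example (names invented, not an actual schema), the kind of index that keeps a per-row policy check from turning into a sequential scan:

                                                                                            -- If policies look up rows by owner, index that column.
                                                                                            CREATE INDEX boxes_owner_id_idx ON boxes (owner_id);
                                                                                            -- If permissions live in their own table, index the key the policy uses.
                                                                                            CREATE INDEX permissions_user_row_idx ON permissions (user_id, widget_id);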

                                                                                        Veil makes the performance a lot less painful, in our experience.

                                                                                        0: https://github.com/marcmunro/veil Though note, veil2 is the successor and is more relevant for new implementations; we don’t currently use it (and have no experience with it): https://github.com/marcmunro/veil2

                                                                                        Veil2 talks about performance in section 23.1: https://marcmunro.github.io/veil2/html/ar01s23.html

                                                                                      3. 3

                                                                                        Same here, I used row level security everywhere on a project and it was really great!

                                                                                        1. 2

                                                                                          One mistake I’ve made is copy-pasting the same permission checks on several pages of an app. Later I tried to define the permissions all in one place, but still in the application code (using “django-rules”). But you still had to remember to check those permissions when appropriate. Also, when rendering the page, you want to hide or gray out buttons if you don’t have permission on that action (not for security, just niceness: I’d rather see a disabled button than click and get a 403).

                                                                                          With row-level permissions in the DB, is there a way to ask the DB “Would I have permission to do this update?”

                                                                                          1. 2

                                                                                            Spitballing, but maybe you could try running the query in a transaction and then rolling it back?

                                                                                            Would be a bit costly because you’d have to run the query twice, once to check permissions and then again to execute, but it might be the simplest solution.
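
                                                                                            A rough sketch of that dry-run idea (hypothetical table; with a USING policy the UPDATE simply affects zero rows when you lack access):

                                                                                                BEGIN;
                                                                                                -- Attempt the change; the reported row count tells you whether RLS
                                                                                                -- let it through (e.g. psql prints “UPDATE 0” or “UPDATE 1”).
                                                                                                UPDATE widgets SET name = 'test' WHERE id = 42;
                                                                                                -- Throw the change away either way.
                                                                                                ROLLBACK;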

                                                                                            1. 1

                                                                                              Maybe what you need is to define a view that selects the things you can update and use that view to define the RLS. Then you can check whether the thing you want to update is visible through the view.
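
                                                                                              Sketching that with made-up names: a view that mirrors the policy’s condition, which the UI can probe before rendering the button:

                                                                                                  -- Widgets the current user may update, per the same ownership rule
                                                                                                  -- the RLS policy uses.
                                                                                                  CREATE VIEW my_updatable_widgets AS
                                                                                                  SELECT w.*
                                                                                                  FROM widgets w
                                                                                                  JOIN boxes b ON b.id = w.box_id
                                                                                                  WHERE b.owner_id = current_setting('app.user_id', true)::int;

                                                                                                  -- “Would I be allowed to update widget 42?”
                                                                                                  SELECT EXISTS (SELECT 1 FROM my_updatable_widgets WHERE id = 42);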

                                                                                              1. 1

                                                                                                With row-level permissions in the DB, is there a way to ask the DB “Would I have permission to do this update?”

                                                                                                Yes, the permissions are just entries in the DB, so you can query/update whatever access you want (provided you have the access to view/edit those tables).

                                                                                                I’m writing this from memory, so I might be wrong in the details… but what we do is have a canAccess() function that takes a row ID and returns the permissions that user has for that record. So on the view/edit screens/pages/etc., we get the permissions returned to us as well, so it’s no big deal to handle.
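
                                                                                                Purely to illustrate the shape of such a helper (this is a made-up sketch with a hypothetical permissions table, not the actual function), something like:

                                                                                                    -- Given a row ID, return the permission names the current user
                                                                                                    -- holds on it.
                                                                                                    CREATE FUNCTION can_access(widget_id int) RETURNS text[]
                                                                                                    LANGUAGE sql STABLE AS $$
                                                                                                      SELECT coalesce(array_agg(p.permission), '{}')
                                                                                                      FROM permissions p
                                                                                                      WHERE p.widget_id = can_access.widget_id
                                                                                                        AND p.user_id = current_setting('app.user_id', true)::int;
                                                                                                    $$;

                                                                                                    -- The view/edit screens can fetch the row and its permissions together:
                                                                                                    SELECT w.*, can_access(w.id) AS perms FROM widgets w WHERE w.id = 42;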

                                                                                            2. 1

                                                                                              Follow-up question: How did you handle customers (even accidentally) writing expensive SQL queries?

                                                                                              1. 2

                                                                                                We would admonish them appropriately :) Mostly the issue is making sure they know about the WHERE clause. It hasn’t been much of an issue so far. We have _ui table views that probably do 90% of what they want anyway, and they know to just use those most of the time. The _ui views flatten the schema out to make the UI code easier, and use proper WHERE clauses and FK joins to minimize resource usage.
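
                                                                                                A tiny sketch of what such a flattening _ui view can look like (the schema here is invented; only the naming convention comes from the comment above):

                                                                                                    -- Join the normalized tables once, so UI code and ad-hoc SQL users
                                                                                                    -- get a flat shape with the FK joins already done.
                                                                                                    CREATE VIEW widgets_ui AS
                                                                                                    SELECT w.id,
                                                                                                           w.name,
                                                                                                           b.name  AS box_name,
                                                                                                           u.email AS owner_email
                                                                                                    FROM widgets w
                                                                                                    JOIN boxes b ON b.id = w.box_id
                                                                                                    JOIN users u ON u.id = b.owner_id;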

                                                                                                If our SQL user count grew enough that we couldn’t handle it off-hand like this, we would probably just spin up an RO slave mirror, let them battle each other over resources, and otherwise ignore the problem until we got enough complaints to upgrade resources again.

                                                                                            1. 11

                                                                                              Looking at the list, it feels like the motivation for many of these APIs is to help close the gap between Chromebooks and other platforms. I can’t understand it otherwise.

                                                                                              Web MIDI - really? Is there ever going to be a world where music professionals are going to want to work in the web browser instead of in for-purpose software that is designed for high-fidelity low-latency audio?

                                                                                              1. 7

                                                                                                For Web MIDI there are some nice uses, for example Sightreading Training - this site heavily benefits from being able to use a connected MIDI keyboard as a controller, rather than having the user use their regular keyboard as a piano, which is pretty impractical (and limited).

                                                                                                Another website which uses the Web MIDI API is op1.fun - it uses MIDI to let you try out the sample packs right on the website, without downloading them.

                                                                                                So no, it’s probably never going to be used for music production, but it’s nice for trying things out.

                                                                                                1. 17

                                                                                                  Not everything that’s “nice” should be shipped to millions of people, “just in case”.

                                                                                                  1. 8

                                                                                                    Which is why these APIs should be behind a permission prompt (like the notification or camera APIs). If you don’t want it, it stays off; if you want it, you can let only the sites that will actually use it for something good have access.

                                                                                                    1. 4

                                                                                                      Yeah, +1 on this. So many of these things could be behind a permission. It feels really weird to hear a lot of these arguments when we have webcam integration in the browser and it’s behind a permission. One of the most invasive things is already in there, and the security model seems to work exceptionally well!

                                                                                                      The browser is one of the most successful sandboxes ever, so having it be an application platform overall is a Good Thing(TM). There’s still a balance, but “request permission to access your MIDI/USB devices” seems to be something that falls into existing interaction patterns, just like webcams.

                                                                                                      For stuff like Battery Status I feel like you would need to offer a stronger justification, but even stuff like Bluetooth LE would be helpful for things like conference attendees not needing to install an app to do some stuff.

                                                                                                      1. 3

                                                                                                        I don’t fully understand why webUSB, webMIDI etc permissions are designed the way they are (“grant access to a class of device” rather than “user selects a device when granting access”).

                                                                                                        I want some sites to be able to use my regular webcam. I don’t want any to have access to my HMD cam/mic because those are for playing VR games and I don’t do that in Firefox.

                                                                                                        However, Firefox will only offer sites “here’s a list of devices in no particular order; have fun guessing which the user wants, and implement your own UI for choosing between them”.

                                                                                                2. 6

                                                                                                  Don’t forget Android Go - Google’s strategy of using PWAs to replace traditional Android apps.

                                                                                                  Is there ever going to be a world where music professionals are going to want to work in the web browser instead of in for-purpose software that is designed for high-fidelity low-latency audio?

                                                                                                  Was there ever going to be a world where programmers want to edit code in a browser rather than in a for-purpose editor? Turns out, yes there is, and anyway, what programmers want doesn’t really matter that much.

                                                                                                  1. 3

                                                                                                    Yes, web MIDI is useful. Novation has online patch editors and librarians for some of their synths, like the Circuit. I’ve seen other unofficial editors and sequencers. And there are some interesting JS-based synths and sample players that work best with a MIDI keyboard.

                                                                                                    It’s been annoying me for years that Safari doesn’t support MIDI, but I never knew why.

                                                                                                    MIDI doesn’t carry audio, anyway, just notes and control signals. You’d have the audio from the synth routed to your DAW app, or recording hardware, or just to a mixer if you’re playing live.

                                                                                                    1. 3

                                                                                                      Apple’s concern seems to be that WebMIDI is a fringe use case that would be most popular for fingerprinting (e.g. figuring out which configuration your OS and sound chipset drivers offer, as an additional bit of information about the system and/or user configuration).

                                                                                                      I’d love to see such features present but locked away behind user opt-in, but then it’s still effort to implement, and that’s where this devolves into a resource-allocation problem given all the other things Apple can use Safari developers for.

                                                                                                      1. 4

                                                                                                        It’s not just fingerprinting that’s the concern.

                                                                                                        The WebMIDI standard allows sites to send control signals (SysEx commands to load new patches and firmware updates) to MIDI devices. The concern is that malicious sites may be able to exploit this as an attack vector: take advantage of the fact that MIDI firmware isn’t written in a security-conscious way to install new executable code via a malicious SysEx, and then turn around and use the MIDI device’s direct USB connection to attack the host.

                                                                                                        1. 1

                                                                                                          Could this be prevented by making the use of Web MIDI limited to localhost only?

                                                                                                          1. 2

                                                                                                            At that point you’re requiring a locally run application (if nothing else, a server). In which case you might as well just have a platform app - if nothing else you can use a web engine with additional injected APIs (which things like Electron do).

                                                                                                            1. 1

                                                                                                              Ok, thanks for your reply.

                                                                                                        2. 1

                                                                                                          It may be worth pointing out that Apple also has some incentive to protect its ecosystem of native music apps.

                                                                                                    1. 2

                                                                                                      A question borne out of curiosity: what do Ansible and Kubes get you for a small homelab setup over well-managed and well-behaved systemd units run on one Linux install deployed with something like nixops?

                                                                                                      1. 2

                                                                                                        You can think of ansible as the equivalent to nixops in the realm of stateful system management. Both provide secrets management, orchestration across multiple machines and varying degrees of infra as code. The main difference to nixops is that it works across operating system boundaries without remote builder setups.

                                                                                                        1. 2

                                                                                                          The main difference to nixops is that it works across operating system boundaries

                                                                                                          That matches my (albeit limited) ansible experience. You can generally point it at an existing install and it’ll do the “right thing”.

                                                                                                          without remote builder setups

                                                                                                          Do you really need a remote builder for nixops? I’ve had good experience with local builds. Granted, you need a beefy enough build machine if things get complicated, but I guess that’s what the binary caches are for. Is that what you mean?

                                                                                                          1. 1

                                                                                                            Do you really need a remote builder for nixops?

                                                                                                            I tried using it to provision a Linux box from a Darwin machine; cross-compilation is quite messy. (Also I’d recommend nixus, it’s far less annoying to operate than nixops!)

                                                                                                        2. 1

                                                                                                          Depends on what your goals are for the homelab. In my case I like the chance to try to understand tools I use at work better at home.

                                                                                                          Several years ago, my company was moving from Puppet to Ansible for our infrastructure management, and I spent some weeks writing Ansible roles in different ways for each component of my homelab. It grew my skillset and I didn’t have to waste a lot of billable hours on learning a new tool.

                                                                                                          Similarly with Kubernetes, which we’re currently implementing to host our web applications, I’ve been writing my own Dockerfiles and standing them up in a minikube instance at home to get a feel for the environment.

                                                                                                          1. 1

                                                                                                            It grew my skillset and I didn’t have to waste a lot of billable hours on learning a new tool.

                                                                                                            Why is that a good thing? Are you expected to know everything at work?

                                                                                                            1. 1

                                                                                                              No? But it sure does make me a much more effective and valuable part of my team. And I like learning?

                                                                                                              1. 1

                                                                                                                If you’re billing hourly, usually yes.

                                                                                                          1. 3

                                                                                                            I’ve yet to find an office chair that doesn’t suck. It’s frustrating that the return period for these multi-hundred dollar ‘investments’ is shorter than the length of time it takes me to determine that the chair causes me some new pain/discomfort. Other than reading posts where people win the chair fitting lottery and just talk about how they found the perfect one (for them), is there some better information to guide folks to finding chairs that can be adjusted to fit them adequately?

                                                                                                            1. 2

                                                                                                              I’ve had many.

                                                                                                              Eventually I picked up a second-hand Mirra (RRP over 1k), which is adjustable enough for most people. It has served me well for 5 years now.

                                                                                                              Herman Miller (the Mirra manufacturer) started out making hospital equipment for convalescent patients, so they understand how to support people even when their bodies aren’t holding firm.

                                                                                                              AFAICT Nobody can make a really good chair for only a few hundred dollars.

                                                                                                            1. 6

                                                                                                              It is simple (and cheap) to run your own mail server; they even sell them pre-baked these days, as the author wrote.

                                                                                                              What is hard and requires time is server administration (security, backups, availability, …) and $vendor black-holing your emails because it’s Friday… That’s not so hard that I’d let someone else read my emails, but YMMV. :)

                                                                                                              1. 8

                                                                                                                not so hard that I’d let someone else read my emails

                                                                                                                Only if your correspondents also host their own mail. Realistically, nearly all of them use gmail, so G gets to read all your email.

                                                                                                                1. 4

                                                                                                                  I have remarkably few contacts on GMail, so G does not get to read all my email, but you’re going to say that I’m a drop in the ocean. So be it.

                                                                                                                  1. 4

                                                                                                                    you’re going to say that I’m a drop in the ocean. So be it.

                                                                                                                    I don’t know what gave you that impression. I also host my own email. Most of my contacts use gmail. Some don’t. I just don’t think you can assume that anyone isn’t reading your email unless you use pgp or similar.

                                                                                                                    1. 1

                                                                                                                      Hopefully Autocrypt adoption will help.

                                                                                                                      1. 2

                                                                                                                        This is the first time I’m hearing of Autocrypt. It looks like just a wrapper around PGP encrypted email?

                                                                                                                        1. 1

                                                                                                                          This is a practice described by a standard that helps spread the use of PGP by passing keys around.

                                                                                                                          What if every cleartext email you received already had a public PGP key attached, and everyone’s mail client had its own key and did the same, attaching the key to every new cleartext mail?

                                                                                                                          Then you could reply to anyone with a PGP-encrypted message, and write new messages to everyone encrypted. That would give a first level where every communication is encrypted, with a not-so-strong trust model compared to exchanging keys by whispering every byte of the public key in base64 into someone’s ear, alone in Alaska, but as a first step it brings many more people to use PGP.

                                                                                                                          I think that is the spirit; more info at https://autocrypt.org/ and https://www.invidio.us/watch?v=Jvznib8XJZ8

                                                                                                                          1. 2

                                                                                                                            Unless I misunderstand, this still doesn’t encrypt subject lines or recipient addresses.

                                                                                                                            1. 1

                                                                                                                              Like you said. There is an ongoing discussion for fixing it for all PGP at once, including Autocrypt as a side effect, but this is a different concern.

                                                                                                                  2. 1

                                                                                                                    Google gets to read those emails, but doesn’t get to read things like password reset emails or account reminders. Google therefore doesn’t know which email addresses I’ve used to give to different services.

                                                                                                                  3. 4

                                                                                                                    Maybe I’m just out of practice, but last time I set up email (last year, postfix and dovecot) the “$vendor black-holing your emails” problem was the whole problem. There were some hard-to-diagnose problems with DKIM, SPF, and other “it’s not your email, it’s your DNS” issues that I could only resolve by sending emails and seeing if they got delivered, and even with those resolved, emails that got delivered would often end up in spam folders because people black-holed my TLD, which I couldn’t do anything about. As far as I’m concerned, email has been effectively embraced, extended, and extinguished by the big providers.

                                                                                                                    1. 4

                                                                                                                      This was my experience when I set up and ran my own email server: everything worked perfectly end to end, success reports at each step … until it came time to the core requirement of “seeing my email in someone’s inbox”. Spam folder. 100% of the time. Sometimes I could convince gmail to allow me by getting in their contact/favorite list, sometimes not.

                                                                                                                      1. 1

                                                                                                                        I wonder how much this is a domain-reputation problem. I’ve hosted my own email for well over a decade and not encountered this at all, but the domain that I use predates gmail and has been sending non-spam email for all that time. Hopefully Google and friends are already trained that it’s a reputable one. I registered a different domain for my mother more recently (8 or so years ago); she emails a lot of far less technical people than most of my email contacts and has also not reported a problem, but maybe the reputation is shared between the IP and the domain. I do have DKIM set up, but I did that fairly recently.

                                                                                                                        It also probably matters that I’ve received email from gmail, yahoo, hotmail, and so on before I’ve sent any. If a new domain appears and sends an email to a mail server, that’s suspicious. If a new domain appears and replies to emails, that’s less suspicious.

                                                                                                                        1. 2

                                                                                                                          Very possible. In my case I’d migrated a domain from a multi-year G-Suite deployment to a self-hosted solution with a clean IP per DNSBLs, SenderScore, Talos, and a handful of others I’ve forgotten about. Heck, I even tried to set up the DNS pieces a month in advance – PTR/MX, add to SPF, etc. – on the off chance some age penalty was happening.

                                                                                                                          I’m sure it’s doable, because people absolutely do it. But at the end of the day the people I cared about emailing got their email through a spiteful oracle that told me everything worked properly while shredding my message. It just wasn’t worth the battle.

                                                                                                                    2. 3

                                                                                                                      That’s not so hard that I’d let someone else read my emails

                                                                                                                      Other than your ISP and anyone they peer with?

                                                                                                                      1. 2

                                                                                                                        I have no idea how bad this is, to be honest, but s2s communications between/with major email providers are encrypted these days, right? Still, if we can’t trust the channel, we can decide to encrypt our communication too, but that leads to other issues unrelated to self-hosting.

                                                                                                                        Self-hosting stories with titles like “NSA-proof your emails” are probably a little oversold 😏, but I like to think that [not being a US citizen] I gain some privacy by hosting those things in the EU. At least I’m not feeding the giant ad machine, and just that feels nice.

                                                                                                                        1. 7

                                                                                                                          I’m a big ‘self-hosting zealot’ so it pains me to say this…

                                                                                                                          But S2S encryption on mail is opportunistic and unverified.

                                                                                                                          What I mean by that is: even if you configure your MTA to use TLS and prefer it, it really needs to be able to fall back to plaintext, given the sheer volume of providers whose MTAs are not configured to do encryption and so can neither receive nor send encrypted mail.

                                                                                                                          It is also true that no MTA I know of will actually verify the TLS CN field or the CA chain of a remote server.

                                                                                                                          So, the parent is right, it’s trivially easy to MITM email.

                                                                                                                          1. 3

                                                                                                                            So, the parent is right, it’s trivially easy to MITM email.

                                                                                                                            That is true, but opportunistic and unverified encryption does defeat passive global adversaries and passive MITM. These days you have to become an active attacker in order to read mail, which is harder to do on a massive scale without leaving traces than staying passive. I think there is some value in this post-Snowden situation.

                                                                                                                            1. 1

                                                                                                                              What I’ve done in the past is force TLS on all the major providers. That way lots of my email can’t be downgraded, even if the long tail can be. MTA-STS is a thing now though, so hopefully deploying that can help too. (I haven’t actually done that yet so I don’t actually know how hard it is. I know the Postfix author said implementation would be hard though.)

                                                                                                                        2. 1

                                                                                                                          I get maybe 3-4 important emails a year (ignoring work). The rest is marketing garbage, shipping updates, or other fluff. So while I like the idea of self hosting email, I have exactly zero reason to. Until it’s as simple as signing up for gmail, as cheap as $0, and requires zero server administration time to assure world class deliverability, I will continue to use gmail. And that’s perfectly fine.

                                                                                                                          1. 7

                                                                                                                            Yeah, I don’t want self-hosted email to be the hill I die on. The stress/time/energy of maintaining a server can be directed towards more important things, IMO

                                                                                                                        1. 1

                                                                                                                          If they are vulnerable to plain, basic, well-known XSS like this, I wonder what they are using to render the web pages, because every modern language/framework covers this by default.

                                                                                                                          Is this PHP and they are ignoring the security practices? CGI? Nothing else even comes to mind.

                                                                                                                          1. 2

                                                                                                                            Note that this was found and fixed a number of years ago.

                                                                                                                            1. 1

                                                                                                                              Every team has their own bit IIRC.

                                                                                                                              Of course, that means that if any one team messes up, the whole thing is vulnerable.

                                                                                                                            1. -3

                                                                                                                              the biggest thing since bitcoin

                                                                                                                              Meaning, “fails at its primary (only) purpose and only useful for running Ponzi schemes, while accelerating climate change?”

                                                                                                                              I haven’t read the article; the headline turned me off already.

                                                                                                                              1. 4

                                                                                                                                You might want to actually give it a shot. Take it from someone who hates these headlines too.

                                                                                                                                1. 0

                                                                                                                                  I’ve read some (what I think will be) more nuanced posts on GPT-3. I guess it’s interesting, but I’m not really invested enough to have formed an opinion on this one (yet).

                                                                                                                                  1. 8

                                                                                                                                    No seriously, read the article.

                                                                                                                                    1. 11

                                                                                                                                      This article is a great litmus test for people who ignore or flag things based on the headline.

                                                                                                                                2. 3

                                                                                                                                  How does Bitcoin fail at being a peer-to-peer electronic cash system?

                                                                                                                                  1. 1

                                                                                                                                    It’s too slow to replace cash. People do the bulk of the transactions off-chain, which kinda defeats the purpose.

                                                                                                                                    1. 3

                                                                                                                                      Your main criticism is “it’s too slow”? All digital money is slow. It only looks fast because banks take on the risk of digital money transfers and give you the benefit of the doubt. For “digital cash”, I’d say 10 minutes is pretty good.

                                                                                                                                      1. 4

                                                                                                                                        banks take on the risk of digital money transfers and give you the benefit of the doubt

                                                                                                                                        That’s kind of a killer feature, though.

                                                                                                                                        1. 3

                                                                                                                                          If you desperately need that kind of thing, yes. Bitcoin provides benefits that traditional money and banking don’t, hence its existence. There is nothing preventing banking solutions on top of Bitcoin.

                                                                                                                                          1. 2

                                                                                                                                            The primary benefits of Bitcoin are lack of regulation and high volatility due to same, and a secondary benefit of being distributed with no bias towards societal economic utility for the people getting lucky while mining.

                                                                                                                                        2. 1

                                                                                                                                          You just got done comparing it to cash, not debit or credit card transactions. Cash is instantaneous. Credit cards have fraud detection, which Bitcoin lacks.

                                                                                                                                          1. 2

                                                                                                                                            How is cash instantaneous across the ocean?

                                                                                                                                            1. 1

                                                                                                                                              I’m not sure why I need to say this, but transporting money is not the same as exchanging it.

                                                                                                                                          2. 1

                                                                                                                                            Most bank transfers take 2 days anyway.

                                                                                                                                    1. 5

                                                                                                                                      This is an incredibly awful article, I don’t know where to start.

                                                                                                                                      This allows us to instantly see how the nested functions close over the outer functions.

                                                                                                                                      We already have something for this. It’s called indentation. Comprehending his example code was no easier than if it were completely unhighlighted. I’m curious whether any other Lobsters found it easier.

                                                                                                                                      Syntax coloring isn’t useless, it is childish, like training wheels or school paste. It is great for a while, and then you might grow up.

                                                                                                                                      Got it, Real Men ™ program in monochrome. I hear they also only program in Fortran, Lisp, and assembly, unlike all those other, childish languages.

                                                                                                                                      I no longer need help in separating operators from numbers.

                                                                                                                                      No one uses syntax highlighting to differentiate operators from numbers. The most common use cases by far is highlighting of string literals, comments, and keywords. These are all very valuable. Syntax highlighting is a way to quickly find the end of a comment or a long string literal that might otherwise take some conscious effort to find. And keywords are highlighted because it’s often very easy to mistake one with an identifier, due to the syntax of most languages.

                                                                                                                                      But assistance in finding the functions and their contexts and influences is valuable.

                                                                                                                                      Scope does not help you find “influence”. The “context” of code is understandable in the whole. No amount of syntax highlighting will help you understand why something is inside an “if” instead of its parent scope.

                                                                                                                                      1. 2

                                                                                                                                        Bizarre. I find turning off syntax coloring useful when I’m trying to learn new syntax (because it makes it harder to read).

                                                                                                                                        I turn it back on when I want to get work done because it makes me faster.

                                                                                                                                        1. 2

                                                                                                                                          We already have something for this. It’s called indentation.

                                                                                                                                          Look more closely at the green encoder references at the bottom. In a language with closures, it’s not a one-to-one correspondence between indentation and what he has proposed.

                                                                                                                                        1. 1

                                                                                                                                          How would that be enforced exactly?

                                                                                                                                          1. 4

                                                                                                                                            As I understand it the ruling is “Storing customer data in the US is not compatible with GDPR compliance”, so it would be enforced using the existing GDPR enforcement regime.

                                                                                                                                            1. 6

                                                                                                                                              Sure, but where can you store a chat conversation between European and USA citizens ?

                                                                                                                                              1. 4

                                                                                                                                                In Europe

                                                                                                                                                1. 3

                                                                                                                                                  On their own devices. Use end-to-end encryption while you still can (but that’s a good question in general)

                                                                                                                                                2. 2

                                                                                                                                                  The CLOUD Act seems to be removing the distinction between data stored in the USA versus data stored abroad when it comes to US companies. As far as I understand it, the act in a way extends American jurisdiction to every country where the server of an American company is located, so perhaps a more important thing EU states can do in this regard is not entering CLOUD Act agreements with the US at all? I’m only partially trolling.

                                                                                                                                                3. 0

                                                                                                                                                  Why, by giving EU States complete access to their data feeds, of course.

                                                                                                                                                  I wonder if I’m being paranoid by seeing this as a subtle play for warrantless surveillance?

                                                                                                                                                  1. 11

                                                                                                                                                    I think it’s far more likely that it will be enforced with the possibility of outlandish fines or loss of market access if found to be in violation of the law. That would (roughly) align with how other data privacy regulations are established in the EU.

                                                                                                                                                    A gross expansion of warrantless surveillance seems quite unlikely in the EU, as there is a cultural belief that data about one’s self belongs to one’s self which is in contrast to the American culture where data about one’s self is typically viewed as belonging to whoever collected the data.

                                                                                                                                                    1. 20

                                                                                                                                                      In case anyone’s wondering what the deal is here: lots of European countries, especially in Eastern and Central Europe, but also some Western European countries (e.g. Germany) have a bit of a… history with indiscriminate data collection and surveillance. Even those of us who are young enough not to have been under some form of special surveillance are nonetheless familiar with the concept, and had our parents or grandparents subjected to it. (And note that the bar for “young enough” is pretty low; I have a friend who was regularly tailed when he was 12). And whereas you had to do something more or less suspicious to be placed under special surveillance (which included things like having bugs planted in your house and phones being tapped), “general” surveillance was pretty much for everyone. You could generally expect that conversations in your workplace, for example, would be listened to and reported. And since recording and surveillance equipment wasn’t as ubiquitous and cheap as it is today, it was usually reported by informers.

                                                                                                                                                      Granted, totalitarian authorities beyond the Iron Curtain largely employed state agencies, not private companies for their surveillance operations – at least on their own territory – but that doesn’t mean the very few private enterprises, limited in scope as they were, couldn’t be coopted into any operation. And, of course, the Fascist regimes that flourished in Western Europe for a brief period of time totally partnered with private enterprises if they could. IBM is the notorious example but there were plenty of others.

                                                                                                                                                      Consequently, lots of people here are extremely suspicious about these things. Those who haven’t already experienced the consequences of indiscriminate surveillance have the cautionary tales of those who did, at least for another 20-30 years. If someone doesn’t express any real concern, it’s often either because a) they don’t realize the scope of data collection, or b) they’ve long come to terms with the idea of surveillance and are content with the fact that any amount of data collection won’t reveal anything suspicious. My parents fall in the latter category – my dad was in the air force so it’s pretty safe to assume that we were under some form of surveillance pretty much all the time. Probably even after the Iron Curtain fell, too, who knows. But most of us, who were very quickly hushed if they said the wrong thing at a family dinner or whatever because “you can’t say things like that when others are listening”, aren’t fans of this stuff at all.

                                                                                                                                                      Edit: Basically, it’s not just a question of who this data belongs to – it’s a pretty deeply-ingrained belief that collecting large swaths of data is a bad idea. The commercial purpose sort of limits the public response but the only reason why that worked well so far is that, politically, this is a hot potato, so there’s still an overall impression that the primary driving force behind data collection is private enterprise. As soon as there’s some indication that the state might get near that sort of data, tempers start running hot.

                                                                                                                                                      1. 5

                                                                                                                                                        For more details on this, Wikipedia’s entry on Stasi, the security service of East Germany, is a great read. Stasi maintained detailed files (on paper!) on millions of East Germans. Files were kept on shelves, and shelves were >100 kilometers(!) long when East Germany fell.

                                                                                                                                                        It is easy to imagine why Facebook’s data collection reminds people of Stasi files.

                                                                                                                                                        1. 1

                                                                                                                                                          There were some amazing stories floating around in 1989 – like, the Stasi were sneaking across the border into the West to buy shredders, because they couldn’t shred the documents fast enough; and the army of older ladies who have been painstakingly reassembling the bags and bags and bags of shredded documents.

                                                                                                                                                        2. 3

                                                                                                                                                          To be fair, with powers shifting, companies consolidating, individuals having the same money (and thereby the same power) as whole governments, companies and their partners no longer working in individual sectors, governments outsourcing more and more of their work (infrastructure, both IT and non-IT, security, etc.), and corporations creating pretty much whole towns for their employees and often their families, companies overall are becoming more similar to governments, but usually with fewer guarantees from things like constitutions.

                                                                                                                                                          1. 2

                                                                                                                                                            Absolutely. There’s been talk of a “minimal state” for decades now, but no talk of a “minimal company”. Between their lack of accountability, the complete lack of transparency, and the steady increase of available funds, I think the leniency we’re granting private enterprises is short-sighted. But that’s a whole other story :).

                                                                                                                                                      2. 5

                                                                                                                                                        The US actually claims the right to warrantless surveillance of non-US citizens through FISA. Additionally, through the CLOUD Act, it claims the right to request personal information from US companies, even if that information is not stored on US soil.

                                                                                                                                                        Looking at the political side of things, many EU lawmakers are perfectly fine with engaging in a little protectionism for European IT companies, and if EU privacy law makes life difficult for FAANG, that’s perfect. On the other hand, the US is trying to use the world dominance of its IT companies as a way to extend the reach of its justice and surveillance system.

                                                                                                                                                        Then there are FAANG-paid lobbyists, who keep pushing for treaties that claim the US extends protections to EU citizens’ data, even though it clearly doesn’t. Those treaties don’t last long once they’re taken to court. This is why some US tech companies, like Salesforce, are now lobbying for a data protection regime in the US - that would be one way to reconcile the difference.

                                                                                                                                                        This is a trade war, and the victims are smaller US companies that shy away from doing business in the EU.

                                                                                                                                                    1. 2

                                                                                                                                                      The author provided GPT-3 with training data where all the questions have definite answers, and the answer is always “X is Y”.

                                                                                                                                                      Unless I missed something, there was no training data where the answer is, “That makes no sense.”, or “I don’t know.” or “Actually, the U.S. did not exist as such in 1700, so there was no president.”

                                                                                                                                                      Is it any wonder that GPT-3 followed suit and answered all the questions the same way, with the best approximation?

                                                                                                                                                      I don’t think I would expect any more from a human either, if the human’s knowledge base were somehow a clean slate (a young child, say).

                                                                                                                                                      If you were training a child in this manner, you’d probably get similar results.

                                                                                                                                                      Also, there was no opportunity for re-training. When you’re teaching a child, they’re bound to get some answers wrong, and then you would correct them, and they would also learn from that.

                                                                                                                                                      No such opportunity was provided here, though I don’t know if that is technically possible with GPT-3.

                                                                                                                                                      1. 3

                                                                                                                                                        The author provided GPT-3 with training data where all the questions have definite answers, and the answer is always “X is Y”.

                                                                                                                                                        The training data for GPT algorithms is just massive amounts of English language text from the Internet, isn’t it? I’m not sure how that’s consistent with “questions that have definite answers” - most of the training data text wouldn’t be in the form of any kind of question because most English language sentences are not questions.

                                                                                                                                                        1. 2

                                                                                                                                                          Training data is the wrong term - this is better termed “prompt data”, which is used to “set the stage” for GPT predictions.
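
                                                                                                                                                          For example, the prompt could look something like the sketch below (not the author’s actual prompt), including a couple of examples where the right answer is to push back; the API call in the trailing comment is only indicative of the GPT-3 completions interface and its parameters are assumptions.

```python
# A sketch of "prompt data" for a question-answering session. This is NOT the
# author's actual prompt; it only illustrates how examples of refusing
# nonsense questions could be included to "set the stage".
FEW_SHOT_PROMPT = """\
Q: What is the capital of France?
A: Paris.

Q: Who was president of the United States in 1700?
A: That question makes no sense; the United States did not exist in 1700.

Q: How many rainbows does it take to jump from Hawaii to seventeen?
A: That question makes no sense.

Q: What is the boiling point of water at sea level?
A: 100 degrees Celsius.

Q: {question}
A:"""

def build_prompt(question: str) -> str:
    """Insert the user's question into the few-shot template."""
    return FEW_SHOT_PROMPT.format(question=question)

print(build_prompt("How many eyes does my foot have?"))
# The resulting string would then be sent to the GPT-3 completions endpoint,
# roughly like this (hypothetical parameters; needs the openai package and a key):
#   openai.Completion.create(engine="davinci",
#                            prompt=build_prompt("..."),
#                            max_tokens=32, temperature=0.0)
```

                                                                                                                                                          Whether GPT-3 would reliably generalize from a couple of such examples is another question, but at least it would have been given the option.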

                                                                                                                                                        2. 2

                                                                                                                                                          I’m unsure if GPT-3 can respond like that, although that would be an interesting thing to add to this. Another option would be to create some sort of autoencoder framework that lets the network determine when it is responding to something it’s never really seen before. Uber has a very interesting write-up about doing that.
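
                                                                                                                                                          Roughly, the idea is: train an autoencoder on the data you consider familiar, then flag anything it reconstructs poorly as something the model has never really seen. Here is a toy sketch of that general reconstruction-error approach (not Uber’s actual setup); the data, architecture, and threshold are all made up.

```python
# A toy sketch of the reconstruction-error idea: train an autoencoder on
# "familiar" data and flag inputs it reconstructs poorly as out-of-distribution.
# Illustrative only; the data, the architecture, and the threshold are made up.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
familiar = rng.normal(loc=0.0, scale=1.0, size=(2000, 16))  # in-distribution
novel = rng.normal(loc=5.0, scale=1.0, size=(5, 16))        # out-of-distribution

# An MLP trained to reproduce its own input through a narrow hidden layer
# acts as a simple autoencoder.
autoencoder = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
autoencoder.fit(familiar, familiar)

def reconstruction_error(x: np.ndarray) -> np.ndarray:
    """Mean squared error between inputs and their reconstructions."""
    return np.mean((autoencoder.predict(x) - x) ** 2, axis=1)

# Arbitrary cut-off: the 99th percentile of error on the familiar data.
threshold = np.percentile(reconstruction_error(familiar), 99)
print("novel inputs flagged:", reconstruction_error(novel) > threshold)
```

                                                                                                                                                          Picking a sensible threshold (and a representation in which reconstruction error is meaningful) is the hard part in practice.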

                                                                                                                                                        1. 24

                                                                                                                                                          That headline is pretty confusing. It seems more likely twitter itself was compromised, than tons of individual users (billionaires, ex-leaders, etc)?

                                                                                                                                                          1. 18

                                                                                                                                                            You’re right. This is a case of The Verge reporting what they were seeing at the time, but the scope has grown greatly since the initial posts. There have since been similar posts on several dozen prominent accounts, and Gemini has replied that its account has 2FA enabled.

                                                                                                                                                            Given the scope, this likely isn’t accounts being hacked. I suspect that either the platform or an elevated-rights Twitter content admin has been compromised.

                                                                                                                                                            1. 12

                                                                                                                                                              Twitter released a new API today (or was about to release it? Not entirely clear to me what the exact timeline is here); my money is on that being related.

                                                                                                                                                              A ~$110k scam is a comparatively mild result considering the potential for such an attack, assuming there isn’t some 4D chess game going on as some are suggesting on HN (personally, I doubt there is). I don’t think it would be an exaggeration to say that in the hands of the wrong people, this could have the potential to tip election results or even get people killed (e.g. by encouraging the “Boogaloo” people and/or exploiting the unrest relating to racial tensions in the US from some strategic accounts or whatnot).

                                                                                                                                                              As an aside, I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said.

                                                                                                                                                              1. 13

                                                                                                                                                                or even get people killed

                                                                                                                                                                If the Donald Trump account had tweeted that an attack on China was imminent there could’ve been nuclear war.

                                                                                                                                                                Sounds far-fetched, but this very nearly happened with Russia during the cold war when Reagan joked “My fellow Americans, I’m pleased to tell you today that I’ve signed legislation that will outlaw Russia forever. We begin bombing in five minutes.” into a microphone he didn’t realize was live.

                                                                                                                                                                1. 10

                                                                                                                                                                  Wikipedia article about the incident: https://en.wikipedia.org/wiki/We_begin_bombing_in_five_minutes

                                                                                                                                                                  I don’t think things would have escalated to a nuclear war that quickly; there are some tensions between the US and China right now, but they don’t run that high, and a nuclear war is very much not in China’s (or anyone’s) interest. I wouldn’t care to run an experiment on this though 😬

                                                                                                                                                                  Even in the Reagan incident things didn’t seem to have escalated quite that badly (at least, in my reading of that Wikipedia article).

                                                                                                                                                                  1. 3

                                                                                                                                                                    Haha. Great tidbit of history here. Reminded me of this 80’s gem.

                                                                                                                                                                    1. 2

                                                                                                                                                                      You’re right - it would probably have gone nowhere.

                                                                                                                                                                  2. 6

                                                                                                                                                                    I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said

                                                                                                                                                                    It’d be nice to think so.

                                                                                                                                                                    It would be somewhat humorous if an attack on the internet’s drive-by insult site led to such a thing, rather than the last two decades of phishing attacks targeting financial institutions and the like.

                                                                                                                                                                    1. 3

                                                                                                                                                                      I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said.

                                                                                                                                                                      A built-in system in the browser could create a 2FA system while being transparent to the users.

                                                                                                                                                                      1. 5

                                                                                                                                                                        2FA wouldn’t help here - the tweets were posted via user-impersonation functionality, not direct account attacks.

                                                                                                                                                                        1. 0

                                                                                                                                                                          If you get access to Twitter, or to the Twitter account, you still won’t have access to the person’s private key, so your tweet is not signed.

                                                                                                                                                                          1. 9

                                                                                                                                                                            Right, which is the basic concept of signed messages… and unrelated to 2 Factor Authentication.

                                                                                                                                                                            1. 2

                                                                                                                                                                              2FA, as I used it, means authenticating the message via two factors: the first being access to the Twitter account, and the second being a cryptographic signature on the message.
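
                                                                                                                                                                              For illustration, here is a minimal sketch of the signing half, using Ed25519 from the Python cryptography package. The Twitter integration is purely hypothetical; this only shows the primitive that would stay outside Twitter’s control.

```python
# Minimal sketch (not an existing Twitter feature): the author signs a message
# with a key that never leaves their own device, and readers verify it against
# a published public key. Requires the `cryptography` package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Generated once, on the author's own machine. Twitter never sees the private
# key, so neither a hijacked account nor an admin tool can forge a signature.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

tweet = b"I am not giving away any bitcoin today."
signature = private_key.sign(tweet)

# Anyone holding the published public key can check that the text is authentic.
try:
    public_key.verify(signature, tweet)
    print("signature valid: posted by the key holder")
except InvalidSignature:
    print("signature invalid: not posted by the key holder")
```

                                                                                                                                                                              The hard parts, of course, are publishing the public keys and getting readers (or clients) to actually verify signatures, which is exactly the “mainstreaming” that would need to happen.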

                                                                                                                                                                              1. 3

                                                                                                                                                                                Twitter won’t even implement the editing of published tweets. Assuming they’d add something that implicitly calls into question their own competence in stewarding people’s tweets is a big ask.

                                                                                                                                                                                1. 2

                                                                                                                                                                                  I’m not asking.

                                                                                                                                                                      2. 2

                                                                                                                                                                        A ~$110k scam

                                                                                                                                                                        The attacker could just be sending coins to himself. I really doubt that anyone falls for a scam where someone you don’t know says “give me some cash and I’ll give you double back”.

                                                                                                                                                                        1. 15

                                                                                                                                                                          I admire the confidence you have in your fellow human beings, but I am somewhat surprised the scam made so little money.

                                                                                                                                                                          I mean, there’s talk about Twitter insiders being paid for this so I would not be surprised if the scammers actually lost money on this.

                                                                                                                                                                          1. 10

                                                                                                                                                                            Unfortunately people do. I’m pretty sure I must have mentioned this before a few months ago, but a few years ago a scammer managed to convince a notary to transfer almost €900k from his escrow account by impersonating the Dutch prime minister with a @gmail.com address and some outlandish story about secret agents, code-breaking savants, and national security (there’s no good write-up of the entire story in English AFAIK, I’ve been meaning to do one for ages).

                                                                                                                                                                            Why do you think people still try to send “I am a prince in Nigeria” scam emails? If you check your spam folder you’ll see that’s literally what they’re still sending (also many other backstories, but I got 2 literal Nigerian ones: one from yesterday and one from the day before that). People fall for it, even though the “Nigerian Prince” is almost synonymous with “scam”.

                                                                                                                                                                            Also, the 30 minute/1 hour time pressure is a good trick to make sure people don’t think too carefully and have to make a snap judgement.

                                                                                                                                                                            As a side-note, Elon Musk doing this is almost believable. My friend sent me just an image overnight, and when I woke up to it this morning I was genuinely wondering whether it was true or not. Jeff Bezos? Well….

                                                                                                                                                                            1. 12

                                                                                                                                                                              People fall for it, even though the “Nigerian Prince” is almost synonymous with “scam”.

                                                                                                                                                                              I’ve posted this research before but it’s too good to not post again.

                                                                                                                                                                              Advance-fee scams are high touch operations. You typically talk with your victims over phone and email to build up trust as your monetary demands escalate. So anyone who realizes it’s a scam before they send money is a financial loss for the scammer. But the initial email is free.

                                                                                                                                                                              So instead of more logical claims, like “I’m an inside trader who has a small sum of money to launder” you go with a stupidly bold claim that anyone with a tiny bit of common sense, experience, or even the ability to google would reject: foreign prince, huge sums of money, laughable claims. Because you are selecting for the most gullible people with the least amount of work.

                                                                                                                                                                        2. 5

                                                                                                                                                                          My understanding is that Twitter has a tool to tweet as any user, and that tool was compromised.

                                                                                                                                                                          Why this tool exists, I have no idea. I can’t think of any circumstance where an employee should have access to such a tool.

                                                                                                                                                                          Twitter has been very tight-lipped about this incident and that’s not a good look for them. (I could go on for paragraphs about all of the fscked up things they’ve done)

                                                                                                                                                                          1. 5

                                                                                                                                                                            or an elevated-rights Twitter content admin

                                                                                                                                                                            I don’t think content admins should be able to make posts on other people’s accounts. They should only be able to delete or hide stuff. There’s no reason they should be able to post for others, and the potential for abuse is far too high for no gain.

                                                                                                                                                                            1. 6

                                                                                                                                                                              Apparently some privileges allow internal Twitter employees to remove MFA and reset passwords. Not sure how it played out, but I assume MFA had to be disabled in some way.

                                                                                                                                                                            1. 5

                                                                                                                                                                              That’s a good article! Vice has updated that headline since you posted to report that the listed accounts got hijacked, which is more accurate. Hacking an individual implies that the breach was in something under their control: phone, email, etc. This is a Twitter operations failure which resulted in existing accounts being given to another party.