Threads for FiloSottile

    1. 8

      I recently looked into it for the same reasons, and if you’re looking for a really no-frills registrar with a world-class security team (which was my priority) it looks like AWS Route53 Domains is really the only choice. The other option would be Google Domains, but I don’t trust their account closure policies.

      The list of TLDs they support has grown significantly, and they have everything I own but Google’s .dev.

      The UI is not particularly friendly, being the AWS console, but I use dnscontrol anyway.

      My bias here is that I generally trust a large business with serious customers that have confidentiality demands like AWS more than small businesses that market themselves as “privacy first” (with few exceptions).

      1. 1

        Interestingly, it looks like AWS isn’t actually a registrar itself, and instead uses Gandi (amongst others) for domain registration: https://aws.amazon.com/route53/domain-registration-agreement/ §2.1.

    2. 20

      There is a very large robot in the room (I’d say elephant but I really like elephants), but it’s been there for so long that people on both sides of this robot just think of it as furniture, and don’t realize how much it’s colouring everyone’s reaction, and that robot’s name is Google.

      I sympathise with the Go team’s position here. They have a product that’s extraordinarily widely deployed. It’s powering many enterprise codebases, where the deprecation of any feature is greeted with boos and jeers. And they need to overcome the sampling bias of surveys where of course everyone who’s still on High Sierra or Windows 7 or Novell Netware or whatever will show up to say they’re still using it, which is also understandable.

      But their mothership has poisoned that well long ago and of course people go bananas. All the assurances about how reports aren’t associated with any identifying information are a lot more lukewarm when they’re coming from one of the companies that made shadow profiles a thing.

      Plus, realistically, this isn’t any more “transparent” than most telemetry efforts made in good faith. A further step, which would warrant the “transparent” label in 2023, rather than 2013, would be, say:

      • Publishing the code that Google runs on the report collection server, so that we know all it does is, indeed, generate the reports being published – because Google has a history of swearing they’re only collecting non-identifying data, then using it to build identifiable profiles.
      • Publishing details about how the reports are used internally, so that we know they’re only used by the Go team for decisions about the golang toolchain – because this is 2023 and there are probably lots of people out there who would be fine with this data going to the golang team to improve their toolchain, but not with having it go to a dev experience team that’s building a Copilot clone.

      Honestly I’m pretty frustrated that a valuable source of development data isn’t so easily accessible anymore – and I doubt the Go team is in cahoots with evil schemers at Alphabet. I’m also probably not going to opt out: the additional Go-related telemetry adds little, since Google likely already knows what porn I watch and what early ’90s demos I liked, so they can identify me well enough. But this is a hole that Google has dug, for everyone, not just for themselves.

      1. 14

        Thank you for the level-headed and nuanced comment.

        The posts are pretty long, so some stuff is bound to get lost, but the points you suggest are already part of the design. It goes even a little further, in fact: it’s not just the reports that are published, but all raw data that’s uploaded. In that sense, they can’t promise the data is going only to the Go team, but that’s because the data is public and anyone can download it.

        The full raw data as collected is made public, so that project maintainers have no proprietary advantage or insights in their role as the direct data collector.

        The server that collects the data will be open source, but the only thing that shows is that IP addresses are not collected.

        The server would necessarily observe the source IP address in the TCP session uploading the report, but the server would not record that address with the data, a fact that can be confirmed by inspecting the reporting server source code (the server would be open source like the rest of Go) or by reference to a stated privacy policy like the one for the Go module mirror, depending on whether you lean more toward trusting software engineers or lawyers.

        1. 3

          Yeah, that second one totally got lost, I should not be allowed near computers before I’ve had my coffee :-D. Thanks for pointing that out way more nicely than my drowsy post would’ve warranted!

        2. 1

          I quite like open telemetry data in open source. One other instance I can think of is Debian’s opt-in popularity contest (popcon), which records the number of users of each package: https://popcon.debian.org/

          If you can be confident the data is safe to publish, and you actually do publish the data, that would personally make me feel much better about telemetry.

      2. 2

        To warrant the “transparent” label in 2023 it’d need … publishing … reports … swearing … assurances … pinky swears …

        Problem is, do we even believe any of those? Should we? How about actually doing reproducible-build-style confirmation that we can actually check? Otherwise I’d say, especially in 2023 (i.e. in light of all the awful stuff that’s still getting relentlessly discovered 10 years after 2013 (/Snowden)), I’m not going to believe pretty much any of it, from any of them.

      3. -1

        Google wants to keep track of people developing, for example, encryption software, so they can report those people to governments which will punish them. That’s why they’re doing this.

        1. 2

          That’s plausible, but lots of things are plausible. Do you have any concrete reason to suspect this, of all plausible reasons, is the real one?

          1. 1

            That’s plausible, but lots of things are plausible. Do you have any concrete reason to suspect this, of all plausible reasons, is the real one?

            More likely, the reason this is being done is that Google has a love affair with telemetry, analytics, and data. Their corporate motto should be “All your data are belong to us.” The most important tool in their toolbox is surveillance.

            Google makes its money by optimizing for the most ads delivered to the most people, and the tool they use to accomplish that is surveillance.

            When you work in an environment that sees “gather more data” as the solution to everything, it’s going to rub off on you.

            I’m doubtful that this is overt malice, and I think it far more likely that it’s just due to a bug in Google / Silicon Valley culture.

      4. -4

        The reason Google can’t do opt-in telemetry is that Google wants the Go toolchain to report on politically sensitive software being developed with it and tell that information to governments which will punish the developers. This is, at this point, a fait accompli and our protests won’t change it, so the only moral thing to do is to poison the telemetry as much as possible.

        1. 6

          This is a loaded claim; what kind of politically sensitive software would be impacted? What metrics would it use to determine if something is “politically sensitive”? Is there any evidence for this?

    3. 36

      [Speaking with no hat on, and note I am not at Google anymore]

      Any origin can ask to be excluded from automatic refreshes. SourceHut is aware of this, but has not elected to do so. That would stop all automated traffic from the mirror, and SourceHut could send a simple HTTP GET to proxy.golang.org for new tags if it wished not to wait for users to request new versions. That would have caused no disruption.
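
      For what it’s worth, here is one plausible shape for that GET, per the documented GOPROXY protocol (the module path and version are placeholders, not a real module):

        package main

        import (
            "fmt"
            "io"
            "net/http"
        )

        func main() {
            // Ask the mirror about a freshly pushed tag so it fetches it on
            // demand. The endpoint shape comes from the GOPROXY protocol;
            // example.com/mymodule and v1.2.3 are stand-ins.
            resp, err := http.Get("https://proxy.golang.org/example.com/mymodule/@v/v1.2.3.info")
            if err != nil {
                panic(err)
            }
            defer resp.Body.Close()
            body, _ := io.ReadAll(resp.Body)
            fmt.Println(resp.Status, string(body))
        }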

      This is definitely a manual mechanism, but I understand why the team has not built an automated process for something that was requested a total of two times, AFAIK. Even if this process was automated, relying on the general robots.txt feels inappropriate to me, given this is not web traffic, so it would still require an explicit change to signal a preference to the Go Modules Mirror, taking about as much effort from origin operators as opening an issue or sending an email.

      Everyone can make their own assessment of what is a reasonable default and what counts as a DoS (and they are welcome to opt out of any traffic), but note that 4 GB per day is 0.3704 Mbps (4 GB × 8 bits per byte ÷ 86,400 seconds per day).

      I don’t have access to the internal mirror architecture and I don’t remember it well, nor would I comment on it if I did, but I’ll mention that claims like a single repository being fetched “over 100 times per hour” sound unlikely and incompatible with other public claims on the public issue tracker, unless those repositories host multiple modules. Likewise, it can be easily experimentally verified that fetchers don’t in fact operate without coordination.

      1. 83

        Sounds like it’s 4 GB per day per module, and presumably there are a lot of modules.

        The more I think about it, the more outrageous it seems. Google’s a giant company with piles of cash, and they’re cutting corners and pushing work (and costs) off to unrelated small and tiny projects?

        They really expect people with no knowledge of Go whatsoever (Git hosting providers) will magically know to visit an obscure GitHub issue and request to be excluded from this potential DoS?

        Why is the process so secretive and obscure? Why not make the proxy opt-in for both users and origins? As a user, I don’t want my requests (no, not even my Go module selection) going to an adware/spyware company.

        1. 3

          It’s a question of reliability. Go apps import URLs. Imagine a large Go app that depends on 32 websites. Even if those websites are 99.9% reliable (and very few are) then (1-.999**32)*100 means there’s a 3.15% chance your build will fail. I think companies like creating these kinds of problems, since the only solution ends up yielding a treasure trove of business intelligence. The CIA loves funding things like package managers. However it becomes harder to come across looking like the good guys when you get lazy writing the backend and shaft service operators, who not only have to pay enormous egress bandwidth fees, but are also denied any visibility into who and how many people their resources are actually supporting.
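
          (A quick way to check that arithmetic, if you want to verify it yourself:)

            package main

            import (
                "fmt"
                "math"
            )

            func main() {
                // Chance that at least one of 32 independent hosts is down,
                // if each is up with probability 0.999.
                pFail := 1 - math.Pow(0.999, 32)
                fmt.Printf("%.2f%%\n", pFail*100) // prints 3.15%
            }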

          1. 3

            Go apps import URLs. Imagine a large Go app that depends on 32 websites. Even if those websites are 99.9% reliable (and very few are) then (1-.999**32)*100 means there’s a 3.15% chance your build will fail.

            I do hope they do the sane thing and only try to download packages when you mash the update button, instead of every time you do yet another debug build? Having updates fail from time to time is annoying for sure, but it’s not a “sorry boss, can’t test anything today, build is failing because left pad is down” kind of hard blocker.

            1. 4

              Go has a local cache and only polls for lib changes when explicitly told to do so.

              1. 2

                Thanks. I was worried there for a minute.

            2. 2

              If you have CI that builds on every commit and you don’t take extra steps to set up cache for it, you will download the packages on every commit
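
              One way to set that up (a sketch; the path will be whatever your environment reports) is to persist Go’s module cache directory between CI runs:

                $ go env GOMODCACHE     # cache this directory in your CI config
                /home/ci/go/pkg/mod     # example output, varies by environment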

              1. 1

                Ah… well, I remember our CI at work failing every so often because of random network problems. Often restarting it was enough. But damn was this annoying. All external dependencies should be cached and locked in some way, so the CI provides a stable, reliable environment.

                For instance, CI shouldn’t have to build a brand-new Docker image or equivalent each time it does its thing. It should instead depend on a standard image with the dependencies we need that everyone uses. Only when we update those external dependencies should the image be refreshed.

          2. 1

            I have a lot of sympathy with Google here. I am using vcpkg for some personal projects and hit a problem last year where the canonical source of the libxml2 sources (which I needed as an indirect dependency of something else) was down. Unlike go and the FreeBSD ports infrastructure, vcpkg does not maintain a cache of the distribution files and so it was impossible to build my local project until I found a random mirror of the libxml2 tarball that had the right SHA hash and manually downloaded it.

            That said, 4 GiB/day/repo sounds excessive. I’d expect that the update should need to sync only when it sees a new revision and even if it’s doing a full git clone rather than an update, that’s a surprising amount of traffic.

      2. 88

        Deciding that an automated mirror talking HTTP isn’t “web traffic” and thus needn’t respect robots.txt is definitely a take. And suggesting that “any origin” write a custom integration to work around Go’s abuse of the git protocol? Cool, put the work on others.

        And according to the blog post, the proxy didn’t provide a user agent until prompted by sr.ht. That kind of poor behaviour makes it hard to open issues or send emails.

        Moreover, I don’t think the blog post claimed 4 GB/day is a DoS. It said a single module could produce that much traffic. It said the total traffic was 70% of their load.

        No empathy for organisations that aren’t operating at Google scale?

        1. 10

          Deciding that an automated mirror talking HTTP isn’t “web traffic” and thus needn’t respect robots.txt is definitely a take.

          No, I am saying that looking at the Crawl-delay clause of a robots.txt which is probably 1 or 10 (e.g. https://lobste.rs/robots.txt) is malicious compliance at best, since one git clone per second is probably not what the origin meant. Please don’t flame based on the least charitable interpretation, there’s already a lot of that around this issue.
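
          For reference, the directive in question looks like this (the general shape, not any particular site’s file):

            User-agent: *
            Crawl-delay: 1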

          1. 34

            For what it’s worth, 1 clone per second would still probably be less than what Google is currently sending them. Their metrics are open, and as you can see over the last day they have served about 2.2 clones per second; if we assume that 70% of those are from Google, it comes out to roughly 1.5 clones per second.

      3. 27

        I think it’s pretty obvious to any bystander that SourceHut has requested a stop to the automatic refreshes. The phrase “patrician and sadistic” comes to mind when I think about this situation.

        1. 11

          They explicitly have stated in other locations that they have not requested the opt out for automatic refreshes for various reasons.

          1. 28

            Sure, filling out Google’s form legitimizes Google’s actions up to that point. Nonetheless, there was a clear request to stop the excess traffic, and we should not ignore that request simply because it did not fit Google’s bureaucracy.

            1. 8

              I was specifically responding to

              I think it’s pretty obvious to any bystander that SourceHut has requested a stop to the automatic refreshes.

              No, they did not. They have explicitly rejected the option to have them stopped for various reasons, perhaps even the ones you hypothesized.

              1. 18

                I appreciate your position, but I think it’s something of a beware-of-the-leopard situation; it’s quite easy to stand atop a bureaucracy and disclaim responsibility for its actions. We shouldn’t stoop to victim-blaming, even when the victims are not personable.

                1. 6

                  I haven’t taken a position. I’m stating that your statement was factually incorrect. You said that it is “pretty obvious” that they requested something when the exact opposite is true, and I wanted to correct the record.

                  1. 8

                    You are arguing that they did not fill out the Google-provided form. The person you’re arguing with didn’t say they did, they said they requested that Google stop doing the thing.

                    1. 5

                      They did not request that Google stops doing the thing. There is no form to fill out. Literally stating “please stop the automatic refreshes” would be enough. They explicitly want Google to continue doing the thing but at a reasonable rate.

                      1. 19

                        They explicitly want Google to continue doing the thing but at a reasonable rate.

                        Which in my opinion is the only reasonable course of action. Right now Google is imposing a lazy, harmful, and monopolistic dilemma: either suck up the unnecessary traffic and pay for this wasted bandwidth & server power (the default), or seriously hurt your ability to provide Go packages. That’s a false dichotomy, Google can do better. They just won’t.

                        Don’t get me wrong, I’m not blaming any single person in the Go team here, I have no idea what environment they are living in, and what orders they might be receiving. The result nevertheless makes the world a worse place.

                        It’s also a classic: big internet companies give us the same crap about email and spam filtering, where their technical choices just so happen to seriously hamper the effectiveness of small or personal email providers. They have lots of plausible reasons for these choices, but the result is the same: if you want your email to get through, you often need their services. How convenient.

                        1. 6

                          That’s a false dichotomy, Google can do better. They just won’t.

                          You may disagree with the prioritization, but they have made progress and will continue to do so. Saying “they just won’t” is hyperbolic and false.

                          The result nevertheless makes the world a worse place.

                          You have stated that you don’t know what “environment they are living in, and what orders they might be receiving”. What if instead of working on this issue, they worked on something else that, if left unhandled, would have made the world an even worse place? This statement is indicative of your characteristic bad faith when discussing anything about Go.

                          I don’t think everything that the Go developers have done is correct, or that every language decision Go has made is correct, but it’s important to root those judgements in facts and reality instead of uncharitable interpretations and fiction.

                          Because you seem inclined to argue in bad faith about Go both here and in past discussions we’ve had [1], I think any further discussion on this is going to fall on deaf ears, so I won’t be engaging any further on this topic with you.

                          [1] here you realize you don’t have very good knowledge of how Go works (commendable!) and later here you continue to claim knowledge of mistakes they made without having full knowledge of what they even did.

                          1. 6

                            My, the mere fact that you remember my only significant Go thread on Lobsters is surprisingly informative. But… aren’t you going a little off topic here? This is a software distribution and networking issue, nothing to do with the language itself.

                            You have stated that you don’t know what “environment they are living in, and what orders they might be receiving”. What if instead of working on this issue, they worked on something else that, if left unhandled, would have made the world an even worse place?

                            That’s a nice fully general counterargument you have there: no matter what fix or feature I request, you can always say maybe something else should take precedence. Bonus points for quoting me out of context.

                            Now in that quote, “they” is referring to Google’s employees, not Google itself. I’ve seen enough good people working in toxic environments to make the difference between a faceless company and the actual people working there. This was just me trying to criticise Google itself, not any specific person or team.

                            As for your actual argument, Google isn’t exactly resourceless. They can spend money and hire people, so opportunity costs are hardly a thing for them. Moreover, had they cared about bandwidth from the start, they could have planned a decent architecture up front and spent even less time on this.

                            But all this is weaselling around the core issue: Google is wasting other people’s bandwidth, and they ought to have stopped two years ago. When you’re not a superhuman AI with all self-serving biases programmed out of you, you don’t get to play the “greater good” card without a damn good specific reason. We humans need ethics.

                      2. 10

                        If you were calling someone several times a day and they said “Hey. Don’t call me several times a day. Call me less often. Maybe weekly,” but you persisted in calling them multiple times a day, it would not be a reasonable defense to say “They don’t want me to never call them, they only want me to not call them as much as I am calling them, which I will continue to do.”

                        But like, also you should know better than to bother people like that. They shouldn’t need to ask. It is not reasonable to assume a lack of confrontation is acquiescence to poor behavior. Quit bothering people.

                        1. 3

                          In your hypothetical the caller then said “Sorry for calling you so often. Would you like me to stop calling you entirely while I figure out how to call you less?” and the response was “No, I want you to continue to call me while you figure out how to call me less.”

                          That is materially different than a request to stop all calls.

                          No one is arguing that the request to be called less is unreasonable. I am pointing out that the option to have no calls at all was provided, and that they explicitly rejected it for whatever reasons they decided on. This is not a value judgement on either party, but a clarification of facts.

                          1. 7

                            Don’t ignore the fact that those calls (to continue the analogy) actually contained important information. The way I understand it, being left out of the refresh service significantly hurts the ability of the provider to serve Go packages. It’s really a choice between several calls a day and a job opportunity once a fortnight or so; or no call at all and missing out.

                            Tough choice.

          2. 21

            Yes, instead they have requested that the automatic refreshes be made better.

            Which is a very reasonable request as right now they’re just bad.

      4. 10

        I appreciate where you’re coming from with this. Having VERY recently separated from a MegaCorp, I can say this is exactly the logic a bizdev person deciding what work gets funded and what doesn’t would use in this situation.

        But again I ask - is this level of dependence on a corporate owner healthy for a programming language with such massively wide adoption?

        It would be interesting to do a detailed compare and contrast between the two.

        1. 4

          is this level of dependence on a corporate owner healthy for a programming language with such massively wide adoption?

          Java used to have such a dependence. Indeed, it wasn’t good.

    4. 12

      This pair of articles presents mildly interesting observations/tools in a very overblown and irresponsible way.

      WebAuthN is meaningfully phishing-safe. You can’t replay a login for attacker.com to example.com. What part 1 (this article) is demonstrating is that if you compromised the victim’s machine you can arbitrarily use a token for as long as it’s connected (just like you can use the victim’s logged in session by stealing cookies). What part 2 is demonstrating is that if you choose server-side to allow subdomains (it’s an option!) and then an attacker takes control of https://subdomain.example.com, they can replay a subdomain login against example.com.

      Needless to say, your average phisher doesn’t have control over the victim’s machine or one of the target’s subdomains. Part 2 is still interesting because you might encounter a combination of server-side misconfiguration and user controlled subdomains (like the deprecated user.github.com ones), but far from an indictment of WebAuthN.

      Arguing that calling WebAuthN phishing-safe is a “scam” or that 100% phishable TOTP or MFA over Signal (??) is better is detached from reality and harmful. I wish InfoSec didn’t reward these antics, but it does.

      1. 3

        You pretty much summarized why I didn’t submit it earlier this morning. The issue is apparently also just in one specific server side implementation?

      2. 1

        Actually, it’s kinda sorta remotely possible to phish webauthn for arbitrary domains on Chromium, but you can only attack Google employees and you need to compromise a Google domain first. (Relevant code)

    5. 8

      Designing, building, managing, and growing an Open Source project demonstrates all the skills required of a Senior Software Engineer. That means maintainers can access $150k–300k++/year compensation packages.

      Just between us, that’s 3x-6x going rate east of Germany. Even more east of Poland.

      A lot of open source apparently happens because in the US there is enough capital for people to take very long sabbaticals and/or for companies to take in a couple more people than strictly necessary. I am very happy for it, because it means people around me here in Czechia can produce high quality solutions for local needs even on a measly $40k.

      On the other hand, most of the local talent gets vacuumed up by transnationals who pay $50k or even $60k and are then used to very inefficiently deliver services to the US market, slaving away on legacy systems with zero passion, just to pay for their very expensive mortgages.

      If you really want to make sure that the open source you so desperately depend on survives, you could potentially sponsor 3-6x the number of well-qualified maintainers in central/eastern Europe for the same price.

      1. 7

        people around me here in Czechia can produce high quality solutions for local needs even on a measly $40k.

        Yes, life in Czechia is also cheaper than in most US cities, and there are well educated engineers everywhere. :)

        If you really want to make sure that the open source you so desperately depend on survives, you could potentially sponsor 3-6x the number of well-qualified maintainers in central/eastern Europe for the same price.

        The idea isn’t to take over projects with cheap labor, but to fund the people who created something in the first place.

        1. 3

          The idea isn’t to take over projects with cheap labor

          Which feeds back to the issue. US / western EU engineers will produce some more open source much more expensively while engineers elsewhere will waste their talents cheaply taking care of legacy DHL or Accenture clients’ software stacks.

          Also:

          who created something in the first place

          There is no need to fund anyone after they did something, is there? For that you could have some sort of award system. Allocate some funds for projects that especially helped you in a given year and then let your developers vote on distribution.

          1. 4

            Software is never done, it needs regular maintenance, hence the funding suggestion.

          2. 2

            Which feeds back to the issue. US / western EU engineers will produce some more open source much more expensively while engineers elsewhere will waste their talents cheaply taking care of legacy DHL or Accenture clients’ software stacks.

            Well. Cheap labor isn’t about software or open-source. Open-source software developers are just realizing that they’ve been working for free, and looking for a way out. If you tell them “there’s no way out, we’ll just replace you with cheaper coders from $somewhere”, I’m not sure they’ll like your solution.

            Do you think that it all comes down to “throw money at the project and find the cheapest way to get the work done”? That’s how we get maquiladoras where I live (and elsewhere).

            There is no need to fund anyone after they did something, is there? For that you could have some sort of award system. Allocate some funds for projects that especially helped you in a given year and then let your developers vote on distribution.

            Sarcasm. :)

            Yes, that could work. Managing funds requires a legal entity in most parts of the world though, and raises a lot of challenges around organizing people, making them agree on stuff, paying them, taxes, legal obligations (like health insurance), etc. That’s also why “funds” are easier to share in a context you already know.

      2. 2

        Yeah, I find the pay suggestion interesting because of how widely varying pay is around the world. The author also says:

        you should target figures between 25% and 100% of a SWE compensation package

        $150k/yr is 100% of a senior SWE package where I live in the US (Ohio), which makes the high end of his range 200%. For this Google developer author, $150k/yr is probably 15-25% of a senior SWE package.

        There are engineers just as competent outside of California, and I’m sure the engineers here in Ohio or where you live in Czechia are just as good.

      3. 2

        Just between us, that’s 3x-6x going rate east of Germany. Even more east of Poland.

        Compare phk’s funding drive. More than 15 years old by now, but he wanted a little less than $60k - and he’s a uniquely skilled software engineer by most people’s standards: https://people.freebsd.org/~phk/funding.html

        The FAANGs - for now - make such profits that they have the luxury of ignoring the labour market, and paying 3-6x rates for engineers who have demonstrated some unique talent - like authoring an open source project, or living in Silicon Valley. We’ll see how long that lasts.

        1. 2

          The numbers in the article are not FAANGS SV numbers, see the footnote https://words.filippo.io/pay-maintainers/#fn1

          1. 3

            True, but it’s the 90th percentile developer salary of three cities with a notoriously high cost-of-living, which is then treated as the lower end of the range. I think it’s no surprise that the author works for a FAANG - very few people would find this kind of calculation indicative of actual salary expectations.

            1. 4

              Author here 👋 Note that I use the 90th percentile of all SWE salaries as indicative of the salary of a senior SWE. In my experience, $300k is actually very wrong on the low side for the non-FAANG NYC market, so that suggests the other numbers are conservative too. (Plausible explanations include: less than 10% of engineers are senior, and senior engineers don’t post their salary to levels.fyi. I believe both are true.)

              It’s true that NYC, Berlin, and London have a high cost of living, but salary is not based on cost of living. That’s one of the most amazing things companies have managed to convince engineers of. Do you think lawyers get paid based on cost of living? Post-pandemic, the market is flooded with remote positions that will pay the same in Berlin, Hamburg, and Leipzig.

              1. 3

                Author here 👋 Note that I use the 90th percentile of all SWE salaries as indicative of the salary of a senior SWE. In my experience, $300k is actually very wrong on the low side for the non-FAANG NYC market, so that suggests the other numbers are conservative too. (Plausible explanations include: less than 10% of engineers are senior, and senior engineers don’t post their salary to levels.fyi. I believe both are true.)

                I have my doubts about levels.fyi being a representative sample of the market; I’ve noticed that very “boring” companies that are nevertheless major employers (like Atos, CapGemini in the EU, Cognizant in the US) don’t seem to have a lot of data points. Not to mention the complete absence of the thousands of small businesses that employ the bulk of the work force.

                You’re also implicitly defining “senior” to mean someone who commands a salary in the 90th percentile - in the context of this discussion, I’m fine with that.

                I think your point about salary makes sense if you read it as “Some OSS maintainers make north of $500k/yr - don’t expect them to quit their day job for $50k/yr.” $1000/month is a “thank you” in the US, but - last I checked - still decent money for a junior software developer in Eastern Europe. Precisely because it’s a world-wide job market, and there’s a wide divergence in salaries, it makes sense to calibrate people’s expectations.

                For most companies, it makes a lot more financial sense to pay their employees to work on the project (giving them the added benefit of control and in-house expertise) than it is to pay 90th percentile money to someone external. Which is the very thing you’d like to avoid.

    6. 10

      I think about dependency ecosystem dynamics a lot, because my job is making Go applications secure, not simply making Go itself technically secure.

      K8s is thankfully an extreme case, and not necessarily an example of best practices given its size, but there are also a few things that help mitigate the author’s concern, and a few things in the pipeline that should help further.

      Worst-case scenario, something malicious could eventually appear in these and make its way up into other programs.

      First and most importantly, when a dependency is only a module dependency (like this test dep of a dep) and not a build dependency, it can’t get code into the build, so it can’t become malicious and actually cause damage. All it can do is raise the version of other dependencies, but not replace them (this is why replace directives don’t work from outside the main module!). A way to recognize these is that they have only one line in the go.sum (because their source is never downloaded), and don’t end up in the vendor directory.
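
      As an illustration of that tell (module paths here are made up), a build dependency gets two go.sum lines, an archive hash and a go.mod hash, while a module-graph-only dependency gets just the go.mod one:

        example.com/build-dep v1.2.0 h1:<hash of the source archive>
        example.com/build-dep v1.2.0/go.mod h1:<hash of its go.mod>
        example.com/graph-only-dep v0.9.0/go.mod h1:<hash of its go.mod>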

      The go mod why command was providing me with nothing.

      Dependencies that don’t affect the build don’t show up in go mod why, although I think they will show up in go mod why -m, which works at the module level rather than the package level.
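
      Concretely, that check looks something like this (hypothetical module path):

        $ go mod why example.com/graph-only-dep        # package level: finds no import chain
        $ go mod why -m example.com/graph-only-dep     # module level: prints the requirement chain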

      Starting in Go 1.17, a lot of these dependencies that don’t affect the build will get dropped even from the module dependency tree, thanks to lazy modules, a large refactor of how modules are loaded that I am really looking forward to.

      Relatedly, we are working on a first-class vulnerability tracking story, so that known vulnerabilities that affect the build will be easy to identify and remediate, even in a large tree like k8s.

      Finally, I am thinking about what tooling and documentation we could provide to help library and application authors actively manage their dependency trust tree. Maybe a GitHub Action (and CLI tool) to show the changes in dependencies that affect the build? A web UI to explore the graph, filtering for dependencies that affect the build, and highlighting trust domains? I’d love to hear what people would find useful here.

    7. 29

      Before jumping on the “don’t be evil eh” bandwagon, here’s what reading the post and the next two emails in the thread would have clarified:

      1. Any browser that meets a series of guidelines (like, not headless, supports JavaScript, not based on Node.js, …) is allowed
      2. There is a header to check in advance, and they did, and WebKit still works (https://lists.webkit.org/pipermail/webkit-dev/2020-November/031606.html)
      3. It looks like this might affect specifically OAuth flows

      This change makes perfect sense to me. I always cringe when an app pops open an embedded browser and asks me to login with Google because:

      1. I have to trust they did not fuck up certificate verification
      2. There is no address bar to check where the hell I’m typing my password into
      3. I am already logged in from my main browser, so why are you making this harder and less safe for me!

      Disclosure: I work for Google on completely different stuff.

      1. 8

        Yeah, the next message (by thread): [webkit-dev] Starting January 4, 2021, Google will block all sign-ins to Google accounts from embedded browser frameworks:

        Oh, I missed a very important point. There is a header we can use to 
        test: Google-Accounts-Check-OAuth-Login:true. I will try to figure out 
        how to hack up the libsoup backend to send that header with all 
        requests and see what happens....
        
    8. 12

      This “0-day” is complete nonsense. The article conflates active and passive fingerprinting, and relays and bridges, and seems to misunderstand a lot of the Tor security model.

      What they describe is a passive fingerprinting attack against public relays. There’s a public list of their IPs, so this is not at all an issue. The author gives some nebulous explanation about blocking 3000 IPs being too hard on busy networks, which makes no sense.

      Anyway, there is no way for the Tor TLS connections to look perfectly like a browser’s one. Many have tried but trust me when I say you can’t parrot a different TLS stack down to the details.

      The Tor project knows about both of these issues, and the solution is bridges and pluggable transports. Bridges are relays which are NOT publicly listed, so they can’t be blocklisted. Pluggable transports are wrappers that make the traffic harder to fingerprint and block. The current generation simply makes the traffic look entirely random.

      What the Tor project seems to be trying to tell them is about a known active fingerprinting attack against bridges, where bridges also have an open port that serves the classic Tor protocol. An active attacker (like the GFW) can port scan an IP to determine if it’s a bridge. Needless to say, this is more expensive than blocking 3000 IPs.

      What I expect happened but is not reported here is that the Tor project tried to explain the concept of bridges to the author, leading to the discussion of the latter attack.

      Sadly, the author took that as a confirmation of their passive attack, which has nothing to do with the ORPort issue. This is a common danger of running a bug bounty program: as soon as you acknowledge an issue, even one only tangentially related to the report, some researchers will take it as a wholesale validation of their point of view, and will demand the issue be fixed the way they think it should be, even if that doesn’t make sense.

      1. 3

        It really feels to me like this author is a fan of armchair quarterbacking. Everything everyone else does that differs from his viewpoint is absolutely wrong, yet he offers no tangible solution to fix those things only he sees as problems. Troll-ish tactics/behavior.

      2. 2

        Yeah I thought that the relay blocking 0-day was mostly nonsense. Reading about how angry the author got about the scrollbar width fingerprinting issue being ignored made me think that something juicy or exciting was to come.

    9. 4

      You know how there are “lies, damned lies, and statistics”? Even anecdotal data reports can be skewed by back-pressure. Re Golang, Filippo writes “Anecdotally, all interesting reports come unencrypted”.

      When I reported the problems of misuse of the Golang SSH library and almost all users skipping hostkey verification, I sent that report to the Go folks via PGP-encrypted mail. Brad Fitzpatrick asked me to resend cleartext, so after checking mail-server logs to be sure it was going out with TLS between my server and Google’s, I did.

      So sure, all the interesting reports do come in unencrypted, after you ask the senders of encrypted reports to resend. That’s still rather misleading.

      (Since I’m going to be asked for citations anyway, and balancing that against self-promotion, let’s just provide the cites here: https://bridge.grumpy-troll.org/2017/04/golang-ssh-security/ and https://bridge.grumpy-troll.org/2017/04/golang-ssh-redux/)

      1. 2

        For what it’s worth, I joined the team in early 2018 and can only speak to anecdotes from the past 2.5 years. Since then, we’ve never asked a reporter to resend in plaintext, I don’t remember any encrypted reports that led to a security release (although I might be mistaken), and I’m nearly certain we never had a report we felt required ongoing communication via PGP encrypted mail.

    10. 24

      There is no. winning. against a persistent unsandboxed local attacker. This comes up regularly disguised in different ways, such as reports that setting environment variables can compromise TLS, etc.

      If someone has full long-lasting control of the browser (like someone that can use a remote desktop thing to install an off-the-shelf rootkit) nothing will save you. Not 2FA, as they will just wait for you to use it or phish it from a real Google page; not a sync passphrase, as the browser needs it in memory to use the passwords and the attacker can just take it from there; not 1Password, as even if the vault is locked and their memory scrubbing implementation is perfect (which is unlikely), the attacker can just wait for you to type the password.

      This is not security nihilism (nothing is perfectly secure, etc.), this is just threat modeling. It’s important not to spend complexity and user tolerance on unwinnable battles.

      You could say “but this attacker looks incompetent so maybe it would have stopped them”, but they had access to the victim’s email, presumably they would just have reset any passwords they couldn’t find.

      Refreshing sessions still makes sense against opportunistic unsophisticated local attackers, like a domestic partner just sitting at the laptop that was left unlocked, but even then if they use that privilege to install malware it’s game over. (This is way less true on mobile platforms, because other apps can’t take control of the browser or mail app, and something we really should be doing better on at the platform level, but not a winnable battle on desktop at the web service level for now. Also remember this before criticizing an OS for tightening isolation.)

      (Disclaimer: I work at Google but have no particular insight into the account security team’s thinking and decisions.)

      1. 2

        This is way less true on mobile platforms, because other apps can’t take control of the browser or mail app

        Agreed. However this doesn’t mitigate the risk completely, for instance an accessibility service on Android can do a lot.

        Adding a TLS CA to the system or enabling a nasty VPN service displays something, but I’d guess most users will just ignore it.

    11. 9

      Since this loads all of its code live from an HTTPS website through the browser, its security boils down to the security of TLS and the WebPKI. On top of that the user is also trusting the website operator, the openpgpjs codebase, and the proof verification implementation.

      Instead of verifying a file through this, one could download it from an HTTPS URL with equivalent or better security. For encryption, the same holds for an HTTPS form.

      1. 4

        Maybe I’m misinterpreting the site author’s intent, but to me this looks like just a convenience profile page + some utils. The same argument holds for Keybase, which also has verify and encrypt pages. So while the “browser crypto is broken by design” point is valid, it’s actually nothing new. I do admit that the author could clarify this better on the page.

        Or are you pointing out that there is no standalone app like Keybase has for the non-browser usage?

    12. 40

      Bernstein’s response is something. Here’s a story.

      The fastest assembly implementations (amd64-xmm5 and amd64-xmm6) of Salsa20 available from its homepage still have a vulnerability such that they will loop and leak plaintext after 256 GiB of keystream (that’s 2^32 64-byte blocks, the point at which a 32-bit block counter wraps).

      This was reported to the Go project last year because our assembly was derived from it. We fixed it in Go, and published a patch for the upstream.

      He declared it WONTFIX because there is an empty file called “warning-256gb” in a benchmarks tarball hosted elsewhere. He tweeted we should have seen it. The file was added 4 years after the Go port was made.

      1. 16

        Filippo! Thanks for your two recent posts on OpenSSH keys from U2F tokens. It’s been nice to see you up to yet more interesting crypto lately, in addition to all the public go crypto work.

        You probably know as much (it was all discussed here a few months ago), but qmail itself is a similar long-arc story and a lost opportunity. Even today it has one of the better security designs in a mail server, and back then, it inspired a series of really great patterns and tools, such as those that ship with runit. But, DJB was never willing to take on a traditional open source maintainer role, nor to let anyone else do that with the upstream source. So it never was allowed to ship as distro-specific binary packages, it never got updated to do SMTP auth, it required outside patches to work with linux because of a war on errno.h, etc. (Even so, Artifex.org used it for roughly fifteen years before moving finally to OpenSMTPD. . . and I never had to scramble to patch a CVE for it, unlike the latter.)

        So I feel conflicted about it all. On the one hand, DJB’s done more for open cryptography than just about anyone, he’s done fairly reliable software development, and he hasn’t gone off into some sort of St. IGNUcius weird place like Richard Stallman, either. But on the other, does it really take that much generosity of spirit to admit fault and accept a patch? If Linus can learn to be less of a jerk on email, then maybe a cryptographer can learn to accept bug reports for the helpful things that they are.

      2. 7

        djb’s personality is the worst thing about djb’s software.

    13. -1
      1. 19

        I am interpreting presentation as the artwork, apologies if that’s not what you meant.

        You wouldn’t know because you passed on it, but it’s a well researched, interestingly opinionated piece that discusses the nuances of a cryptographic construction in an accessible way, while acknowledging what’s good about the design. And it has some cute drawings. We need more of this, and fewer hot takes on how this or that is bad and dumb, IMHO.

        I am surprised and disappointed to see a comment on Lobsters which is gratuitously negative, dismisses a point of view because of the innocuous identity expression of the author, and tries to enforce the stereotype of technical content having to be inexpressive and boring and dry and nerdy. I have downvoted this as Unkind and I hope you’ll take the time to think about whether that was appropriate.

        1. 6

          The original comment has since been removed so I can’t see if this was acknowledged, but the author’s about page gives some (very genuine) rationale for the format.

          1. 10

            I didn’t mean that GP dismissed the article directly because of the author’s identity (but I see that’s what it looked like, sorry about that) and I do believe they found the artwork distracting.

            However the artwork is (presumably) an expression of the author’s identity, and not making the effort to scroll past a couple images makes the community less inclusive. Commenting negatively in a place the author is likely to see it is also sure not to make them feel welcome. Said another way, if we only accept technical content presented to the non-technical tastes of the majority, we are certain to keep having a community with little diversity.

            Likewise with emoji, some people find that they help them learn and communicate, some find them distracting. Working past that is key in including different folks with different styles. It’s especially important for the people in the majority to make an effort, because underrepresented people already have to adapt most of the time (or they just get worn down, feel unwelcome, and leave, at a loss for everyone).

            1. 1

              This is no different from dismissing an article because the author has pictures depicting attractive females in a way that women claim makes them feel uncomfortable or excluded, other than in what the specific sort of sexualized images associated with the article are, and their political valence.

              1. 4

                I don’t see any sexualized images in the linked article.

            2. -7

              Gonna be honest here, the artwork was distracting from the get go so I blocked them at the first image with ubo. I try to not care about orientations or whatever political/sexual/racial/feeling things people are driven to make a part of their identity. I just find it annoying that the author is using an article featuring a technical subject I’m interested in reading about in order to try and normalize his particular psychological baggage. I don’t care.

              With that said, it’s a solid article, even though, if you’re stuck using the WebCrypto API for any reason, AES-GCM is the best option you have, and it’s also not difficult to get right - the OpenPGP.js code is pretty easy to follow for implementing the proper protocol. The best, simple option if you’re developing for the web and have more control is libsodium.js, which uses ChaCha and Salsa.

              1. 6

                Considering all the websites with obnoxious, pointless photo headers, scrolling past cute fursona art is a welcome change.

              2. 3

                I just find it annoying that the author is using an article featuring a technical subject I’m interested in reading about in order to try and normalize his particular psychological baggage.

                What particular psychological baggage are you referring to?

                I don’t care.

                When you say “I don’t care” do you really mean to say “I don’t want to see it because it bothers me and therefore I care a lot but in a negative way”?

                1. 0

                  What particular psychological baggage are you referring to?

                  That being gay or furry or whatever is something to be proud about or guilty about or whatever. It’s no different from liking a certain tv show that other people don’t. It has nothing to do with me, do whatever you want.

                  When you say “I don’t care” do you really mean to say “I don’t want to see it because it bothers me and therefore I care a lot but in a negative way”?

                  More like “I don’t care to take in random furry imagery when I’m concentrating on a technical topic.” Random pictures of babies, cats, or stupid gifs have annoyed me in the same fashion. Stop getting offended.

                  1. 5

                    That being gay or furry or whatever is something to be proud about or guilty about or whatever. It’s no different from liking a certain tv show that other people don’t. It has nothing to do with me, do whatever you want.

                    I’ll take your word for it that you feel like it has nothing to do with you, and therefore everyone should do whatever they want. That’s a generally fair attitude to have, and not too dissimilar to how I approach people whose interests don’t align with mine.

                    For example, one of the repeat Pwn2Own winners goes by “Pinkie Pie”, and is (at least as far as I can tell) a member of the My Little Pony fandom (a.k.a. “bronies”). They’re known to publish brilliant, novel exploitation techniques under that alias.

                    Now, I’m not interested in My Little Pony or its fandom. When I see an article about Pinkie Pie winning Pwn2Own, I don’t take to the comment threads to express annoyance at their celebrating an interest they hold that I do not. I let them do whatever they want, since it has nothing to do with me.

                    Stop getting offended.

                    I’m not offended. I just found the remarks about the author’s “particular psychological baggage” odd (and out of place for this forum).

                    Also: If the art bothers you enough to use uBlock Origin to hide all the images and then tell everyone you did… doesn’t that imply that someone other than me is offended?

                    1. -1

                      It implies nothing of the sort, and Pinkie Pie is god damned admirable.

                  2. 4

                    Do you generally start/join off-topic tangents about those? What prompted you to draw your line here?

                    1. 1

                      No.

                      Because I read jakob’s comment then checked out the blog’s about page. He’s including the images with a specific non-humorous, non-knowledge transferring purpose in mind. That annoys me more than usual.

                      1. 3

                        See, I read it a whole other way. The art transfers knowledge to me: “I recognize this person!”

                        If you don’t like it as identity affirmation, treat it as branding. A lot of infosec furries who might have skipped it will take a deeper look because they recognize soatok where they might only skim something on a bland stock theme. There are a lot of furry/furry-adjacent people who care about this topic. Some will weigh their familiarity with the author in determining how deep a read they give it.

                      2. 2

                        He’s including the images with a specific non-humorous, non-knowledge transferring purpose in mind. That annoys me more than usual.

                        The author addresses this in the about page:

                        The context it’s asked in is usually, “Who cares about [aspect of identity], shouldn’t your blog be about [technical content divorced of humanity]?!”

                        […]

                        Second, representation matters.

                        People who feel nervous being open and authentic about who they are (especially junior developers) will feel even more pressure to remain hidden (to their own detriment) if no one is relatable to them.

                        So, I promise, I’m not being loud about my identity or interests to spite you. I’m doing it to comfort people like me. And that distinction matters.

                        1. 0

                          I noticed. It’s fine that the message is crafted to inspire or comfort a specific audience but the way it’s done means there’s more useless content to filter out for those that aren’t in the target group. The distinction matters little when the effect is the same.

                          1. 3

                            For almost the entire course of human history, people in the “out groups” have had to conceal messages intended for them so as not to risk ostracization or worse by the “in group”. I’m sad to see that some people still think that has to be the norm.

                            1. -2

                              You’re adding to a reply chain that you don’t know the context of with your incredibly wrong and generic comment. Try harder, please.

                              There will always be in groups and out groups. To use a simple example: would you consider all of the messages that you send your family or friends to be free of anything that the general public would ostracize you for?

                              Everyone hides. It’s normal.

                              1. 4

                                I have read every comment to this submission.

                                Try harder, please.

                                I will indeed try harder to be more understanding of other people’s experiences and personalities.

                                Please take your own advice to heart.

              3. -1

                This is a meta comment, but I don’t consider anything I’ve said to be trolling, as none of it was intentionally inflammatory. This reply chain was already off the rails, so any downvotes marking it as troll are pretty wrong. Incorrect or troll, no: these are truthful opinions. Unkind or off-topic, sure.

    14. 8

      Strongly endorsed.

      We go to great lengths to uphold the Go 1 Compatibility Promise, ensuring Go programs keep compiling and running correctly with the latest release. Backwards incompatible changes are simply not contemplated.

      (We are in the early days of a “Go 2” effort which might introduce backwards incompatible changes with module-level opt-in, which should allow forward progress without breaking existing code or fracturing the ecosystem.)

      This has a lot of costs, mostly borne by the team (as opposed to community developers), but a critical dividend is the fact that we can mostly assume people are using recent versions.

      I recommend assuming the Go packaged with your (non-rolling) distribution is part of the machinery that builds other packages, not something for you to consume, and to install an up to date Go if you directly use the go tool.

      Finally, with my security coordinator hat on, we currently only accept vulnerability reports and publish security patches for the latest two releases, so while the security teams of distributions might be doing backports, it’s unclear if anyone is looking for or acknowledging vulnerabilities in unsupported releases.

      P.S. it also occurs to me now that if distributions modify the Go compiler or standard library (for example to backport fixes) the builds they generate won’t be reproducible by others.

      1. 3

        Thank you for your work! There are way too many components that make things easier for themselves and create lots of work for their users. I find that a terrible trade-off and I really appreciate how Go does it better.

        A big reason that I can easily use the latest Go is that I only have to update the SDK. I don’t have to also roll out a new runtime to all my deployment targets.

    15. 4

      SetDeadline on a net.Conn or os.File will cancel the read or write, and deadlines can be reset to resume later, unlike Close. Go 1.15 is introducing os.ErrDeadlineExceeded to make it easy to distinguish a deadline-induced error.
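
      A minimal sketch of the pattern, assuming Go 1.15+ (the endpoint, timeout, and names here are placeholders of mine, not anything from the thread):

          package main

          import (
              "errors"
              "log"
              "net"
              "os"
              "time"
          )

          func main() {
              conn, err := net.Dial("tcp", "example.com:80") // placeholder endpoint
              if err != nil {
                  log.Fatal(err)
              }
              defer conn.Close()

              // Cancel any Read still blocked after one second.
              conn.SetReadDeadline(time.Now().Add(time.Second))

              buf := make([]byte, 1024)
              if _, err := conn.Read(buf); errors.Is(err, os.ErrDeadlineExceeded) {
                  // Unlike Close, this leaves the connection usable: reset
                  // the deadline (the zero time means no deadline) and the
                  // read can be retried later.
                  conn.SetReadDeadline(time.Time{})
                  log.Println("read timed out; connection still open")
              }
          }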

      1. 1

        Hm, then I’m starting to wonder whether it would make sense for me to try and use some kind of “DeadlineReader/Writer” interface everywhere now

    16. 4

      Just to check, does this mean we can skip passphrases for the ecdsa-sk keys? ssh-keygen still asks for one, but assuming I’m comfortable with the security model of Yubikey possession == access, is a passphrase still necessary?

      1. 3

        Yep, even without a passphrase for most tokens it’s still private key file + hardware token. With a passphrase it should be private key file + passphrase + hardware token. With resident credentials it’s just hardware token.

        1. 1

          Awesome, thanks for confirming Filo!

      2. 2

        To answer my own question: all I had to do was read past the first half of the article.

        • Without a passphrase, authentication requires possession of at least the hardware token. If the hardware token is implemented well, it also requires possession of the private key file.
        • With a passphrase, authentication might require the passphrase and private key file. If the hardware token is implemented well, authentication requires the passphrase, hardware token, and private key file.

        So in both cases access implies possession of the hardware token, and if I’m comfortable with that being sufficient then I’m free to skip adding a passphrase.

    17. 2

      On macOS, should you always be installing OpenSSH with Homebrew to get this and other updates? I don’t normally install OpenSSH, but I do use Homebrew extensively for other things.

      1. 3

        macOS is at OpenSSH 8.1, so it doesn’t support it, and we don’t know yet if they’ll build it with the native support.

      2. 2

        Unfortunately installing from Homebrew doesn’t replace the system-provided ssh-agent, so the agent started automatically by the OS won’t be able to load the ecdsa-sk key type.

        SIP won’t let you modify the /usr/bin/ssh-agent binary or edit the launchd plist. In theory you could create a new launchd service to run the brew-installed ssh-agent, but then you lose Keychain support for the passphrase. Depends if that’s important to you.

        1. 2

          Oh, fascinating. macOS does indeed run a default on-demand ssh-agent, and the socket path is magically passed in the environment of login shells. I did not know this! Kinda surprised that Homebrew would ship its own ssh-add by default when again by default it would talk to the system ssh-agent. I wonder what the backwards compatibility of that protocol is.

          https://opensource.apple.com/source/OpenSSH/OpenSSH-235/openssh/ssh-agent.c.auto.html

          The good news is that if I read this right we’d be able to load the Homebrew FIDO2 middleware into the system agent if they don’t build it.

          The bad news is that Apple squatted on “ssh-add -K” for keychain support, and now that’s a real option for loading resident keys 🤷‍♂️

        2. 1

          They did update the OpenSSH version that originally came with Catalina, from 7.9 to 8.1. So let’s hope that there will be a 10.15.5 with OpenSSH 8.2.

    18. 17

      In the docs for http.Transport, . . . you can see that a zero value means that the timeout is infinite, so the connections are never closed. Over time the sockets accumulate and you end up running out of file descriptors.

      This is definitely not true. You can only bump against this condition if you don’t drain and close the http.Response.Body you get from an http.Client, but even then, you’ll hit the default MaxIdleConnsPerHost (2) and connections will cycle.
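
      For reference, a minimal sketch of the drain-and-close pattern (fetch is a made-up helper; io.Discard assumes Go 1.16+, older code used ioutil.Discard):

          package main

          import (
              "io"
              "log"
              "net/http"
          )

          // fetch drains and closes the response body so the keep-alive
          // connection is returned to the Transport's idle pool for reuse.
          func fetch(url string) error {
              resp, err := http.Get(url)
              if err != nil {
                  return err
              }
              defer resp.Body.Close()

              // Read to EOF; without this the connection can't be reused.
              _, err = io.Copy(io.Discard, resp.Body)
              return err
          }

          func main() {
              if err := fetch("https://example.com"); err != nil {
                  log.Fatal(err)
              }
          }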

      Similarly,

      The solution to [nil maps] is not elegant. It’s defensive programming.

      No, it’s providing a constructor for the type. The author acknowledges this, and then states

      nothing prevents the user from initializing their struct with utils.Collections{} and causing a heap of issues down the line

      but in Go it’s normal and expected that the zero value of a type might not be usable.
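
      As a sketch of what that constructor looks like (the items field and Add method are my invention, loosely following the article’s utils.Collections):

          package utils

          type Collections struct {
              items map[string][]string // nil in the zero value
          }

          // NewCollections is the constructor: it initializes the internal
          // map so writes through the methods can't panic on a nil map.
          func NewCollections() *Collections {
              return &Collections{items: make(map[string][]string)}
          }

          // Add appends value under key. Called on the zero value
          // utils.Collections{}, the assignment below would panic, which is
          // why the constructor should be documented as the entry point.
          func (c *Collections) Add(key, value string) {
              c.items[key] = append(c.items[key], value)
          }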

      I don’t know. They’re not bright spots, but spend more time with the language and these things become clear.

      1. 4

        If you really want to prevent users of your library from using the {} syntax to create new objects instead of using your constructor, you can choose to not export the struct type and instead export an interface that is used as the return value type of the constructor’s function signature.

        1. 10

          You should basically never return interface values, or export interfaces that will only have one implementation. There are many reasons, but my favourite one is that it needlessly breaks go-to-definition.

          Instead, try to make the zero value meaningful, and if that’s not possible provide a New constructor and document it. That’s common in the standard library so all Go developers are exposed to the pattern early enough.
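
          A sketch of the zero-value approach, in the spirit of bytes.Buffer (Registry and its methods are made up for illustration):

              package main

              import "fmt"

              // Registry's zero value is ready to use: the map is initialized
              // lazily on first write, and reads from a nil map are safe in Go.
              type Registry struct {
                  entries map[string]string
              }

              func (r *Registry) Set(k, v string) {
                  if r.entries == nil {
                      r.entries = make(map[string]string)
                  }
                  r.entries[k] = v
              }

              func (r *Registry) Get(k string) (string, bool) {
                  v, ok := r.entries[k]
                  return v, ok
              }

              func main() {
                  var r Registry // zero value, no constructor needed
                  r.Set("answer", "42")
                  fmt.Println(r.Get("answer")) // 42 true
              }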

          1. 2

            Breaking go-to-definition like that is the most annoying thing about Kubernetes libraries.

        2. 4

          That would be pretty nonidiomatic.

        3. 1

          Yeah, this is a good approach sometimes, but the indirection can be confusing.

    19. 10

      Looks like this frontend was very out of date, targeting Go 1.5.

      There’s a much more maintained Go frontend for LLVM called gollvm: https://go.googlesource.com/gollvm/

    20. 9

      I personally prefer to stick with nginx after having seen how the Caddy dev handled this security issue. But that’s just me.

      1. 8

        His responses when an LE outage meant that no one could restart (or start, if stopped) Caddy despite having valid certs convinced me it’s not a viable tool.

        “Automagic TLS Certificates”, which really means “an ACME client you don’t have direct control over”, is not really a feature I’d see as valuable for anyone with the slightest bit of operator/admin experience.

      2. 7

        That’s barely a “security vulnerability”, and there was a lot of accusatory debate, including on unrelated matters like the EULA, the kind that gives us security people a bad rep with open source maintainers.

      3. 4

        I really only skimmed it, but what didn’t you like about it? They seemed pretty open to discussion and it seems to have been resolved?

        1. 1

          It was more the developer’s attitude towards the security issue itself: they straight up dismissed it.

      4. 2

        If I understand this correctly, it’s rather a leakage of public certificates (e.g. for other subdomains) available on the same server.