1. 162
  1. 39

    Instead of disabling the Go proxy and sumdb for all modules (even the ones not hosted on SourceHut), I recommend the following workaround:

    $ export GOPRIVATE=git.sr.ht
    $ go get # will use goproxy, except for modules hosted at git.sr.ht
    

    Documentation: https://goproxy.io/docs/GOPRIVATE-env.html
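
    By the way, GOPRIVATE accepts a comma-separated list of glob patterns, so a single setting can cover several hosts (the second pattern below is a made-up example):

    $ go env -w 'GOPRIVATE=git.sr.ht,*.internal.example.com'
    $ go env GOPRIVATE  # confirm the setting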

    1. 6

      I don’t understand how this is different from the workaround they recommend?

      1. 20

        the post has been updated by its author: https://news.ycombinator.com/item?id=34311022

        1. 2

          Ah, that explains it, thanks! :)

    2. 18

      Google’s justification, that the cost is too high, does not seem entirely honest from the outside. Their proxy is broken if it needs several full git clones daily.

      Sourcehut didn’t apply for delisting when offered, which would have led them to the same state some 2 years ago… I guess they preferred serving the traffic all this time when it was already clear that Google wouldn’t lift a finger.

      Both parties here show a great deal of entitlement, which is not helping when you’d want to root for the underdog.

      1. 36

        Do you mean SourceHut’s? SourceHut is complaining that Google’s proxy is causing them high cost.

        And how the hell is SourceHut showing “entitlement” here? Not DoSing another server with your huge enterprise pipe, pointlessly, because you can’t be bothered to code the most basic of internal synchronization is not entitlement, it’s very basic etiquette.

        1. 9

          They offered to stop 2 years ago and he refused on principle. You may or may not care about the ideology he’s espousing, but it is disingenuous to say they didn’t offer to solve the problem in a way that didn’t harm end users.

          1. 15

            They offered to stop DDoSing the official instance only, not other instances, so that wouldn’t really solve the problem.

            1. 4

              Which is irrelevant to Drew’s problem. And the same offer is available to those other instances.

              1. 22

                You should not have to opt out of denial of service attacks.

                When you get immense traffic and the source of the traffic offers to exempt you specifically from the massive demand their bad code is creating for no good reason, then it is morally righteous to take a stand and say “no. fix your shit.”

                1. 1

                  Sourcehut did not suffer from a DoS attack from Google though. They complain about excessive traffic, and have been able to sustain it for at least 2 years.

                  I’m sure they deserve being called “morally righteous” though. 🤣

                2. 9

                  How dare he want a solution that would benefit all people using his software and not just him! So entitled!

            2. 7

              When someone [Google] offers you a solution to a particular issue, but you refuse it for reasons and instead require them to fix the issue in a different manner that better suits you, I call that entitlement.

              I’d add self-righteousness to the mix for ranting about how bad they’ve been to you, and the rest of the Internet, and how you’re forced to take a solution that may be detrimental to your users, instead of trying to find a middle ground.

              Luckily, sr.ht managed to implement account deletion (it only took 2 years).

              1. 23

                If someone was periodically breaking into your house and offered to stop if you write your address on a list, would you consider that fair?

                1. 4

                  Is that a fair analogy though? I’m not trying to defend Google, they have a problem and are not solving it, but internet sites and houses aren’t really the same, especially in this case where they both run publicly available services.

                  Perhaps a business analogy would work better.

                  If you ran a small shoe store, with only your spouse and you working, and Nike kept calling and asking if you had time to repair some shoes, “just in case someone needed some repairs”, that would be closer, I think? I’m probably just overthinking it.

                  Personally I agree that yes, Google is acting entitled here; and Drew, he’s not exactly entitled, but he is ignoring reality.

                  I know I’m mostly wrong, but it does look like this from the sidelines: “Sorry we’re causing an inconvenience. Want us to avoid bombing your site?” - “No, I don’t, I want you to accept the same ideology as I have.”

                  1. 12

                    When “the ideology” is “you should not bomb sites”, I think this is eminently reasonable.

          2. 36

            [Speaking with no hat on, and note I am not at Google anymore]

            Any origin can ask to be excluded from automatic refreshes. SourceHut is aware of this, but has not elected to do so. That would stop all automated traffic from the mirror, and SourceHut could send a simple HTTP GET to proxy.golang.org for new tags if it wished not to wait for users to request new versions. That would have caused no disruption.
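
            For reference, such a GET is just the module proxy protocol, so a refresh request could look something like this (the module path here is a placeholder):

            $ curl https://proxy.golang.org/git.sr.ht/~user/project/@latest   # resolve the newest version
            $ curl https://proxy.golang.org/git.sr.ht/~user/project/@v/list   # list known versions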

            This is definitely a manual mechanism, but I understand why the team has not built an automated process for something that was requested a total of two times, AFAIK. Even if this process was automated, relying on the general robots.txt feels inappropriate to me, given this is not web traffic, so it would still require an explicit change to signal a preference to the Go Modules Mirror, taking about as much effort from origin operators as opening an issue or sending an email.

            Everyone can make their own assessment of what is a reasonable default and what counts as a DoS (and they are welcome to opt out of any traffic), but note that 4 GB per day is 0.3704 Mbps.

            I don’t have access to the internal mirror architecture and I don’t remember it well, nor would I comment on it if I did, but I’ll mention that claims like a single repository being fetched “over 100 times per hour” sound unlikely and incompatible with other public claims on the public issue tracker, unless those repositories host multiple modules. Likewise, it can be easily experimentally verified that fetchers don’t in fact operate without coordination.

            1. 82

              Sounds like it’s 4 GB per day per module, and presumably there are a lot of modules.

              The more I think about it, the more outrageous it seems. Google’s a giant company with piles of cash, and they’re cutting corners and pushing work (and costs) off to unrelated small and tiny projects?

              They really expect people with no knowledge of Go whatsoever (Git hosting providers) will magically know to visit an obscure GitHub issue and request to be excluded from this potential DoS?

              Why is the process so secretive and obscure? Why not make the proxy opt-in for both users and origins? As a user, I don’t want my requests (no, not even my Go module selection) going to an adware/spyware company.

              1. 3

                It’s a question of reliability. Go apps import URLs. Imagine a large Go app that depends on 32 websites. Even if each of those websites is 99.9% reliable (and very few are), (1-.999**32)*100 means there’s a 3.15% chance your build will fail. I think companies like creating these kinds of problems, since the only solution ends up yielding a treasure trove of business intelligence. The CIA loves funding things like package managers. However it becomes harder to come across looking like the good guys when you get lazy writing the backend and shaft service operators, who not only have to pay enormous egress bandwidth fees, but are also denied any visibility into who and how many people their resources are actually supporting.
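
                (The arithmetic checks out; one way to verify:)

                $ python3 -c 'print((1 - 0.999**32) * 100)'  # ≈ 3.15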

                1. 3

                  Go apps import URLs. Imagine a large Go app that depends on 32 websites. Even if each of those websites is 99.9% reliable (and very few are), (1-.999**32)*100 means there’s a 3.15% chance your build will fail.

                  I do hope they do the sane thing and only try to download packages when you mash the update button, instead of every time you do yet another debug build? Having updates fail from time to time is annoying for sure, but it’s not a “sorry boss, can’t test anything today, build is failing because left pad is down” kind of hard blocker.

                  1. 4

                    Go has a local cache and only polls for lib changes when explicitly told to do so.
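
                    A rough sketch of the behavior:

                    $ go env GOMODCACHE   # where downloaded modules are cached
                    $ go build ./...      # resolves from the local cache once populated
                    $ go get -u ./...     # only an explicit update polls for new versions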

                    1. 2

                      Thanks. I was worried there for a minute.

                    2. 2

                      If you have CI that builds on every commit and you don’t take extra steps to set up a cache for it, you will download the packages on every commit.
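
                      The usual fix is to persist the module cache between runs; a sketch, with an illustrative path:

                      $ export GOMODCACHE=/ci-cache/go-mod   # point the cache at a persisted directory
                      $ go mod download                      # later runs mostly hit the cache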

                      1. 1

                        Ah… well, I remember our CI at work failing every so often because of random network problems. Often restarting it was enough. But damn was this annoying. All external dependencies should be cached and locked in some way, so the CI provides a stable, reliable environment.

                        For instance, CI shouldn’t have to build a brand new Docker image or equivalent each time it does its thing. It should instead depend on a standard image with the dependencies we need that everyone uses. Only when we update those external dependencies should the image be refreshed.

                    3. 1

                      I have a lot of sympathy with Google here. I am using vcpkg for some personal projects and hit a problem last year where the canonical source of the libxml2 sources (which I needed as an indirect dependency of something else) was down. Unlike go and the FreeBSD ports infrastructure, vcpkg does not maintain a cache of the distribution files and so it was impossible to build my local project until I found a random mirror of the libxml2 tarball that had the right SHA hash and manually downloaded it.

                      That said, 4 GiB/day/repo sounds excessive. I’d expect that the update should need to sync only when it sees a new revision and even if it’s doing a full git clone rather than an update, that’s a surprising amount of traffic.

                  2. 88

                    Deciding that an automated mirror talking HTTP isn’t “web traffic” and thus shouldn’t respect robots.txt is definitely a take. And suggesting that “any origin” write a custom integration to work around Go’s abuse of the git protocol? Cool, put the work on others.

                    And according to the blog post, the proxy didn’t provide a user agent until prompted by sr.ht. That kind of poor behaviour makes it hard to open issues or send emails.

                    Moreover, I don’t think the blog post claimed 4 GB/day is a DoS. It said a single module could produce that much traffic. It said the total traffic was 70% of their load.

                    No empathy for organisations that aren’t operating at Google scale?

                    1. 10

                      Deciding that an automated mirror talking HTTP isn’t “web traffic” and thus shouldn’t respect robots.txt is definitely a take.

                      No, I am saying that looking at the Crawl-delay clause of a robots.txt which is probably 1 or 10 (e.g. https://lobste.rs/robots.txt) is malicious compliance at best, since one git clone per second is probably not what the origin meant. Please don’t flame based on the least charitable interpretation, there’s already a lot of that around this issue.

                      1. 34

                        For what it’s worth, 1 clone per second would still probably be less than what Google is currently sending them. Their metrics are open, and as you can see over the last day they have served about 2.2 clones per second; if we assume that 70% of those are from Google, it comes out to roughly 1.5 clones per second.

                    2. 27

                      I think it’s pretty obvious to any bystander that SourceHut has requested a stop to the automatic refreshes. The phrase “patrician and sadistic” comes to mind when I think about this situation.

                      1. 11

                        They have explicitly stated in other locations that they have not requested the opt-out from automatic refreshes, for various reasons.

                        1. 28

                          Sure, filling out Google’s form legitimizes Google’s actions up to that point. Nonetheless, there was a clear request to stop the excess traffic, and we should not ignore that request simply because it did not fit Google’s bureaucracy.

                          1. 8

                            I was specifically responding to

                            I think it’s pretty obvious to any bystander that SourceHut has requested a stop to the automatic refreshes.

                            No, they did not. They have explicitly rejected the option to have them stopped for various reasons, perhaps even the ones you hypothesized.

                            1. 18

                              I appreciate your position, but I think it’s something of a beware-of-the-leopard situation; it’s quite easy to stand atop a bureaucracy and disclaim responsibility for its actions. We shouldn’t stoop to victim-blaming, even when the victims are not personable.

                              1. 6

                                I haven’t taken a position. I’m stating that your statement was factually incorrect. You said that it is “pretty obvious” that they requested something when the exact opposite is true, and I wanted to correct the record.

                                1. 8

                                  You are arguing that they did not fill out the Google-provided form. The person you’re arguing with didn’t say they did; they said they requested that Google stop doing the thing.

                                  1. 5

                                    They did not request that Google stops doing the thing. There is no form to fill out. Literally stating “please stop the automatic refreshes” would be enough. They explicitly want Google to continue doing the thing but at a reasonable rate.

                                    1. 19

                                      They explicitly want Google to continue doing the thing but at a reasonable rate.

                                      Which in my opinion is the only reasonable course of action. Right now Google is imposing a lazy, harmful, and monopolistic dilemma: either suck up the unnecessary traffic and pay for this wasted bandwidth & server power (the default), or seriously hurt your ability to provide Go packages. That’s a false dichotomy, Google can do better. They just won’t.

                                      Don’t get me wrong, I’m not blaming any single person in the Go team here, I have no idea what environment they are living in, and what orders they might be receiving. The result nevertheless makes the world a worse place.

                                      It’s also a classic: big internet companies give us the same crap about email and spam filtering, where their technical choices just so happen to seriously hamper the effectiveness of small or personal email providers. They have lots of plausible reasons for these choices, but the result is the same: if you want your email to get through, you often need their services. How convenient.

                                      1. 6

                                        That’s a false dichotomy, Google can do better. They just won’t.

                                        You may disagree with the prioritization, but they have made progress and will continue to do so. Saying “they just won’t” is hyperbolic and false.

                                        The result nevertheless makes the world a worse place.

                                        You have stated that you don’t know what “environment they are living in, and what orders they might be receiving”. What if instead of working on this issue, they worked on something else that, if left unhandled, would have made the world an even worse place? This statement is indicative of your characteristic bad faith when discussing anything about Go.

                                        I don’t think everything that the Go developers have done is correct, or that every language decision Go has made is correct, but it’s important to root those judgements in facts and reality instead of uncharitable interpretations and fiction.

                                        Because you seem inclined to argue in bad faith about Go both here and in past discussions we’ve had [1], I think any further discussion on this is going to fall on deaf ears, so I won’t be engaging any further on this topic with you.

                                        [1] here you realize you don’t have very good knowledge of how Go works (commendable!) and later here you continue to claim knowledge of mistakes they made without having full knowledge of what they even did.

                                        1. 6

                                          My, the mere fact that you remember my only significant Go thread on Lobsters is surprisingly informative. But… aren’t you going a little off topic here? This is a software distribution and networking issue, nothing to do with the language itself.

                                          You have stated that you don’t know what “environment they are living in, and what orders they might be receiving”. What if instead of working on this issue, they worked on something else that, if left unhandled, would have made the world an even worse place?

                                          That’s a nice fully general counterargument you have there: no matter what fix or feature I request, you can always say maybe something else should take precedence. Bonus points for quoting me out of context.

                                          Now in that quote, “they” is referring to Google’s employees, not Google itself. I’ve seen enough good people working in toxic environments to tell the difference between a faceless company and the actual people working there. This was just me trying to criticise Google itself, not any specific person or team.

                                          As for your actual argument, Google isn’t exactly resourceless. They can spend money and hire people, so opportunity costs are hardly a thing for them. Moreover, had they cared about bandwidth from the start, they could have planned a decent architecture up front and spent even less time on this.

                                          But all this is weaselling around the core issue: Google is wasting other people’s bandwidth, and they ought to have stopped two years ago. When you’re not a superhuman AI with all self-serving biases programmed out of you, you don’t get to play the “greater good” card without a damn good specific reason. We humans need ethics.

                                      2. 10

                                        If you were calling someone several times a day and they said “Hey. Don’t call me several times a day. Call me less often. Maybe weekly,” but you persisted in calling them multiple times a day, it would not be a reasonable defense to say “They don’t want me to never call them, they only want me to not call them the amount I am calling them, which I will continue to do.”

                                        But like, also you should know better than to bother people like that. They shouldn’t need to ask. It is not reasonable to assume a lack of confrontation is acquiescence to poor behavior. Quit bothering people.

                                        1. 3

                                          In your hypothetical the caller then said “Sorry for calling you so often. Would you like me to stop calling you entirely while I figure out how to call you less?” and the response was “No, I want you to continue to call me while you figure out how to call me less.”

                                          That is materially different than a request to stop all calls.

                                          No one is arguing that the request to be called less is unreasonable. I am pointing out that the option to have no calls at all was provided and that they explicitly rejected for whatever reasons they decided. This is not a value judgement on either party, but a clarification of facts.

                                          1. 7

                                            Don’t ignore the fact that those calls (to continue the analogy) actually contained important information. The way I understand it, being left out of the refresh service significantly hurts the ability of the provider to serve Go packages. It’s really a choice between several calls a day and a job opportunity once a fortnight or so; or no call at all and missing out.

                                            Tough choice.

                            2. 21

                              Yes, instead they have requested that the automatic refreshes be made better.

                              Which is a very reasonable request as right now they’re just bad.

                          2. 10

                            I appreciate where you’re coming from with this. Having VERY recently separated from a MegaCorp, I can say this is exactly the logic a bizdev person deciding what work gets funded and what doesn’t would use in this situation.

                            But again I ask - is this level of dependence on a corporate owner healthy for a programming language with such massively wide adoption?

                            It would be interesting to do a detailed compare and contrast between the two.

                            1. 4

                              is this level of dependence on a corporate owner healthy for a programming language with such massively wide adoption?

                              Java used to have such a dependence. It indeed wasn’t good.

                          3. 15

                            the proxy will regularly fetch Go packages from their source repository to check for updates – independent of any user requests, such as running go get. These requests take the form of a complete git clone of the source repository, which is the most expensive kind of request for git.sr.ht to service.

                            If only there were a cheap way to keep their cached Git repo in sync with the original one, without having to download the whole thing!
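
                            (Something like the following, say, which transfers only what the mirror doesn’t already have:)

                            $ git fetch origin        # incremental: downloads only new objects
                            $ git ls-remote origin    # or merely compare refs, cheaper still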

                            1. 15

                              Why was ddevault banned from that github issue? Who at Google made the decision to do that, assuming this is a use of Github’s normal moderation tools by the people at Google in charge of the golang/go repo, rather than something else going on?

                              1. 11

                                Situations like this make me appreciate lobsters’ transparent moderation policy.

                                1. 9

                                  Link for the lazy

                                  “Banned: 1 year ago by pushcx: Please go be loudly disappointed in the entire world (and promote sourcehut) somewhere else.”

                                  1. 19

                                    Drew says he’s been taking active steps to address problems with his communication through the years

                                    https://drewdevault.com/2022/05/30/bleh.html

                                    That said, the internet has a long memory, and the image of “Drew the asshole who was banned for being an asshole” will take a long time to wash away.

                                    1. 15

                                      Good. My experiences with ddevault (some personal, some observed) have made me feel he’s perfectly described by The Dude here: https://www.youtube.com/watch?v=C6BYzLIqKB8

                                      He does seem to have gotten better, and I applaud him for it, but he’s also not quite off my personal hook yet. I will happily use sourcehut, and pay for it, because it’s an excellent service that does something needed in the world. Am I ready to not automatically expect any ddevault post to be overly-abusive to the point of irrelevance? Mmmmm, not yet. Give it a few more years.

                                      1. 11

                                        I have a lot of respect for Drew. He’s a very opinionated curmudgeon who has strong clear ideas about what’s good software and what’s bad software. And that shows in his projects, which are generally very good.

                                        But I have also had the misfortune of interacting with him (through my involvement in the Sway project, specifically my fork of swaylock). And it’s not good. His insistence that my software is garbage (because it implements a heavy optional effect on the CPU, because I didn’t rewrite the whole thing to use OpenGL just for one tiny feature…) in a particularly nasty argument is probably one of the reasons why I’m not interacting with the Sway world very much anymore.

                                        The situation at hand seems like one where Drew is rightfully annoyed that Google’s software is treating git hosts poorly, but where he refuses to accept a special-case exception from Google when offered, for reasons I don’t understand.

                                        1. 8

                                          for reasons I don’t understand.

                                          The reasons are explained here: https://news.ycombinator.com/item?id=34313802.

                                          1. 8

                                            Alright, I think I understand. He thinks the “solution” offered by Google is a bad one, and he would rather block their crawler altogether and bring more attention to the issue than accept a solution which only benefits those hosting sites who can contact Google directly. Honestly, that’s not unreasonable. I think maybe it would’ve played even better if this reasoning was detailed in the blog post. As it stands, the post leaves unanswered the huge question of, “why didn’t you accept the solution which Google offered which would’ve reduced load at a lower cost for Go users?”.

                                            (This assumes that the “what does disabling the cache refresh imply?” part is irrelevant. It’s an empirical question which could have been answered pretty easily if there was interest.)

                                            1. 2

                                              Not sure about “answered pretty easily”: https://news.ycombinator.com/item?id=34310674

                                              As it stands, the post leaves unanswered the huge question of

                                              Yup, I feel like hunting HN for ddevault comments did change my understanding of the situation substantially (not implying that that was a good use of my time…)

                                              Honestly, that’s not unreasonable.

                                              Yup yup yup. I actually did something similar with rust-analyzer: one of the early and, I estimate retrospectively, great design decisions was to unconventionally crash on invalid input, instead of returning an error. This was technically wrong (external input causes a panic) and also short-term user-hostile (the IDE fully crashes instead of gracefully degrading if some bit of syntax highlighting somewhere is wrong), but, long term, it allowed us to suss out a lot of bugs in our server and various clients, and I think ultimately led to a better ecosystem. We did put in non-crashing workarounds once the thing became close to being officially recommended, though.

                                2. 4

                                  I couldn’t find anything about it but his own story, so it seems to check out. It’s worth noting that he is, or at least was, banned from Lobsters too. I don’t know him, nor the people administrating these cases, at all, so I won’t make any guesses as to why, but there seems to be a trend.

                                3. 12

                                  The “Recommendations for the Go team” all seem very reasonable asks that would lead to a better service.

                                  1. 11

                                    I think it’s worth quoting @rsc’s response from the orange site:

                                    The Go team has been making progress toward a complete fix to this problem.

                                    Go 1.19 added “go mod download -reuse”, which lets it be told about the previous download result including the Git commit refs involved and their hashes. If the relevant parts of the server’s advertised ref list is unchanged since the previous download, then the refresh will do nothing more than the ref list, which is very cheap.

                                    The proxy.golang.org service has not yet been updated to use -reuse, but it is on our list of planned work for this year.

                                    On the one hand Sourcehut claims this is a big problem for them, but on the other hand Sourcehut also has told us they don’t want us to put in a special case to disable background refreshes (see the comment thread elsewhere on this page [1]).

                                    The offer to disable background refreshes until a more complete fix can be deployed still stands, both to Sourcehut and to anyone else who is bothered by the current load. Feel free to post an issue at https://go.dev/issue/new or email me at rsc@golang.org if you would like to opt your server out of background refreshes.

                                    [1] https://news.ycombinator.com/item?id=34311621

                                    (@zeebo mentioned this below, but that’s hidden a bit in a discussion and it’s only a link, not a quote.)
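
                                    For the curious, the -reuse flow quoted above looks roughly like this (Go 1.19+; a sketch):

                                    $ go mod download -json > old.json         # record origin refs and hashes
                                    $ go mod download -reuse=old.json -json    # later: near no-op if the advertised refs are unchanged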

                                    1. 20

                                      To be honest, I was worried this was another classic Drew Devault moment where he rages against the dying of the light.

                                      I’m happy to say I was totally wrong :)

                                      This is an eminently reasonable step to take, especially since he’s carefully documented his attempts at working with GOOG on this.

                                      I hope they comply with his requests. They seem eminently reasonable from where I sit.

                                      If they don’t however, IMO it should be a cautionary note to anyone who cares about not giving MegaCorps undue control over the ecosystems and tools we build with.

                                      1. 7

                                        100 per hour

                                        I wonder if throttling the requests to 1 per hour per module would be a compromise that could work for sourcehut.

                                        1. 12

                                          Allowing one and then returning 429? Yes, sounds reasonable as well.

                                          1. 12

                                            With the apparent lack of coordination between Google’s fetchers, my guess is that this would have the same effect as returning 429 for all requests.

                                            Based on the original post, we can only imagine there’s no coordination between nodes when fetching, but there’s almost certainly some coordination between nodes when deciding what is the most recent version of the cached content. If one node is reporting a newer version, and dozens/hundreds of others are not, the consensus is likely to side with the majority rather than the aberration.

                                            But it would still be interesting to test your theory out.

                                          2. 6

                                            Kinda hoping to see others follow suit until there’s a proper solution for this, from Google’s side.

                                            Not that they care.

                                            1. 7

                                              I wasn’t even aware they (Google) had a proxy service, as I am not a Go user. However, that is a neat way for them to actually build a database of available packages, with caching of course.

                                              Count on the Go team to have something which follows the UNIX philosophy instead of some clunky service. That’s really neat!

                                              1. 40

                                                It’s neat to do full git clones of every git repository hundreds of times per hour across a ton of servers instead of a simple HTTP service? Really?

                                                It’s the pinnacle of the anti-efficient, antisocial, “just throw more money at hardware”, anti-elegant solutions I’ve seen.

                                                1. 5

                                                  Perhaps I misunderstood what I read further on, then…

                                                  I meant I found the Google Proxy cool

                                                  1. 3

                                                    It sounds like a cool idea, but badly implemented.

                                                2. 15

                                                  Perhaps we should find names for this pattern, so that we stop thinking of it as a good thing. I see the same pattern that you do: Eve convinces people to use their proxy, and gathers metadata about the network without active participation. We don’t see how Google is using this data, and it’s clear that they will not allow cleanly opting out of the proxy feature without disrupting the ecosystem.

                                                  1. 12

                                                    You can personally opt out of the proxy system by setting the GOPROXY=direct environment variable.
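
                                                    Note that a full opt-out also means turning off the checksum database, which is queried separately (as far as I know):

                                                    $ export GOPROXY=direct
                                                    $ export GOSUMDB=off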

                                                    1. 20

                                                      The fact you need to do this, and I bet 99% of everyone doesn’t even realize it’s there or why it might be desirable, is a symptom of how bad this pattern is.

                                                      1. 2

                                                        Features in other languages and ecosystems? Awesome! The user has control! Features in the Go toolchain and ecosystem? Clearly a symptom of how bad the problem is.

                                                        1. 16

                                                          Most other languages and ecosystems aren’t backed by companies that literally exist to harvest peoples’ information.

                                                          1. 6

                                                            “Features” is too vague to be useful in this conversation. If GCC has a feature to let you statically analyse your code, that can be good, even if it’s bad that Go has an enabled-by-default feature which proxies all traffic through Google servers. If Rust added a feature to make your head explode when it detects a typo, that would also be a bad feature.

                                                            1. 3

                                                              Fair enough. It’s fine to have a more nuanced conversation about the merits of the feature/architecture.

                                                              I think the proxy architecture is superior to centralized architectures with respect to being able to opt out of centralized collection. Consider that in Rust (and every other package manager I can think of off the top of my head) the default is to get all of your packages through their centralized system, and the way to opt out is to run a copy of the system yourself which is much more difficult than setting an environment variable (you almost certainly have to set one either way to tell it which server to connect to) and still doesn’t allow you to fully avoid it (you must download the package at least once).

                                                              You may have rational or irrational fears of Google compared to other faceless corporations, but that’s not an indictment of the feature or architecture. Additionally, the privacy policy for the proxy even beats crates.io in some ways (for example, crates.io will retain logs for 1 year versus 30 days).

                                                              1. 3

                                                                Consider that in Rust (and every other package manager I can think of off the top of my head)

                                                                Another exception would be deno: https://deno.land/manual@v1.29.2/basics/modules#remote-import

                                                        2. 0

                                                          More evidence that the issue people are upset about here is psychological and has almost nothing to do with the facts at hand.

                                                          1. 9

                                                            It should be opt-in, not opt-out.

                                                            1. 3

                                                              From Google’s perspective it’s reasonable to use opt-out here; otherwise nobody would have configured GOPRIVATE, GONOPROXY, or GONOSUMDB, making the sumdb and package index pretty pointless. However, from a user perspective I feel that opt-in would have been the friendlier option.

                                                        3. 5

                                                          Not everything is that black and white. While I don’t think Google would pass on using any metadata they can get their hands on, there are also benefits for the community. They list some here in How Go Mitigates Supply Chain Attacks

                                                          Having witnessed several drama stories in the Node and Python package ecosystem, I think I prefer Google’s approach for now.

                                                          1. 4

                                                            Maybe packages were a mistake. Maybe there are better ways to share software.

                                                            1. 3

                                                              Sounds compelling. Do you have any suggestions?

                                                              1. 5

                                                                I don’t have a complete concrete proposal. I am currently hacking on a tacit expression-oriented language which is amenable to content-addressing and module-free code reuse; you could check out Unison if you want to try something which exists today and is similar to my vision.

                                                                I’ve tried to seriously discuss this before, e.g. at this question, but it seems that it is beyond the Overton window.

                                                          2. 2

                                                            How is this situation any worse than a registry based system like npm or cargo?

                                                        4. 2

                                                          Wait, I don’t get it. So the Go package manager allows depending on HEAD? I mean, if it only allowed tags, then I don’t see why it should clone N times a day.

                                                          1. -4

                                                            Don’t feed the trolls.

                                                            1. 1

                                                              Was this meant as a reply to another comment?