This is one of the major downsides of Go using Git repos as its “source of truth”. Yeah, it’s super convenient, but everything in Git is mutable with enough determination. While best practices suggest tags will only ever point to a specific commit, you can’t guarantee that based on how Git works.
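To illustrate how little a tag guarantees, moving one is a one-liner (the tag name and target here are hypothetical):

git tag -f v1.2.3 some-other-commit   # force-move an existing tag to a different commit
git push -f origin v1.2.3             # and overwrite it upstream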
Basically, while it is vastly better than it used to be, Go’s dependency management story is still not great. And I’m not smart enough to figure out how to fix it - the current interface is pretty well established and I don’t see how anyone could build a system with the same interface that functions better.
Nix and Gentoo/Portage use git repos, along with refs and hashes, and this sort of mutation wouldn’t be possible under those build systems. The solution is to pin precise versions with a lockfile.
I agree. The go.sum file tries to do this, but is obviously not precise enough. Between this and the discussion elsewhere in this thread of how the Go Proxy works, it is incredibly easy to break builds.
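For context, each go.sum entry pins a module version with a hash of its content and a hash of its go.mod, roughly in this shape (the path and hashes here are made up):

example.com/some/module v1.2.3 h1:BASE64HASHOFMODULECONTENT=
example.com/some/module v1.2.3/go.mod h1:BASE64HASHOFGOMODFILE=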
Linking https://mywiki.wooledge.org/BashPitfalls: beware pitfalls such as treating filenames as lines (or, more generally, parsing the output of ls, which can differ between implementations).
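The very first pitfall on that page is the classic example (the filenames and command here are placeholders):

# fragile: word-splits and globs whatever ls prints
for f in $(ls *.mp3); do
    some_command "$f"
done

# robust: let the shell expand the glob itself
for f in ./*.mp3; do
    some_command "$f"
done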
As an aside, I like less -K because ^C will not just cause it to exit – it will specifically exit non-zero (as opposed to q which exits zero). Useful when I want to check a file before proceeding with a script. For example, I have a short wrapper script to build PKGBUILDs from the AUR that does (roughly):
less -K PKGBUILD || die 'Aborted after reviewing PKGBUILD.'
makepkg -srci
This way while reviewing the PKGBUILD, I can q if it looks good, ^C if I see something wrong with it, and/or v to drop into an editor, write and quit, then proceed with q again.

Oh that’s super nifty - I didn’t know about the -K flag. I should probably read the manual for my pager…
While this is a neat project, the copy on their homepage rubs me wrong.
Self-hosting your mail server has been notoriously difficult. Not anymore! Stalwart is a scalable, secure and robust open-source mail server software designed for the 21st century.
I ran my own mailserver for about a year, using Mail-in-a-Box, and the issues and difficulties I ran into while running the server had basically zero association with scaling and security (at least of the server itself). All of my difficulty stemmed from keeping my IP(s) off of blacklists. I had a roughly three-month stint where I had to keep requesting my IP be removed from Microsoft’s blacklist because my IP was apparently close enough to other IPs used by spammers, and that convinced me to relegate my email to a provider (FastMail) instead of self-hosting.
But, I’m also betting this software isn’t really intended for users like me wanting to host email for a handful of users.
this is def mocking people who find mastodon hard to understand & im not sure how i feel about it.
on one hand, xe is right, e-mail/mastodon are not simple or intuitive.
on the other hand, people were expecting a global twitter-like feed and landed in a world a lot more like a bunch of mailing lists, and are rightly confused
i have sympathy for the people who were misled into believing that mastodon is a suitable twitter replacement, but i don’t think they should be convinced. what they desire and what mastodon is simply don’t align.
i have sympathy for the people who were misled into believing that mastodon is a suitable twitter replacement
I came to Mastodon looking for a Twitter replacement and I am very happy with it, I don’t feel misled at all. It’s pretty close to what I always wanted Twitter to be. Your outcome will greatly depend on what you are looking for in such a service. j3s’s comment sums it up well.
I think the biggest thing is that Mastodon isn’t what Twitter is, it’s what Twitter was. There are some people who don’t remember the early days of Twitter that are expecting Mastodon to be Twitter as it is now, and that is the source of the dissonance.
You might be right about the majority of twitter users but many people in my social bubble did everything to avoid the global algorithmic feed and worked with TweetDeck or lists to get a linear timeline.
In your opinion, what are the differences between a “global Twitter feed” and this?

in my opinion, the primary differences are that mastodon has:
no single global public firehose of posts
no algorithmic suggestions based on that firehose (who to follow, what to read)
very few famous people / companies
mastodon is much more suited to building smaller, tighter-knit communities. once i started approaching mastodon that way, it clicked with me instantly. i found a small server that i love & relish spending my time on.
twitter always felt like a perpetual adrenaline machine from which there was no escape, and i found myself growing a lot of resentment there. the difference is night & day
for people who like being spoonfed content, following famous people/companies, and shitposting with their friends in public, mastodon is no substitute for twitter - not even close.
Twitter doesn’t have a single firehose feed either though? Not for many years. There is the “trending” stuff if you use the web app, which isn’t very useful but I guess does suggest a small slice of posts from outside your feed.
A global Twitter-like feed doesn’t have petty bickering between server admins, poor search functionality, missing Twitter features (notably, quote retweeting (implemented in some frontends nobody uses for some reason)), and annoying UX (if I open someone’s post on their server, I have to copy their post’s URL to my server to access and retweet it).
I have used neither, but my partner was using Twitter and has been trying Mastodon. The main difference is discoverability. People, mostly, don’t use Twitter to talk to people they know, they use it to consume posts from people with particular interests. With Mastodon, if you find a server with a hub of people that are interested in something specific, then you can find a load of them but then you have to look at the people that they connect to on other servers and explicitly follow them. In contrast, Twitter will actively suggest people based on a profile that it builds of your interests (which is also how it puts people in echo chambers full of Nazis, so this isn’t a universally positive thing).
Mastodon at the moment feels like Yahoo! back in the ‘90s: it’s a fairly small curated list of topics where a lot of interesting things were simply not indexed. Twitter feels more like Google in the late ‘90s: it’s a streamlined interface that will find a much broader set of things, with a lower signal to noise ratio but a lot more signal.
I’m always uneasy when reading articles like “SQL is overengineered”, “git is complicated”, “(some other core technology) is hard”.
Especially with Prometheus queries: I know I’m repeating myself, but I think that PromQL, like SQL, Git, IP, PKCS, … is part of the software engineer’s toolbox. There should be no corner cutting, IMHO. The technology should be learned and mastered by anybody who wants to qualify as a software “craftsperson.” I’m more and more saddened by the lowering of the standards of my profession… But I might just have become an old man… Maybe you shouldn’t listen to me.
I’m fine with learning difficult technologies, but PromQL just seems poorly designed. Every time I touch it and try to do something well within the purview of what a time series database ought to be able to do, it seems there isn’t a good way to express it in PromQL—I’ll ask the PromQL gurus in my organization and they’ll mull it over for a few hours, trying different things, and ultimately conclude that hacky workarounds are the best available option. Unfortunately it’s been a couple of years since I dealt with it and I don’t remember the details, but PromQL always struck me as uniquely bad, even worse than git.
Similarly, the idea that software craftsmen need to settle for abysmal tools—even if they’re best in class today—makes me sad. What’s the point of software craftsmanship if not making things better?
Every time I touch [Prometheus] and try to do something well within the purview of what a time series database ought to be able to do…
One big conceptual thing about Prometheus is that it isn’t really a time series database. It’s a tool for ingesting and querying real-time telemetry data from a fleet of services. It uses a (bespoke and very narrowly scoped) time series database under the hood, yes — edit: and PromQL has many similarities with TSDB query languages — but these are implementation details.
If you think of Prometheus as a general-purpose TSDB then you’re always going to end up pretty frustrated.
Could you expand on that more? I’m curious what features/aspects of a general TSDB you’re referring to Prometheus lacking. (This is a curiosity coming from someone with no experience with other TSDBs)
It’s not that Prometheus lacks any particular TSDB feature, because Prometheus isn’t a (general-purpose) TSDB. Prometheus is a system for ingesting and querying real-time operational telemetry from a fleet of [production] services. That’s a much narrower use case, at a higher level of abstraction than a TSDB. PromQL reflects that design intent.
I mean, I’m using it for telemetry data specifically. My bit about “ordinary time series queries” was mostly intended to mean I’m not doing weird high-cardinality shit or anything Prom shouldn’t reasonably be able to handle. I’m not doing general purpose TS stuff.
Gotcha. I’d be curious to hear a few examples of what you mean, just to better understand where you’re coming from. Personally, I’m also (sometimes) frustrated by my inability to express a concept in PromQL. In particular, I feel like joining different time series on common labels should be easier than it is. But it’s not (yet) gotten to the point that I consider PromQL to be poorly designed.
Yeah, unfortunately it’s been a while and I’ve forgotten all of the good examples. :/ Poorly designed feels harsh, but suffice it to say I don’t feel like it’s clicked and it seems like it’s a lot more complicated than it should be.
I’ve wondered about this as well – how much of the difficulty has to do with a) working with time series b) PromQL syntax c) not knowing what metrics would actually be helpful for answering a given situation d) statistics are hard if you’re not familiar or e) a combination of the above.
I’m curious if folks that have used something like TimescaleDB, which I believe uses a more SQL-flavored query syntax, have had a very different experience.
In my experience, it’s been a combination of everything you’ve listed, with the addition of (at least my) teams not always being good about instrumenting our applications beyond the typical RED metrics.
I can’t speak for TimescaleDB, but my team uses AWS Timestream for some of our data and it’s pretty similar as far as I can tell. Timestream’s more SQL-like syntax makes it both easier and harder to write queries, I’ve found. On the one hand, it’s great because I grok SQL and can write queries pretty quickly, but on the other hand I can start expecting it to behave like a relational database if I’m not careful. I’d almost rather just use PromQL or something like it to create that mental separation of behaviors.
I’m more and more saddened by the lowering of the standards of my profession
I see the reverse. Being willing to accept poor-quality UIs is a sign of low standards in a profession. Few other skilled trades or professions [1] contain people who use poorly designed tools and regard using them as a matter of pride. Sometimes you have to put up with a poorly designed tool because there isn’t a better alternative, but that doesn’t mean that you should accept it: you should point out its flaws and encourage improvement. Even very simple tools have improved a lot over the last few decades. If I compare a modern hammer to the one my father had when I was a child, for example, mine is better in several obvious ways:
The grip is contoured to fit my hand better.
The head and handle are now a single piece of metal so you don’t risk the head flying off when the wood ages (as happened with one of his).
The nail remover is a better shape and actually works.
If carpenters had had your attitude then this wouldn’t have happened: a mediocre hammer is a technology that should be learned and mastered by anybody who wants to qualify as a “craftsperson”. My hammer is better than my father’s hammer in all of these ways and it was cheap because people overwhelmingly bought the better one in preference.
Some things are intrinsically hard. Understanding the underlying model behind a distributed revision control system is non-trivial. If you want to use such a tool effectively, you must acquire this understanding. This is an unavoidable part of any solution in the problem space (though you can avoid it if you just want to use the tool in the simplest way).
Other things are needlessly hard. The fact that implementation details of git leak into the UI and the UI is inconsistent between commands are both problems that are unique to git and not to the problem space.
As an industry, we have a long history of putting up with absolutely awful tools. That’s not the attitude of a skilled craft.
[1] Medicine is the only one that springs to mind and that’s largely due to regulators putting totally wrong incentives in place.
I agree with you, although I think it’s worth calling out that git has at least tried to address the glaring problems with its UI. PromQL has remained stubbornly terrible since I first encountered it, and I don’t think it’s just a question of design constraints. All the Prometheus-related things are missing what I consider to be fairly basic quality-of-life improvements (like allowing you to name a subexpression instead of repeating it 3 times).
Maybe PromQL also has limitations derived from its limited scope, but frankly I think that argument is… questionable. (It doesn’t help that the author of this article hasn’t really identified the problems very effectively, IMO.) The times I’ve resorted to terrible hacks in Prometheus I don’t think I was abusing it at all. Prometheus is actively, heavily, some might say oppressively marketed at tech people to do their operational monitoring stuff. But on the surface it’s incapable of anything beyond the utterly trivial, and in the hands of an expert it’s capable of doing a handful of things that are merely fairly simple, usually with O(lots) performance because you’re running a range subquery for every point in your original query.
As an aside, I think the relentless complaining about git’s model being hard to understand is not helping in either dimension. Saying “DVCS is too hard, let’s reinvent svn” doesn’t stop DVCS being useful, but it makes people scared to learn it, and it probably makes other people think that trying to improve git is pointless, too.
This is a very interesting point. I hear you in the general case (and I’ll also say that actually working more with PromQL has given me a lot of respect for it).
I think it’s easier to make that argument for tools that people use on a daily or at least very regular basis. Depending on the stage of company you’re at, to what extent your job involves routinely investigating incidents, etc, PromQL may be something you reach for more or less frequently. It’s quite a different paradigm than a lot of other programming tools, so it makes sense to me that engineers who are new to it or don’t use it frequently would have a hard time. Also, speaking as someone who learned it pretty recently, the materials for learning it and trying to get to a deeper level of understanding of what you can and might want to do with it are…sparse.
I think you nailed it - in many cases you don’t touch Prometheus until you’re investigating some issue and that’s often when it’s urgent, and doing so using an unfamiliar query language is a recipe for pain. Of course, you could set aside some time to learn it, but if a lot of time passes until you need it again, those skills will have faded again.
git is hard to learn compared to some of its competitors but has become ubiquitous enough, and once you start using it daily you will learn it properly in no time. Learning additional stuff about it becomes easier too once you have a good foundation and it will stick around better, as well. For SQL I’d argue the same - at uni I almost flunked my SQL course due to simply not grokking it, but I’ve worked with it so much that I’m currently one of the company’s SQL experts.
I’ve been away from the Go world for a few years now. I was excited when go fuzz came out; I missed the introduction of go mod and generics. I have confidence that they are well-designed, but I find the documentation… impenetrable.
The article says go mod init is enough. Not sure what it does, but fine. We do loads of $TOOLCHAIN_NAME init these days.
The term “module” is overloaded in the industry: is my application a module, or does it have modules? If my code is supposed to be modular, then maybe the latter? Let’s check the official docs. My options are either this 79-page-long reference, or a series of blog posts from four years ago that may or may not be up-to-date.
I guess at least that tells me what the motivation for this was, but this sort of thing is why I don’t feel comfortable going back to Go, until another edition of the blue book comes out.
Yeah, I’ve read all of that Go mod stuff, so I’ve internalized it, but it’s a lot. 😂 It doesn’t help that Go modules work significantly differently than other systems so if you just assume how it should work, you’ll probably assume wrong. That said, it’s actually pretty easy to use in practice and I can’t say I’ve run into any problems with it.
A quick summary:
If you just want to run one file that just uses the standard library, you can just do go run file.go. But if you want to have a collection of files that import other collections of files, you need to explain to the Go compiler how all the files relate to each other. In Go, a “package” is one folder of .go files and a “module” is a collection of packages. (This unfortunately is the opposite of Python, where a “module” is one file and a “package” is a folder or a collection of folders.)
To start a Go project, run go mod init name and it will write out a go.mod file for you, or you could just handwrite a go.mod file. What should “name” be? Well, the name is going to be the import root for your project.

If you’re not going to publish your project online, it should probably be something like go mod init janedoe/fooproj and then in your packages, you would do import "janedoe/fooproj/utils" to import your utils folder. When you start a spam project, you can do go mod init janedoe/spam etc. It’s probably not a good idea to just do go mod init bar because if some future version of Go adds “bar” to the standard library, you would have to change your module name. OTOH, you can just fix it when that happens. If you are going to publish your project online, the name should be a URL where the project can be downloaded.

In most cases, one code repository should be one module and vice versa, although you are technically allowed to have multiple modules in a repository. If you need to work across multiple code repositories, there’s a “workspaces” mode for it. You probably won’t need it.
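For reference, the go.mod that go mod init janedoe/fooproj writes is tiny, roughly this (the go directive will reflect whatever toolchain you have installed):

module janedoe/fooproj

go 1.21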
The go.mod file contains the name of your project (module) and the names and version numbers of any third party packages you want to import. There’s a separate go.sum file that contains checksums for the modules you import. To add an import to your project, you can just add the import to a .go file like import "rsc.io/sampler" and then run go mod tidy and Go will download the project and add it to go.mod for you along with the checksum in go.sum. Alternatively, you can run go get URL@v1.2.3 and the import will be downloaded and added to go.mod as a pending import. Be sure to run go mod tidy again after you use the import in your project, so it’s moved to the list of used imports.
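So the day-to-day workflow is roughly (the version number here is just illustrative):

go get rsc.io/sampler@v1.3.0   # download that version and record it in go.mod and go.sum
go mod tidy                    # add anything you import but haven't fetched, drop anything unused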
Versioning is a little quirky. It just uses normal Git tags with semantic versioning, but when a project reaches v2+, the import path for a project has to include /v2 or whatever in it. The idea is that you are allowed to import github.com/whatever/project/v2 and github.com/whatever/project/v3 simultaneously, but you can’t import v2.0.1 and v2.0.3 simultaneously, so the major version number goes into the import path to be explicit about which one you mean. It’s a little ugly, but causes no problems if you don’t try to fight it. (Like a lot of Go.)
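In code that looks something like this (the paths are hypothetical, mirroring the example above); the aliases are needed because both packages would otherwise share the same name:

import (
    projv2 "github.com/whatever/project/v2"
    projv3 "github.com/whatever/project/v3"
)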
And that’s basically enough to get started. The rest you can just look up in the go tool help and Wikis as needed.
Versioning is a little quirky. It just uses normal Git tags with semantic versioning
How much is go get and the toolchain in general coupled to Git as a VCS? IIRC, you could go get from HTTP endpoints in the past. I see that, in the rsc.io/sampler example you gave, there’s this line:
Go also supports Mercurial, SVN, Bazaar, and Fossil. There’s some process by which the VCS calls can be replaced with HTTP calls for efficiency. See https://go.dev/ref/mod#module-proxy for details. Trying to understand all the specifics is how you end up with impenetrable 100-page docs. Mostly you can just ignore it and assume it works somehow.
Versioning is a little quirky. It just uses normal Git tags with semantic versioning, but when a project reaches v2+, the import path for a project has to include /v2 or whatever in it. The idea is that you are allowed to import github.com/whatever/project/v2 and github.com/whatever/project/v3 simultaneously, but you can’t import v2.0.1 and v2.0.3 simultaneously, so the major version number goes into the import path to be explicit about which one you mean. It’s a little ugly, but causes no problems if you don’t try to fight it. (Like a lot of Go.)
Okay, so wait, you’re supposed to keep all sources in your repo?
If I understand the question, no. Keeping your sources in your repo is called vendoring. There’s a mode for it in the go tool, but by default your sources are kept in a user cache directory somewhere.
By git tag, I mean that to publish a library, you just add a Git tag to your library repo and then consumers can just specify that they want that version. The versions must be semver style, and when a library publishes v2.3.4, a consumer would import it with import "whatever/v2".
You don’t have to, but it’s supported and encouraged - and I’ve been so pleasantly surprised how useful an approach it is that I’ve adopted it for projects in other languages also.
go mod init just creates a file at the root of the current directory called go.mod, containing the name of your module (either pre-determined if you’re still creating projects in $GOPATH or specified when you run the command) and the Go version used to develop the module (i.e. the version of Go that is run when you use the go command to do things - this matters for the compiler mostly).
As for whether your application is a module or has modules - the answer is yes. It’s both. If you publish an application written in Go publicly that has a go.mod file, I can import it as a module into my own application.
Other than that, modules are confusing and probably one of the more painful parts of Go, for me at least.
A module is a thingy that lives in one place (probably git), that contains one or more Go packages, that has a version, and that can depend on and be depended on by other modules. That’s about all there is to it. You don’t have to do all that much to interact with them, particularly if you’re writing an app — just go mod init when you start (just like this thing says), and then whatever you go get while inside your app’s directory structure will be local to your app, and recorded in your go.mod so that you’ll get the same set of deps if you build it on another machine.
The article ends with a pitch to use the author’s product Nango, which advertises support for OAuth2 flows for 90+ APIs, justifying the existence of the product.
Another result is that I see many people invent their own authentication flows with JWT and refresh tokens from scratch, even though OAuth2 would be a good fit.
I think a custom flow like the one mentioned here is a better use of my time than trying to figure out how to implement OAuth2. And in many cases, developers using the API would probably agree. Debugging a client is painful enough with OAuth2, in part because many clients try to abstract over the messy details (because there are so many such details).
So why would someone implement OAuth2 over a custom thing? The API itself will be custom too, so what’s the big deal in using a standardized authentication flow?
I think what’s missing is good documentation on how to cover the common cases with OAuth2.

There are so few good ones that you’re left piecing together many resources and creating a mess yourself.
Isn’t that the core argument of all of this? The real issue with OAuth2 is that you never know which flavor (custom extensions) you’re getting. Not every server supports the discovery endpoints, some servers don’t fully implement the spec, some only support the spec if Mercury is in retrograde and Saturn is ascendent, &c. That’s where the pain comes in as a client, all those messy little details. The core is fine, but you start loading it down with optional boondoggles and suddenly the paved path becomes a maze whose branches have varying levels of unkemptness.
So instead of going with an OAuth2 provider (assuming you don’t “get it for free” like if your company uses Okta for SSO), you decide to write your own auth server. You could build a spec-compliant OAuth2 server (seriously, read the RFCs, it’s not terrible to do), or you could whip up a custom thing that issues JWTs that contain exactly the data your stuff needs to do the actual money making. Building the OAuth2 server only makes sense when there’s a strong business need for it, you know external parties are going to need to authenticate in the exact same way as internal parties, and you care about being able to point to a “standard” spec to save on documentation.
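For a sense of scale, the “custom thing that issues JWTs” can be very small. A minimal sketch, assuming the github.com/golang-jwt/jwt/v5 library and made-up claim names and key handling:

import (
    "time"

    "github.com/golang-jwt/jwt/v5"
)

func issueToken(userID string, key []byte) (string, error) {
    // put exactly the claims your services need, nothing more
    claims := jwt.MapClaims{
        "sub":  userID,
        "role": "member",
        "exp":  time.Now().Add(time.Hour).Unix(),
    }
    token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
    return token.SignedString(key)
}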
This is all about laptops. I understand a lot of people really need a portable computer, but I think it is worth considering what it means to ask for a computer that you can take everywhere with you and that still lasts for 50 years. The desktop hardware I currently have will probably last decades if looked after; some minor parts might fail and need to be replaced, but the big expensive components should last a good long time. It also does not suffer from many of the drawbacks mentioned here.
A laptop has to handle impacts, being dropped, being sat on, getting wet, getting hot, getting cold, in 50 years of constant use all of these will happen. Don’t get me wrong I would love to see a ruggedised laptop on the market that could last 50 years but I probably wouldn’t rush out to buy it. The extra cost and weight would probably be significant, and I would want it to be constructed according to the principles of ‘right to repair’ and upgradeable in a modular way which is a whole other layer of problems.
In short, while I agree with pretty much everything in the article, it sounds like a description of a desktop pc running open source software, written in a parallel universe where no such thing exists.
Man, remember netbooks? One of those, a wireless hotspot and a VPN/tunnel of some sort to a home desktop used to be my dream computing setup. I’d get the unbridled compute of a desktop-class processor with a super-portable method of access.
Granted, this would not have been great from the perspective of screen real estate, but I’m kind of sad there don’t seem to be any devices like that any more.
I haven’t watched the video yet, but the answer is fmt.Errorf("custom message goes here: %w", err). The %w allows the error to be unwrapped using errors.Unwrap, but you also get to add your own context if you want/need it.
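A minimal sketch of that pattern (loadConfig and the message are made up; assumes the errors, fmt, and io/fs imports):

// wrap: add context while keeping the original error in the chain
if err := loadConfig(path); err != nil {
    return fmt.Errorf("loading config %q: %w", path, err)
}

// later, callers can still match the underlying error despite the added context
if errors.Is(err, fs.ErrNotExist) {
    // handle the missing-file case specifically
}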
in small systems, this is definitely all you need; larger, more complex applications benefit more from structured errors and structured logging.
While I understand that any private entity can choose its customers and change its terms at any time, I think more thought should be put into these kinds of decisions…
While I completely agree that any “threatening or harassing” or any other kind of illegal activity or content should be dealt with, I don’t think blockchain and crypto-currency related source code falls under any of these categories. Sure, many blockchain and crypto-currency “projects” might actually be financial Ponzi schemes and detrimental to both society and the environment, but banning all such related projects is quite an overreaction…
For example, on an unrelated project I’ve searched for some technical solutions and found two byproducts of crypto-currency projects that are extremely original and useful even outside the crypto-currency context: https://github.com/BlockchainCommons/LifeHash and Bech32 encoding. Would the code for such projects be banned from SourceHut? (Perhaps not if hosted individually outside a blockchain or crypto-currency repository, but who knows; their main purpose is for crypto-currencies.)
Moreover, it creates a precedent… If the owners of SourceHut decide tomorrow that they don’t like, say, AI-based image generation projects, will they ban those next? Or perhaps (open-source) projects that display ads when run? Where does the banning stop once it gets started?
While I don’t use (pay for) SourceHut myself (thus I perhaps shouldn’t have an opinion about these changes) I did keep a close eye on SourceHut developments as an alternative to GitHub, but this change in their terms made me think twice about possibly moving there…
As said in the beginning, it’s their company thus it’s their decision…
The precedent to ban projects already exists across providers. They generally have terms of service that excludes certain kinds of content. Take the child abuse example from the post.
This isn’t as slippery of a slope as you make it seem, and as you already pointed out they can choose their customers and their terms. It just sounds like this choice is one you disagree with.
Look what happened when the media industry got mad at youtube-dl for “assisting piracy” or whatever–when it covers so many use cases. Meanwhile, piracy is often the best archival system we have for digital media as those corps let it rot.
I am curious what the thought process was. What were the highlighted advantages and disadvantages of such a decision (both for the SourceHut business and for the development community at large)? How much good would banning any crypto-currency and block-chain related project achieve? (Are there many such projects hosted on SourceHut?)
The only reasoning behind this decision is given in the beginning of the article:
These domains are strongly associated with fraudulent activities and high-risk investments which take advantage of people who are suffering from economic hardship and growing global wealth inequality. Few to no legitimate use-cases for this technology have been found; instead it is mostly used for fraudulent “get rich quick” schemes and to facilitate criminal activity, such as ransomware, illicit trade, and sanctions evasion. These projects often encourage large-scale energy waste and electronics waste, which contributes to the declining health of Earth’s environment. The presence of these projects on SourceHut exposes new victims to these scams and is harmful to the reputation of SourceHut and its community.
Not to mention the following suggestion:
Projects which seek out cryptocurrency donations are strongly discouraged from doing so, but will not be affected by this change.
Getting back to the listed reasons:
“to facilitate criminal activity” – Tor is especially useful in such scenarios; will SourceHut ban the Tor project (or other similar onion-routing projects)? also especially useful are the many “end-to-end encrypted” chat applications; should we ban them?
“often encourage large-scale energy waste” – hell, the whole AI/ML landscape is another prime example of technologies that just waste energy for trivial reasons such as image generation; perhaps we should also ban this category?
“electronics waste” – how about the gaming industry, which pushes users to continuously upgrade their good-enough hardware; should we start banning games that require newer hardware?
“illicit trade” – anything, actually, can be used to facilitate such purposes;
“sanctions evasion” – and in the end, any open-source project that can be used by anyone in a sanctioned country allows that country (in some manner) to evade sanctions by using the labor of open-source developers in the sanctioning countries; should we start banning access from such sanctioned countries?
I don’t know anything more about SourceHut’s reasoning than you - I’ve just read the same announcement - but it seems to me that it isn’t being banned because of the uses you mentioned, it’s because those are (nearly) the only uses. So the fact that other technologies can also be used for those purposes isn’t really relevant.
We will exercise discretion when applying this rule. If you believe that your use-case for cryptocurrency or blockchain is not plagued by these social problems, you may ask for permission to host it on SourceHut, or appeal its removal, by contacting support.
I think this is more than fair and also answers your question. They are specifically mentioning the exact target of these terms: the projects which you call “‘threatening or harassing’ or any other kind of illegal activity or content”.
I could see those two projects being “unbanned” or whatever if they filed an appeal with Drew. I know in the past they (Sourcehut) had issues with crypto-related repositories using their CI service to mine, and ended up removing access to CI for non-paying members.
I pretty much agree with the decision they made, because if something is going to be negative 99% of the time, I’m just going to automate around most cases being negative while providing an out for the 1% chance it’s positive.
Regardless of how someone feels about these changes, they seem to be well implemented and alternatives readily provided through the use of standard formats. It’s nice to see these sorts of changes being communicated clearly and with plenty of time.
It’s a really nice platform. I use it exclusively for personal projects now, and I’m loving it. I haven’t done much collaboration on the platform, so I can’t say much about that, but otherwise it’s great.
I know Drew kind of built a reputation for himself, and think what you want of him, but he’s doing right by FOSS with Sourcehut, I feel.
This is really timely - I’ve been wanting to set up a Nomad cluster for my personal servers, and it’ll be nice to have this to reference alongside my own research and Hashicorp’s own documentation.
Edit: now that I’ve had time to read the full article, it’s definitely something I’ll use as a guide. Hashicorp’s documentation has been hard to sift through for actually setting all of their services up together, so it’s really helpful to see this basically say “start with Consul, then Nomad, and then Vault”.
IMO, it is a bit more nuanced than that. I love the Air for its low weight. But the 14/16” Pros have other nice benefits over the Air. Besides more performance, you absolutely need the Pro if you want to:
Hook up more than one external screen.
Use more than 24GB of RAM.
Besides that, the 14”/16” Pros have other benefits:
3 Thunderbolt ports, including one on the right-hand side, which can be handy when docking the MacBook besides a display.
Much better display.
HDMI/SD card slot.
Also, the price difference between the 8 CPU core/10 GPU core Air M2 and the baseline MacBook Pro 14” is very small (especially because many stores have discounts on the 14” now).
I’d get the Air if you prefer the lower weight and passive cooling over anything else.
Oh man, that bit on Vault Sidecar Injector hit hard. My team uses Vault (self-hosted, so you can guess where this is going), and it’s basically a glorified key-value store. We talk about using the value-adds that Vault brings, but never make progress because instead of letting one person dive deep and become an expert, we shuffle ticket assignments around and let someone learn just enough to say “yeah, this is feasible but I don’t know how to get us there”.
ip -br doesn’t work and neither does ip -c or ip -br -c.
And now I finally grasped that it would output the same as 'ip' - so it is -brief -color. Sorry, brainfart (ofc I searched before I asked that last question..) :P
Apparently I have never used either flag and didn’t notice them in the manual. – signed, someone growing up with ifconfig
I was going to post this as my most useful, because I just use it all the time.

I also have ...="cd ../.." and so on for going up more levels. Probably not useful beyond four or five levels due to how quickly you can count how deep you are in the CWD.
Edit: Just to be clear, I’m talking about me visually counting up how many levels deep I am in the directory tree. Beyond three or four, I tend to just go up that much, and then look and see where I am and maybe do it again, with my finger hovering over the ‘.’ key. I don’t have a problem rapidly tapping out the ‘.’ nine times to go up 8 levels, the difficulty (for me) is determining that I want to go up 8, vs. 7 or 9 levels.
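For reference, those aliases are just this sort of thing (a sketch following the pattern described above):

alias ...='cd ../..'
alias ....='cd ../../..'
alias .....='cd ../../../..'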
Even more fun (only works in zsh, as far as I know):
function rationalize-dot {
  if [[ $LBUFFER = *... ]]; then
    LBUFFER=${LBUFFER[1,-2]}
    LBUFFER+=/..
  else
    LBUFFER+=.
  fi
}
zle -N rationalize-dot
bindkey . rationalize-dot
You can make this even better by adding setopt auto_cd to your config, so that if you type a directory path zsh automatically changes to that directory.
Interesting. I’ve never tried to install / use those “smart” cd replacements, where you can type “cd foobar” and it looks at your recent working directories to find a “foobar” and go there.
I was thinking about a variant of your up function that does something like that, where I can type “up foo” in the current directory:
/home/username/foobar/one/two/three/four/
And so it just looks into successive parent directories for anything matching “foo”, and the first one it finds is the destination.
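A rough sketch of that idea in shell (the function name and matching rule are assumptions, not an existing tool):

up() {
    local dir=$PWD
    while [ "$dir" != / ]; do
        dir=${dir%/*}
        [ -z "$dir" ] && dir=/
        case ${dir##*/} in
            *"$1"*) cd "$dir" && return ;;
        esac
    done
    echo "up: no parent directory matching '$1'" >&2
    return 1
}

With the example above, up foo would land you in /home/username/foobar.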
Your point stands about the mistake, but just to clarify the terminology: Duration is a defined type, not an alias (alias is specific Go terminology which means it behaves exactly the same as that type). The reason this mistake compiles is because literals in Go are “untyped constants” and are automatically converted to the defined type. However, these will fail, because s and t take on the concrete type int when they’re defined:
var s int
s = 5
time.Sleep(s)
t := 5
time.Sleep(t)
Go doesn’t allow for operator overloading, which I’m kind of okay with. It tends to add complexity for (what I personally consider to be) little benefit.
On the other hand, this is the kind of case that really makes the argument for operator overloading. Having to use a bunch of alternate specific-to-the-type function implementations to do common operations gets tiresome pretty quickly.
So Go has different operators for adding floats and for adding integers? I have seen that in some languages, but it’s nevertheless quite unusual. OTOH, I can see that it reduces complexity.
I agree it is annoying. Would a ‘fix’ be to alter/improve the type inference (assuming that some_multiplier is only used for this purpose in the function) so that it prefers time.Duration to int for the type inferred in the assignment?
I’m not sure it would be an incompatible change - I think it would just make some incorrect programs correct. Even if it was incompatible, maybe go2?
While I do think Go could do more to work with underlying types instead of declared types (time.Duration is really just an int64 under the hood, a count of nanoseconds), it does make sense to me to get types to align if I want to do arithmetic with them.
My perfect world would be if you have an arbitrary variable with the same underlying type as another type, you could have them interact without typecasting. So
var multiplier int64 = 10
delay := multiplier * time.Second
would be valid Go. I get why this will probably never happen, but it would be nice.
That’s how C-based languages have worked, and it’s a disaster. No one can keep track of the conversion rules. See https://go.dev/blog/constants for a better solution.
If you define some_multiplier as a constant, it will be an untyped number, so it will work as a multiplier. Constants are deliberately exempted from the Go type system.
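For example, reusing the earlier snippet:

const some_multiplier = 10              // untyped constant
delay := some_multiplier * time.Second  // compiles: the constant takes on the type time.Duration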
I’m hoping to do a lot of digital housecleaning this week - I want to migrate my self-hosted email to Fastmail (unless there’s some compelling reason to go somewhere else). I want to finally figure out my org-mode setup. I’m also going to start rewriting my blog using Remix for learning purposes. It’s built using Hugo right now, and I’m just not a fan of how Hugo works (really I’ve just struggled to get my site to look the way I want and if I build my own I will hopefully be able to do that better).
On a personal fitness note, last Thursday I did my first toes-to-bar, and I’m going to continue practicing that.
I mean, their docs are surprisingly good. They’re all I’ve really needed. To start, I’d suggest reading up on IAM, ECS, and probably S3. From there, just read the docs on whatever you need as you realize you need it.
I find the complaints about Go sort of tedious. What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors? For some reason, the author finds the first unacceptable but the second laudable, but practically speaking, why would I care? I write Go programs by stubbing stuff to the point that I can write a test, and then the tests automatically invoke both the compiler and go vet. Whether an error is caught by the one or the other is of theoretical interest only.
Also, the premise of the article is that the compiler rejecting programs is good, but then the author complains that the compiler rejects programs that confuse uint64 with int.
In general, the article is good and informative, but the anti-Go commentary is pretty tedious. The author is actually fairly kind to JavaScript (which is good!), but doesn’t have the same sense of “these design decisions make sense for a particular niche” when it comes to Go.
What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors?
A big part of our recommendation of Rust over modern C++ for security boiled down to one simple thing: it is incredibly easy to persuade developers not to commit (or, failing that, to quickly revert) code that does not compile. It is much harder to persuade them not to commit code that static analysis tooling tells them is wrong. It’s easy for a programmer to say ‘this is a false positive, I’m just going to silence the warning’; it’s very difficult to patch the compiler to accept code that doesn’t type check.
What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors?
One is optional, the other one is in your face. It’s similar to the C situation. You have asan, ubsan, valgrind, fuzzers, libcheck, pvs and many other things which raise the quality of C code significantly when used on every compilation or even commit. Yet, if I choose a C project at random, I’d bet none of those are used. We’ll be lucky if there are any tests at all.
Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.
(According to the docs, only a subset of the vet suite is used when running “go test”, not all of the checks - the “high-confidence” subset.)
When go vet automatically runs on go test, it’s hard to call it optional. I don’t even know how to turn if off unless I dig into the documentation, and I’ve been doing Go for 12+ years now. Technically gofmt is optional too, yet it’s as pervasive as it can be in the Go ecosystem. Tooling ergonomics and conventions matter, as well as first party (go vet) vs 3rd party tooling (valgrind).
That means people who don’t have tests need to run it explicitly. I know we should have tests - but many projects don’t and that means they have to run vet explicitly and in practice they just miss out on the warnings.
Even in projects where I don’t have tests, I still run go test ./... when I want to check if the code compiles. If I used go build I would have an executable that I would need to throw away. Being lazy, I do go test instead.
Separating the vet checks from the compilation procedure exempts those checks from Go’s compatibility promise, so they could evolve over time without breaking compilation of existing code. New vet checks have been introduced in almost every Go release.
Compiler warnings are handy when you’re compiling a program on your own computer. But when you’re developing a more complex project, the compilation is more likely to happen in a remote CI environment and making sure that all the warnings are bubbled up is tedious and in practice usually overlooked. It is thus much simpler to just have separate workflows for compilation and (optional) checks. With compiler warnings you can certainly have a workflow that does -Werror; but once you treat CI to be as important as local development, the separate-workflow design is the simpler one - especially considering that most checks don’t need to perform a full compilation and is much faster that way.
Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.
I feel that the Go team cares more about enabling organizational processes, rather than encouraging individual habits. The norm for well-run Go projects is definitely to have vet checks (and likely more optional linting, like staticcheck) as part of CI, so that’s perhaps good enough (for the Go team).
All of this is quite consistent with Go’s design goal of facilitating maintenance of large codebases.
And for a language with as… let’s politely call it opinionated a stance as Go, it feels a bit odd to take the approach of “oh yeah, tons of unsafe things you shouldn’t do, oh well, up to you to figure out how to catch them and if you don’t we’ll just say it was your fault for running your project badly”.
Yeah, “similar” is doing some heavy lifting there. The scale is more like: default - included - separate - missing. But I stand by my position - Rust is more to the left the than Go and that’s a better place to be. The less friction, the more likely people will notice/fix issues.
I’ll be honest, I get this complaint about it being an extra command to run, but I haven’t ever run go vet explicitly because I use gopls. Maybe I’m in a small subset going the LSP route, but as far as I can tell gopls by default has good overlap with go vet.
But I tend to use LSPs whenever they’re available for the language I’m using. I’ve been pretty impressed with rust-analyzer too.
On the thing about maps not being goroutine safe, it would be weird for the spec to specify that maps are unsafe. Everything is unsafe except for channels, mutexes, and atomics. It’s the TL;DR at the top of the memory model: https://go.dev/ref/mem
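For example, the usual way to share a map across goroutines is to guard it explicitly (a minimal sketch, assuming the sync import):

var (
    mu     sync.Mutex
    counts = map[string]int{}
)

func inc(key string) {
    mu.Lock()
    defer mu.Unlock()
    counts[key]++ // safe: only one goroutine touches the map at a time
}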
Agreed. Whenever people complain about the Rust community being toxic, this author is who I think they’re referring to. These posts are flame bait and do a disservice to the Rust community. They’re like the tabloid news of programming, focusing on the titillating bits that inflame division.
I don’t know if I would use the word “toxic” which is very loaded, but just to complain a little more :-) this passage:
go log.Println(http.ListenAndServe("localhost:6060", nil))
…
Jeeze, I keep making so many mistakes with such a simple language, I must really be dense or something.
Let’s see… ah! We have to wrap it all in a closure, otherwise it waits for http.ListenAndServe to return, so it can then spawn log.Println on its own goroutine.
go func() {
log.Println(http.ListenAndServe("localhost:6060", nil))
}()
There are approximately 10,000 things in Rust that are subtler than this. Yes, it’s an easy mistake to make as a newcomer to Go. No, it doesn’t reflect even the slightest shortcoming in the language. It’s a very simple design: the go statement takes a function and its arguments. The arguments are evaluated in the current goroutine. Once evaluated, a new goroutine is created with the evaluated parameters passed into the function. Yes, that is slightly subtler than just evaluating the whole line in a new goroutine, but if you think about it for one second, you realize that evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.
Like, I get it, it sucks that you made this mistake when you were working in a language you don’t normally use, but there’s no need for sarcasm or negativity. This is in fact a very “simple” design, and you just made a mistake because even simple things actually need to be learned before you can do them correctly.
I agree that the article’s tone isn’t helpful. (Also, many of the things that the author finds questionable in Go can also be found in many other languages, so why pick on Go specifically?)
But could you elaborate on this?
evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.
IMO this is less surprising than what Go does. The beautiful thing about “the evaluation of the whole expression is deferred” is precisely that you don’t need to remember a more complicated arbitrary rule for deciding which subexpressions are deferred (all of them are!), and you don’t need ugly tricks like wrapping the whole expression in a closure which is then applied to the empty argument list.
Go’s design makes sense in context, though. Go’s authors are culturally C programmers. In idiomatic C code, you don’t nest function calls within a single expression. Instead, you store the results of function calls into temporary variables and only then pass those variables to the next function call. Go’s design doesn’t cause problems if you don’t nest function calls.
At least they mention go vet, so even people like me who don’t know it can arrive at similar conclusions. And they also mention that they are somewhat biased.
But I think they should just calmly state, without ceremony like “And yet there are no compiler warnings”, that this is the compiler output and this is the output of go vet.
This also seems unnecessary:
Why we need to move it into a separate package to make that happen, or why the visibility of symbols is tied to the casing of their identifiers… your guess is as good as mine.
Subjectively, this reads as unnecessarily dismissive. There are more instances similar to this, so I get why you are annoyed. It makes their often valid criticism weaker.
I think it comes as a reaction to people agreeing that golang is so simple, when in their (biased but true) experience it is full of little traps.
Somewhat related: what I also dislike is that they use loops for creating the tasks in golang, discuss a resulting problem, and then don’t use loops in rust - probably to keep the code simple.
All in all, it is a good article though and mostly not ranty. I think we are setting the bar for fairness pretty high. I mean we are talking about a language fan…
Agree. The frustrating thing here is that there are cases where Rust does something not obvious, the response is “If we look at the docs, we find the rationale: …” but when Go does something that is not obvious, “your guess is as good as mine.” Doesn’t feel like a very generous take.
The information is useful but the tone is unhelpful. The difference in what’s checked/checkable and what’s not is an important difference between these platforms – as is the level of integration of the correctness guarantees with the language definition. Although a static analysis tool for JavaScript could, theoretically, find all the bugs that rustc does, this is not really how things play out. The article demonstrates bugs which go vet can not find but which are precluded by Rust’s language definition – that is real and substantive information.
There is more to Go than just some design decisions that make sense for a particular niche. It has a peculiar, iconoclastic design. There are Go evangelists who, much more strenuously than this author and with much less foundation, criticize JavaScript, Python, Rust, &c, as not really good for anything. The author is too keen to poke fun at the Go design and philosophy; but the examples stand on their own.
This is one of the major downsides of Go using Git repos as its “source of truth”. Yeah, it’s super convenient, but everything in Git is mutable with enough determination. While best practices suggest tags will only ever point to a specific commit, you can’t guarantee that based on how Git works.
Basically, while it is vastly better than it used to be, Go’s dependency management story is still not great. And I’m not smart enough to figure out how to fix it - the current interface is pretty well established and I don’t see how anyone could build a system with the same interface that functions better.
Nix and Gentoo/Portage use git repos, along with refs and hashes, and this sort of mutation wouldn’t be possible under those build systems. The solution is to pin precise versions with a lockfile.
I agree. The
go.sum
file tries to do this, but is obviously not precise enough. This, plus the discussion of how the Go Proxy works elsewhere in this thread, it is incredibly easy to break builds.Linking https://mywiki.wooledge.org/BashPitfalls, beware pitfalls such as treating filenames as lines (or in general, parsing
ls
whose output can differ between implementations).As an aside, I like
less -K
because^C
will not just cause it to exit – it will specifically exit non-zero (as opposed toq
which exits zero). Useful when I want to check a file before proceeding with a script. For example, I have a short wrapper script to build PKGBUILDs from the AUR that does (roughly):This way while reviewing the PKGBUILD, I can
q
if it looks good,^C
if I see something wrong with it, and/orv
to drop into an editor, write and quit, then proceed withq
again.Oh that’s super nifty - I didn’t know about the
-K
flag. I should probably read the manual for my pager…While this is a neat project, the copy on their homepage rubs me wrong.
I ran my own mailserver for about a year, using Mail-in-a-Box, and the issues and difficulties I ran into while running the server had basically zero association with scaling and security (at least of the server itself). All of my difficulty stemmed from keeping my IP(s) off of blacklists. I had a roughly three-month stint where I had to keep requesting my IP be removed from Microsoft’s blacklist because my IP was apparently close enough to other IPs used by spammers, and that convinced me to relegate my email to a provider (FastMail) instead of self-hosting.
But, I’m also betting this software isn’t really intended for users like me wanting to host email for a handful of users.
this is def mocking people who find mastodon hard to understand & im not sure how i feel about it.
on one hand, xe is right, e-mail/mastodon are not simple or intuitive.
on the other hand, people were expecting a global twitter-like feed and landed in a world a lot more like a bunch of mailing lists, and are rightly confused
i have sympathy for the people who were misled into believing that mastodon is a suitable twitter replacement, but i don’t think they should be convinced. what they desire and what mastodon is simply don’t align.
I came to Mastodon looking for a Twitter replacement and I am very happy with it, I don’t feel mislead at all. It’s pretty close to what I always wanted Twitter to be. Your outcome will greatly depend on what you are looking for in such a service. j3s’s comment sums it up well.
I think the biggest thing is that Mastodon isn’t what Twitter is, it’s what Twitter was. There are some people who don’t remember the early days of Twitter that are expecting Mastodon to be Twitter as it is now, and that is the source of the dissonance.
You might be right about the majority of twitter users but many people in my social bubble did everything to avoid the global algorithmic feed and worked with TweetDeck or lists to get a linear timeline.
In your opinion, what are the differences between a “global Twitter feed” and this?
in my opinion, the primary differences are that mastodon has:
mastodon is much more suited to building smaller, tighter-knit communities. once i started approaching mastodon that way, it clicked with me instantly. i found a small server that i love & relish spending my time on.
twitter always felt like a perpetual adrenaline machine from which there was no escape, and i found myself growing a lot of resentment there. the difference is night & day
for people who like being spoonfed content, following famous people/companies, and shitposting with their friends in public, mastodon is no substitute for twitter - not even close.
Twitter doesn’t have a single firehose feed either though? Not for many years. There is the “trending” stuff if you use the web app, which isn’t very useful but I guess doesn’t suggest a small slice of posts from outside your feed.
A global Twitter-like feed doesn’t have petty bickering between server admins, poor search functionality, missing Twitter features (notably, quote retweeting (implemented in some frontends nobody uses for some reason)), and annoying UX (if I open someone’s post on their server, I have to copy their post’s URL to my server to access and retweet it).
I have used neither, but my partner was using Twitter and has been trying Mastodon. The main difference is discoverability. People, mostly, don’t use Twitter to talk to people they know, they use it to consume posts from people with particular interests. With Mastodon, if you find a server with a hub of people that are interested in something specific, then you can find a load of them but then you have to look at the people that they connect to on other servers and explicitly follow them. In contrast, Twitter will actively suggest people based on a profile that it builds of your interests (which is also how it puts people in echo chambers full of Nazis, so this isn’t a universally positive thing).
Mastodon at the moment feels like Yahoo! back in the ‘90s: it’s a fairly small curated list of topics where a lot of interesting things were simply not indexed. Twitter feels more like Google in the late ‘90s: it’s a streamlined interface that will find a much broader set of things, with a lower signal to noise ratio but a lot more signal.
I’m always uneasy when reading articles like “SQL is overengineered”, “git is complicated”, “(some other core technology) is hard”.
Especially with Prometheus queries, I know I’m repeating myself, but I think that PromQL, like SQL, Git, IP, PKCS, … is part of the software engineer’s toolbox. There should be no corner cutting, IMHO. The technology should be learned and mastered by anybody who wants to qualify as a software “craftperson.” I’m more and more saddened at the lowering of the standards of my profession… But I might just have become an old man… Maybe you shouldn’t listen to me.
I’m fine with learning difficult technologies, but PromQL just seems poorly designed. Every time I touch it and try to do something well within the purview of what a time series database ought to be able to do, it seems there isn’t a good way to express that in PromQL—I’ll ask the PromQL gurus in my organization and they’ll mull it over for a few hours, trying different things, and ultimately conclude that hacky workarounds are the best case. Unfortunately it’s been a couple of years since I dealt with it and I don’t remember the details, but PromQL always struck me as uniquely bad, even worse than git.
Similarly, the idea that software craftsmen need to settle for abysmal tools—even if they’re best in class today—makes me sad. What’s the point of software craftsmanship if not making things better?
One big conceptual thing about Prometheus is that it isn’t really a time series database. It’s a tool for ingesting and querying real-time telemetry data from a fleet of services. It uses a (bespoke and very narrowly scoped) time series database under the hood, yes — edit: and PromQL has many similarities with TSDB query languages — but these are implementation details.
If you think of Prometheus as a general-purpose TSDB then you’re always going to end up pretty frustrated.
Could you expand on that more? I’m curious what features/aspects of a general TSDB you’re referring to Prometheus lacking. (This is a curiosity coming from someone with no experience with other TSDBs)
It’s not that Prometheus lacks any particular TSDB feature, because Prometheus isn’t a (general-purpose) TSDB. Prometheus is a system for ingesting and querying real-time operational telemetry from a fleet of [production] services. That’s a much narrower use case, at a higher level of abstraction than a TSDB. PromQL reflects that design intent.
I mean, I’m using it for telemetry data specifically. My bit about “ordinary time series queries” was mostly intended to mean I’m not doing weird high-cardinality shit or anything Prom shouldn’t reasonably be able to handle. I’m not doing general purpose TS stuff.
Gotcha. I’d be curious to hear a few examples of what you mean, just to better understand where you’re coming from. Personally, I’m also (sometimes) frustrated by my inability to express a concept in PromQL. In particular, I feel like joining different time series on common labels should be easier than it is. But it’s not (yet) gotten to the point that I consider PromQL to be poorly designed.
Yeah, unfortunately it’s been a while and I’ve forgotten all of the good examples. :/ Poorly designed feels harsh, but suffice it to say I don’t feel like it’s clicked and it seems like it’s a lot more complicated than it should be.
I’ve wondered about this as well – how much of the difficulty has to do with a) working with time series, b) PromQL syntax, c) not knowing what metrics would actually be helpful for answering a given situation, d) statistics being hard if you’re not familiar, or e) a combination of the above.
I’m curious if folks that have used something like TimescaleDB, which I believe uses a more SQL-flavored query syntax, have had a very different experience.
In my experience, it’s been a combination of everything you’ve listed, with the addition of (at least my) teams not always being good about instrumenting our applications beyond the typical RED metrics.
I can’t speak for TimescaleDB, but my team uses AWS Timestream for some of our data and it’s pretty similar as far as I can tell. Timestream’s more SQL-like syntax makes it both easier and harder to write queries, I’ve found. On the one hand, it’s great because I grok SQL and can write queries pretty quickly, but on the other hand I can start expecting it to behave like a relational database if I’m not careful. I’d almost rather just use PromQL or something like it to create that mental separation of behaviors.
I see the reverse. Being willing to accept poor-quality UIs is a sign of low standards in a profession. Few other skilled trades or professions [1] have people who use poorly designed tools and regard using them as a matter of pride. Sometimes you have to put up with a poorly designed tool because there isn’t a better alternative, but that doesn’t mean that you should accept it: you should point out its flaws and encourage improvement. Even very simple tools have improved a lot over the last few decades. If I compare a modern hammer to the one my father had when I was a child, for example, mine is better in several obvious ways:
If carpenters had had your attitude then this wouldn’t have happened: a mediocre hammer is a technology that should be learned and mastered by anybody who wants to qualify as a “craftperson”. My hammer is better than my father’s hammer in all of these ways and it was cheap because people overwhelmingly bought the better one in preference.
Some things are intrinsically hard. Understanding the underlying model behind a distributed revision control system is non-trivial. If you want to use such a tool effectively, you must acquire this understanding. This is an unavoidable part of any solution in the problem space (though you can avoid it if you just want to use the tool in the simplest way).
Other things are needlessly hard. The fact that implementation details of git leak into the UI and the UI is inconsistent between commands are both problems that are unique to git and not to the problem space.
As an industry, we have a long history of putting up with absolutely awful tools. That’s not the attitude of a skilled craft.
[1] Medicine is the only one that springs to mind and that’s largely due to regulators putting totally wrong incentives in place.
I agree with you, although I think it’s worth calling out that git has at least tried to address the glaring problems with its UI. PromQL has remained stubbornly terrible since I first encountered it, and I don’t think it’s just a question of design constraints. All the Prometheus-related things are missing what I consider to be fairly basic quality-of-life improvements (like allowing you to name a subexpression instead of repeating it 3 times).
Maybe PromQL also has limitations derived from its limited scope, but frankly I think that argument is… questionable. (It doesn’t help that the author of this article hasn’t really identified the problems very effectively, IMO.) The times I’ve resorted to terrible hacks in Prometheus I don’t think I was abusing it at all. Prometheus is actively, heavily, some might say oppressively marketed at tech people to do their operational monitoring stuff. But on the surface it’s incapable of anything beyond the utterly trivial, and in the hands of an expert it’s capable of doing a handful of things that are merely fairly simple, usually with O(lots) performance because you’re running a range subquery for every point in your original query.
As an aside, I think the relentless complaining about git’s model being hard to understand is not helping in either dimension. Saying “DVCS is too hard, let’s reinvent svn” doesn’t stop DVCS being useful, but it makes people scared to learn it, and it probably makes other people think that trying to improve git is pointless, too.
This is a very interesting point. I hear you in the general case (and I’ll also say that actually working more with PromQL has given me a lot of respect for it).
I think it’s easier to make that argument for tools that people use on a daily or at least very regular basis. Depending on the stage of company you’re at, to what extent your job involves routinely investigating incidents, etc, PromQL may be something you reach for more or less frequently. It’s quite a different paradigm than a lot of other programming tools, so it makes sense to me that engineers who are new to it or don’t use it frequently would have a hard time. Also, speaking as someone who learned it pretty recently, the materials for learning it and trying to get to a deeper level of understanding of what you can and might want to do with it are…sparse.
I think you nailed it - in many cases you don’t touch Prometheus until you’re investigating some issue and that’s often when it’s urgent, and doing so using an unfamiliar query language is a recipe for pain. Of course, you could set aside some time to learn it, but if a lot of time passes until you need it again, those skills will have faded again.
git is hard to learn compared to some of its competitors but has become ubiquitous enough, and once you start using it daily you will learn it properly in no time. Learning additional stuff about it becomes easier too once you have a good foundation and it will stick around better, as well. For SQL I’d argue the same - at uni I almost flunked my SQL course due to simply not grokking it, but I’ve worked with it so much that I’m currently one of the company’s SQL experts.
I’ve been away from the Go world for a few years now. I was excited when go fuzz came out; I missed the introduction of go mod and generics. I have confidence that they are well-designed, but I find the documentation… impenetrable.
The article says go mod init is enough. Not sure what it does, but fine. We do loads of $TOOLCHAIN_NAME init these days.
The term “module” is overloaded in the industry: is my application a module, or does it have modules? If my code is supposed to be modular, then maybe the latter? Let’s check the official docs. My options are either this 79-page-long reference, or a series of blog posts from four years ago that may or may not be up-to-date.
I guess at least that tells me what the motivation for this was, but this sort of thing is why I don’t feel comfortable going back to Go, until another edition of the blue book comes out.
Yeah, I’ve read all of that Go mod stuff, so I’ve internalized it, but it’s a lot. 😂 It doesn’t help that Go modules work significantly differently than other systems so if you just assume how it should work, you’ll probably assume wrong. That said, it’s actually pretty easy to use in practice and I can’t say I’ve run into any problems with it.
A quick summary:
If you just want to run one file that just uses the standard library, you can just do go run file.go. But if you want to have a collection of files that import other collections of files, you need to explain to the Go compiler how all the files relate to each other. In Go, a “package” is one folder of .go files and a “module” is a collection of packages. (This unfortunately is the opposite of Python, where a “module” is one file and a “package” is a folder or a collection of folders.)
To start a Go project, run go mod init name and it will write out a go.mod file for you, or you could just handwrite a go.mod file. What should “name” be? Well, the name is going to be the import root for your project. If you’re not going to publish your project online, it should probably be something like go mod init janedoe/fooproj, and then in your packages, you would do import "janedoe/fooproj/utils" to import your utils folder. When you start a spam project, you can do go mod init janedoe/spam, etc. It’s probably not a good idea to just do go mod init bar, because if some future version of Go adds “bar” to the standard library, you would have to change your module name. OTOH, you can just fix it when that happens. If you are going to publish your project online, the name should be a URL where the project can be downloaded. In most cases, one code repository should be one module and vice versa, although you are technically allowed to have multiple modules in a repository. If you need to work across multiple code repositories, there’s a “workspaces” mode for it. You probably won’t need it.
The go.mod file contains the name of your project (module) and the names and version numbers of any third party packages you want to import. There’s a separate go.sum file that contains checksums for the modules you import. To add an import to your project, you can just add the import to a .go file like import "rsc.io/sampler" and then run go mod tidy, and Go will download the project and add it to go.mod for you along with the checksum in go.sum. Alternatively, you can run go get URL@v1.2.3 and the import will be downloaded and added to go.mod as a pending import. Be sure to run go mod tidy again after you use the import in your project, so it’s moved to the list of used imports.
Versioning is a little quirky. It just uses normal Git tags with semantic versioning, but when a project reaches v2+, the import path for a project has to include /v2 or whatever in it. The idea is that you are allowed to import github.com/whatever/project/v2 and github.com/whatever/project/v3 simultaneously, but you can’t import v2.0.1 and v2.0.3 simultaneously, so the major version number goes into the import path to be explicit about which one you mean. It’s a little ugly, but causes no problems if you don’t try to fight it. (Like a lot of Go.)
And that’s basically enough to get started. The rest you can just look up in the go tool help and Wikis as needed.
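To make that summary concrete, here is a minimal sketch of the layout it describes. The utils package and its Shout function are hypothetical (only the janedoe/fooproj name comes from the comment above), so this main.go won’t build until that package exists:
// Layout after running `go mod init janedoe/fooproj` at the repository root:
//
//   fooproj/
//     go.mod       declares: module janedoe/fooproj
//     main.go      package main (below)
//     utils/
//       utils.go   package utils, defining a hypothetical func Shout(s string) string
//
// main.go imports the local package by module name + folder:
package main

import (
	"fmt"

	"janedoe/fooproj/utils" // hypothetical local package
)

func main() {
	fmt.Println(utils.Shout("hello"))
}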
This is tremendously helpful, thank you!
How much is go get and the toolchain in general coupled to Git as a VCS? IIRC, you could go get from HTTP endpoints in the past. I see that, in the rsc.io/sampler example you gave, there’s this line:
…which presumably is what bridges from HTTP to git to serve the content.
But does versioning work in the absence of git?
Go also supports Mercurial, SVN, Bazaar, and Fossil. There’s some process by which the VCS calls can be replaced with HTTP calls for efficiency. See https://go.dev/ref/mod#module-proxy Trying to understand all the specifics is how you end up with impenetrable 100-page docs. Mostly you can just ignore it and assume it works somehow.
Okay, so wait, you’re supposed to keep all sources in your repo?
If I understand the question, no. Keeping your sources in your repo is called vendoring. There’s a mode for it in the go tool, but by default your sources are kept in a user cache directory somewhere.
By git tag, I mean to publish a library, you just add a Git tag to your library repo and then consumers can just specify that they want that version. The versions must be semver style, and when a library publishes v2.3.4, a consumer would import it with import "whatever/v2".
You don’t have to, but it’s supported and encouraged - and I’ve been so pleasantly surprised how useful an approach it is that I’ve adopted it for projects in other languages also.
go mod init just creates a file at the root of the current directory called go.mod, containing the name of your module (either pre-determined if you’re still creating projects in $GOPATH or specified when you run the command) and the Go version used to develop the module (i.e. the version of Go that is run when you use the go command to do things - this matters for the compiler mostly).
As for if your application is a module or has modules - the answer is yes. It’s both. If you publish an application written in Go publicly that has a go.mod file, I can import it as a module into my own application.
Other than that, modules are confusing and probably one of the more painful parts of Go, for me at least.
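For reference, the file being described is tiny. A sketch of roughly what it looks like (the module name matches the earlier example; the Go version and the dependency version are only illustrative):
module janedoe/fooproj          // set by `go mod init janedoe/fooproj`
go 1.21                         // Go version used to develop the module
require rsc.io/sampler v1.3.0   // added by `go get` / `go mod tidy`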
A module is a thingy that lives in one place (probably git), that contains one or more Go packages, that has a version, and that can depend on and be depended on by other modules. That’s about all there is to it. You don’t have to do all that much to interact with them, particularly if you’re writing an app — just go mod init when you start (just like this thing says), and then whatever you go get while inside your app’s directory structure will be local to your app, and recorded in your go.mod so that you’ll get the same set of deps if you build it on another machine.
A response article from Evert Pot: https://evertpot.com/oauth2-usability/
OAuth is an overengineered mess. Like he says:
I think a custom flow like mentioned here is a better use of my time than trying to figure out how to implement OAuth2. And in many cases, developers using the API would probably agree. Debugging a client is painful enough with OAuth2 in part because many clients try to abstract over the messy details (because there are so many such details).
So why would someone implement OAuth2 over a custom thing? The API itself will be custom too, so what’s the big deal in using a standardized authentication flow?
I think what’s missing is great documentation on how to cover the common cases with OAuth2. There are so few good ones that you’re left piecing together many resources and creating a mess yourself.
Isn’t that the core argument of all of this? The real issue with OAuth2 is that you never know which flavor (custom extensions) you’re getting. Not every server supports the discovery endpoints, some servers don’t fully implement the spec, some only support the spec if Mercury is in retrograde and Saturn is ascendent, &c. That’s where the pain comes in as a client, all those messy little details. The core is fine, but you start loading it down with optional boondoggles and suddenly the paved path becomes a maze whose branches have varying levels of unkemptness.
So instead of going with an OAuth2 provider (assuming you don’t “get it for free” like if your company uses Okta for SSO), you decide to write your own auth server. You could build a spec-compliant OAuth2 server (seriously, read the RFCs, it’s not terrible to do), or you could whip a custom thing that issues JWTs that contain exactly the data your stuff needs to do the actual money making. Building the OAuth2 server only makes sense when there’s a strong business need for it and you know there’s going to be a need for external parties to authenticate in the exact same way as internal parties and you care about being able to point to a “standard” spec to save on documentation.
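For what it’s worth, the “custom thing that issues JWTs” can be very small. A rough sketch of my own (not the commenter’s code), assuming the github.com/golang-jwt/jwt/v5 package and made-up claim names:
package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// issueToken mints a short-lived token carrying exactly the claims the API needs.
func issueToken(userID string, key []byte) (string, error) {
	claims := jwt.MapClaims{
		"sub":   userID,                           // subject
		"exp":   time.Now().Add(time.Hour).Unix(), // expiry
		"scope": "billing:read",                   // hypothetical claim your handlers check
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(key)
}

func main() {
	tok, err := issueToken("user-123", []byte("dev-only-secret"))
	if err != nil {
		panic(err)
	}
	fmt.Println(tok)
}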
It’s quite pretty, but the choice of “#” to mark headings strikes me as odd. Is it an attempt to appeal to Markdown fans?
For actual book mode and reading of long form text I’d also love to have non-proportional fonts.
That’s also the character for indicating a line of text is a heading in Org’s markup.
It is not. Headings in Org mode are indicated with *, cf https://orgmode.org/manual/Headlines.html
This is all about laptops. I understand a lot of people really need a portable computer, but I think it is worth considering what you are really asking for: a computer that you can take everywhere with you and that still lasts for 50 years. The desktop hardware I currently have will probably last decades if looked after, some minor parts might fail and need to be replaced but the big expensive components should last a good long time. It also does not suffer from many of the drawbacks mentioned here.
A laptop has to handle impacts, being dropped, being sat on, getting wet, getting hot, getting cold; in 50 years of constant use all of these will happen. Don’t get me wrong, I would love to see a ruggedised laptop on the market that could last 50 years, but I probably wouldn’t rush out to buy it. The extra cost and weight would probably be significant, and I would want it to be constructed according to the principles of ‘right to repair’ and upgradeable in a modular way, which is a whole other layer of problems.
In short, while I agree with pretty much everything in the article, it sounds like a description of a desktop pc running open source software, written in a parallel universe where no such thing exists.
Man, remember netbooks? One of those, a wireless hotspot and a VPN/tunnel of some sort to a home desktop used to be my dream computing setup. I get the unbridled compute of a desktop-class processor with a super-portable method of access.
Granted, this would not have been great from the perspective of screen real estate, but I’m kind of sad there don’t seem to be any devices like that any more.
The ones I had experience with had really crappy hardware. Awful trackpads, mushy keyboards, dingy displays. And they broke a lot.
Get an iPad and an attached keyboard, install an SSH terminal, and you’ve basically got what you’re asking for.
It’s still an option. You can still buy tiny laptops for less than £100 or connect a keyboard to your smartphone or tablet.
I haven’t watched the video yet, but the answer is fmt.Errorf("custom message goes here: %w", err). The %w allows the error to be unwrapped using errors.Unwrap, but you also get to add your own context if you want/need it.
in small systems, this is definitely all you need, larger more complex applications definitely benefit more from structured errors and structured logging.
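A small, self-contained sketch of that wrapping pattern (file path and message are made up):
package main

import (
	"errors"
	"fmt"
	"os"
)

func loadConfig(path string) error {
	if _, err := os.ReadFile(path); err != nil {
		// %w wraps err, so callers can still inspect the original error.
		return fmt.Errorf("loading config %q: %w", path, err)
	}
	return nil
}

func main() {
	err := loadConfig("/nonexistent/config.toml")
	fmt.Println(err)                            // custom context plus the original message
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true: the wrapped error is still reachable
	fmt.Println(errors.Unwrap(err))             // the underlying *fs.PathError
}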
That is one good approach. But there are reasons for others, too.
While I understand that any private entity can choose its customers and change its terms at any time, I think more thought should be put into these kinds of decisions…
While I completely agree that any “threatening or harassing” or any other kind of illegal activity or content should be dealt with, I don’t think blockchain and crypto-currency related source code falls under any of these categories. Sure, many blockchain and crypto-currency “projects” might actually be financial ponzi schemes and detrimental to both the society and the environment, but banning all such related projects is quite an overreaction…
For example, on an unrelated project I’ve searched for some technical solutions and found two outcomes of crypto-currency projects that are just extremely original and useful even outside the crypto-currency context: https://github.com/BlockchainCommons/LifeHash and Bech32 encoding. Would the code for such projects be banned from SourceHut? (Perhaps not if hosted individually outside a blockchain or crypto-currency repository, but who knows, their main purpose is for crypto-currencies.)
Moreover, it creates a precedent… If the owners of SourceHut decide tomorrow that they don’t like, say, AI-based image generation projects, will they ban those next? Or perhaps (open-source) projects that display ads when run? Where does the banning stop once it gets started?
While I don’t use (pay for) SourceHut myself (thus I perhaps shouldn’t have an opinion about these changes) I did keep a close eye on SourceHut developments as an alternative to GitHub, but this change in their terms made me think twice about possibly moving there…
As said in the beginning, it’s their company thus it’s their decision…
The precedent to ban projects already exists across providers. They generally have terms of service that excludes certain kinds of content. Take the child abuse example from the post. This isn’t as slippery of a slope as you make it seem, and as you already pointed out they can choose their customers and their terms. It just sounds like this choice is one you disagree with.
Look what happened when the media industry got mad at youtube-dl for “assisting piracy” or whatever – when it covers so many use cases. Meanwhile, piracy is often the best archival system we have for digital media as those corps let it rot.
Drew definitely had put thought into it. I participated in the chat on IRC.
I am curious what the thought process was? What were the highlighted advantages and disadvantages of such a decision (both for the SourceHut business and for the development community at large)? How much good would banning any crypto-currency and block-chain related project achieve? (Are there many such projects hosted on SourceHut?)
The only reasoning behind this decision is given in the beginning of the article:
Not to mention the following suggestion:
Getting back to the listed reasons:
I don’t know anything more about SourceHut’s reasoning than you - I’ve just read the same announcement - but it seems to me that it isn’t being banned because of the uses you mentioned, it’s because those are (nearly) the only uses. So the fact that other technologies can also be used for those purposes isn’t really relevant.
I don’t want to misrepresent Drew’s position, I think it’s best if you ask him personally by e-mail.
I think this is more than fair and also answers your question. They are specifically mentioning the exact target of these terms, and they are the projects which you call ’“threatening or harassing” or any other kind of illegal activity or content ’.
He makes exceptions, like for example my project, see: https://news.ycombinator.com/item?id=33404815
I could see those two projects being “unbanned” or whatever if they filed an appeal with Drew. I know in the past they (Sourcehut) had issues with crypto-related repositories using their CI service to mine, and ended up removing access to CI for non-paying members.
I pretty much agree with the decision they made, because if something is going to be negative 99% of the time, I’m just going to automate around most cases being negative while providing an out for the 1% chance it’s positive.
Regardless of how someone feels about these changes, they seem to be well implemented and alternatives readily provided through the use of standard formats. It’s nice to see these sorts of changes being communicated clearly and with plenty of time.
I especially like the “and if you don’t like it, here’s how you can take all your data with you when you go”
This kind of grown-up attitude & approach is alone sufficient in significantly raising my interest in the platform.
It’s a really nice platform. I use it exclusively for personal projects now, and I’m loving it. I haven’t done much collaboration on the platform, so I can’t say much about that, but otherwise it’s great.
I know Drew kind of built a reputation for himself, and think what you want of him, but he’s doing right by FOSS with Sourcehut, I feel.
This is really timely - I’ve been wanting to set up a Nomad cluster for my personal servers, and it’ll be nice to have this to reference alongside my own research and Hashicorp’s own documentation.
Edit: now that I’ve had time to read the full article, it’s definitely something I’ll use as a guide. Hashicorp’s documentation has been hard to sift through for actually setting all of their services up together, so it’s really helpful to see this basically say “start with Consul, then Nomad, and then Vault”.
Depends on what you want - performance, go for a 16” M1 MacBook Pro. Anything else, go for an M2 MacBook Air.
IMO, it is a bit more clouded than that. I love the Air for its low weight. But the 14/16” Pros have other nice benefits over the Air. You absolutely need the Pro if you need (besides more performance):
Besides that the Pro 14”/16” have other benefits:
Also, the price difference between the 8 CPU core/10 GPU core Air M2 and the baseline MacBook Pro 14” is very small (especially because many stores have discounts on the 14” now).
I’d get the Air if you prefer the lower weight and passive cooling over anything else.
Oh man, that bit on Vault Sidecar Injector hit hard. My team uses Vault (self-hosted, so you can guess where this is going), and it’s basically a glorified key-value store. We talk about using the value-adds that Vault brings, but never make progress because instead of letting one person dive deep and become an expert, we shuffle ticket assignments around and let someone learn just enough to say “yeah, this is feasible but I don’t know how to get us there”.
Not super interesting I reckon. But the ones I use the most.
I don’t really have a lot of super useful aliases. Most of the time is spent making git config aliases and vim stuff.
Is that BSD or a mac? My ip doesn’t know either flag.
That is iproute2. The goal is to have ip be brief by default because I simply do not care about all the information.
Thanks, but that’s why I was asking: ip -br doesn’t work and neither does ip -c or ip -br -c.
And now I finally grasped that it would output the same as 'ip' - so it is -brief -color. Sorry, brainfart (ofc I searched before I asked that last question..) :P
Apparently I have never used either flag and didn’t notice them in the manual. – signed, someone growing up with ifconfig
Yeah I found the documentation for these commands a bit lacking when I started utilizing them initially.
I was going to post this as my most useful, because I use it all the time.
I also have ...="cd ../.." and so on for going up more levels. Probably not useful beyond four or five levels due to how quickly you can count how deep you are in the CWD.
Edit: Just to be clear, I’m talking about me visually counting up how many levels deep I am in the directory tree. Beyond three or four, I tend to just go up that much, and then look and see where I am and maybe do it again, with my finger hovering over the ‘.’ key. I don’t have a problem rapidly tapping out the ‘.’ nine times to go up 8 levels, the difficulty (for me) is determining that I want to go up 8, vs. 7 or 9 levels.
Don’t want to keep posting it so I’ll link to the reply i made to the parent:
https://lobste.rs/s/qgqssl/what_are_most_useful_aliases_your_bashrc#c_fqu7jd
You might like to use it too!
You (and others) might be interested in the one from my post to this:
Allows you to just do up 4 to get cd ../../../..
LIFE-saver.
Even more fun (only works in zsh, as far as I know):
You can make this even better by adding setopt auto_cd to your config, so that if you type a directory path zsh automatically changes to that directory.
I tend to use https://github.com/wting/autojump for smart CDing, personally!
Interesting. I’ve never tried to install / use those “smart” cd replacements, where you can type “cd foobar” and it looks at your recent working directories to find a “foobar” and go there.
I was thinking about a variant of your up function that does something like that, where I can type “up foo” in the current directory, and it just looks into successive parent directories for anything matching “foo”, and the first one it finds is the destination.
oh man – just use Autojump https://github.com/wting/autojump. I use it on every machine i own and it’s a GODSEND.
That was what I was talking about. I’ll have to give it or maybe zoxide a try and see if I stick with it.
I really like how Go’s Duration type handles this, letting us do time.Sleep(5 * time.Second).
Unfortunately Duration is not a type, but an alias for an integer, so this mistake compiles:
Your point stands about the mistake, but just to clarify the terminology: Duration is a defined type, not an alias (alias is specific Go terminology which means it behaves exactly the same as that type). The reason this mistake compiles is because literals in Go are “untyped constants” and are automatically converted to the defined type. However, these will fail, because s and t take on the concrete type int when they’re defined:
My understanding is that Duration*Duration is also allowed?
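To illustrate the typed-versus-untyped distinction described above, a small sketch of mine (not code from the thread), using Millisecond just so the demo runs quickly:
package main

import "time"

func main() {
	// An untyped constant converts implicitly to time.Duration
	// (the same rule that makes 5 * time.Second work):
	time.Sleep(5 * time.Millisecond)

	// A variable of concrete type int does not:
	seconds := 5
	// time.Sleep(seconds * time.Millisecond) // compile error: mismatched types int and time.Duration

	// An explicit conversion is required:
	time.Sleep(time.Duration(seconds) * time.Millisecond)
}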
The thing I dislike the most about Go’s Duration type is that you can’t multiply an int by a Duration:
In the example above, the intent is somewhat clear due to the seconds variable name, but if you just want to have something like this:
You have to convert some_multiplier to time.Duration, which doesn’t make sense!
Can’t you just overload the * operator?
Go doesn’t allow for operator overloading, which I’m kind of okay with. It tends to add complexity for (what I personally consider to be) little benefit.
On the other hand, this is the kind of case that really makes the argument for operator overloading. Having to use a bunch of alternate specific-to-the-type function implementations to do common operations gets tiresome pretty quickly.
So Go has different operators for adding floats and for adding integers? I have seen that in some languages, but it’s nevertheless quite unusual. OTOH, I can see that it reduces complexity.
Go has built-in overloads for operators, but user code can’t make new ones.
It’s similar to maps (especially pre-1.18) that are generic, but user code is unable to make another type like map.
Go doesn’t have operator overloading
I agree it is annoying. Would a ‘fix’ be to alter/improve the type inference (assuming that some_multiplier is only used for this purpose in the function) so that it prefers time.Duration to int for the type inferred in the assignment?
I’m not sure it would be an incompatible change - I think it would just make some incorrect programs correct. Even if it was incompatible, maybe go2?
While I do think Go could do more to work with underlying types instead of declared types (time.Duration is really just an alias for int64, as a time.Duration is just a count of nanoseconds), it does make sense to me to get types to align if I want to do arithmetic with them.
My perfect world would be if you have an arbitrary variable with the same underlying type as another type, you could have them interact without typecasting. So
would be valid Go. I get why this will probably never happen, but it would be nice.
That’s how C-based languages have worked, and it’s a disaster. No one can keep track of the conversion rules. See https://go.dev/blog/constants for a better solution.
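A quick sketch (mine; the constant name is made up) of what that untyped-constant behavior buys you in exactly this situation:
package main

import (
	"fmt"
	"time"
)

const someMultiplier = 3 // untyped constant, not an int

func main() {
	d := someMultiplier * time.Second // fine: the constant takes on type time.Duration
	fmt.Println(d)                    // 3s

	n := 3                             // n is an int
	_ = time.Duration(n) * time.Second // a variable needs an explicit conversion
}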
If you define some_multiplier as a constant, it will be an untyped number, so it will work as a multiplier. Constants are deliberately exempted from the Go type system.
I’m hoping to do a lot of digital housecleaning this week - I want to migrate my self-hosted email to Fastmail (unless there’s some compelling reason to go somewhere else). I want to finally figure out my org-mode setup. I’m also going to start rewriting my blog using Remix for learning purposes. It’s built using Hugo right now, and I’m just not a fan of how Hugo works (really I’ve just struggled to get my site to look the way I want and if I build my own I will hopefully be able to do that better).
On a personal fitness note, last Thursday I did my first toes-to-bar, and I’m going to continue practicing that.
Fastmail is phenomenal. I’ve been a customer for ~3 years now and am happy as a clam. Plus it just keeps getting better and better.
I mean, their docs are surprisingly good. They’re all I’ve really needed. To start, I’d suggest reading up on IAM, ECS, and probably S3. From there, just read the docs on whatever you need as you realize you need it.
EDIT: Since you’ve mentioned wanting a course, they offer courses you can take at your own pace: https://explore.skillbuilder.aws/learn
I find the complaints about Go sort of tedious. What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors? For some reason, the author finds the first unacceptable but the second laudable, but practically speaking, why would I care? I write Go programs by stubbing stuff to the point that I can write a test, and then the tests automatically invoke both the compiler and go vet. Whether an error is caught by the one or the other is of theoretical interest only.
Also, the premise of the article is that the compiler rejecting programs is good, but then the author complains that the compiler rejects programs that confuse uint64 with int.
In general, the article is good and informative, but the anti-Go commentary is pretty tedious. The author is actually fairly kind to JavaScript (which is good!), but doesn’t have the same sense of “these design decisions make sense for a particular niche” when it comes to Go.
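For anyone who hasn’t used it, a tiny illustration of my own (not from the article) of the kind of mistake the compiler accepts but go vet flags:
package main

import "fmt"

func main() {
	name := "gopher"
	// Compiles fine, but `go vet` flags the %d verb being applied to a string.
	fmt.Printf("hello %d\n", name)
}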
A big part of our recommendation of Rust over modern C++ for security boiled down to one simple thing: it is incredibly easy to persuade developers to not commit (or, failing that, to quickly revert) code that does not compile. It is much harder to persuade them to not commit code where static analysis tooling tells them is wrong. It’s easy for a programmer to say ‘this is a false positive, I’m just going to silence the warning’, it’s very difficult to patch the compiler to accept code that doesn’t type check.
One is optional, the other one is in your face. It’s similar to C situation. You have asan, ubsan, valgrind, fuzzers, libcheck, pvs and many other things which raise the quality is C code significantly when used on every compilation or even commit. Yet, if I choose a C project at random, I’d bet none of those are used. We’ll be lucky if there are any tests as well.
Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.
(According to the docs only a subset of the vet suite is used when running “go test”, not all of them - “high-confidence subset”)
When go vet automatically runs on go test, it’s hard to call it optional. I don’t even know how to turn it off unless I dig into the documentation, and I’ve been doing Go for 12+ years now. Technically gofmt is optional too, yet it’s as pervasive as it can be in the Go ecosystem. Tooling ergonomics and conventions matter, as well as first party (go vet) vs 3rd party tooling (valgrind).
That means people who don’t have tests need to run it explicitly. I know we should have tests - but many projects don’t, and that means they have to run vet explicitly and in practice they just miss out on the warnings.
Even in projects where I don’t have tests, I still run go test ./... when I want to check if the code compiles. If I used go build I would have an executable that I would need to throw away. Being lazy, I do go test instead.
Separating the vet checks from the compilation procedure exempts those checks from Go’s compatibility promise, so they could evolve over time without breaking compilation of existing code. New vet checks have been introduced in almost every Go release.
Compiler warnings are handy when you’re compiling a program on your own computer. But when you’re developing a more complex project, the compilation is more likely to happen in a remote CI environment, and making sure that all the warnings are bubbled up is tedious and in practice usually overlooked. It is thus much simpler to just have separate workflows for compilation and (optional) checks. With compiler warnings you can certainly have a workflow that does -Werror; but once you treat CI as being as important as local development, the separate-workflow design is the simpler one - especially considering that most checks don’t need to perform a full compilation and are much faster that way.
I feel that the Go team cares more about enabling organizational processes than about encouraging individual habits. The norm for well-run Go projects is definitely to have vet checks (and likely more optional linting, like staticcheck) as part of CI, so that’s perhaps good enough (for the Go team).
All of this is quite consistent with Go’s design goal of facilitating maintenance of large codebases.
Subjecting warnings to compatibility guarantees is something that C is coming to regret (prior discussion).
And for a language with as… let’s politely call it opinionated a stance as Go, it feels a bit odd to take the approach of “oh yeah, tons of unsafe things you shouldn’t do, oh well, up to you to figure out how to catch them and if you don’t we’ll just say it was your fault for running your project badly”.
The difference is one language brings the auditing into the tooling. In C, it’s all strapped on from outside.
Yeah, “similar” is doing some heavy lifting there. The scale is more like: default - included - separate - missing. But I stand by my position - Rust is more to the left than Go and that’s a better place to be. The less friction, the more likely people will notice/fix issues.
I’ll be honest, I get this complaint about it being an extra command to run, but I haven’t ever run go vet explicitly because I use gopls. Maybe I’m in a small subset going the LSP route, but as far as I can tell gopls by default has good overlap with go vet.
But I tend to use LSPs whenever they’re available for the language I’m using. I’ve been pretty impressed with rust-analyzer too.
On the thing about maps not being goroutine safe, it would be weird for the spec to specify that maps are unsafe. Everything is unsafe except for channels, mutexes, and atomics. It’s the TL;DR at the top of the memory model: https://go.dev/ref/mem
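A minimal illustration of that rule (my sketch): share a map across goroutines only behind one of those synchronization tools, e.g. a mutex:
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu     sync.Mutex
		counts = map[string]int{}
		wg     sync.WaitGroup
	)
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // without the lock, concurrent map writes are a data race
			counts["hits"]++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(counts["hits"]) // 10
}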
Agreed. Whenever people complain about the Rust community being toxic, this author is who I think they’re referring to. These posts are flame bait and do a disservice to the Rust community. They’re like the tabloid news of programming, focusing on the titillating bits that inflame division.
I don’t know if I would use the word “toxic” which is very loaded, but just to complain a little more :-) this passage:
…
There are approximately 10,000 things in Rust that are subtler than this. Yes, it’s an easy mistake to make as a newcomer to Go. No, it doesn’t reflect even the slightest shortcoming in the language. It’s a very simple design: the go statement takes a function and its arguments. The arguments are evaluated in the current goroutine. Once evaluated, a new goroutine is created with the evaluated parameters passed into the function. Yes, that is slightly subtler than just evaluating the whole line in a new goroutine, but if you think about it for one second, you realize that evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.
Like, I get it, it sucks that you made this mistake when you were working in a language you don’t normally use, but there’s no need for sarcasm or negativity. This is in fact a very “simple” design, and you just made a mistake because even simple things actually need to be learned before you can do them correctly.
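A tiny demonstration of that evaluation order (my example, not the article’s):
package main

import (
	"fmt"
	"time"
)

func value() int {
	fmt.Println("value() runs in the calling goroutine")
	return 42
}

func main() {
	// The argument value() is evaluated right here, before the new goroutine starts;
	// only the fmt.Println call itself runs concurrently.
	go fmt.Println(value())
	time.Sleep(100 * time.Millisecond) // crude wait so the goroutine can print
}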
In practice, about 99% of uses of the go keyword are in the form go func() {}(). Maybe we should optimize for the more common case?
I did a search of my code repo, and it was ⅔ go func() {}(), so you’re right that it’s the common case, but it’s not the 99% case.
I agree that the article’s tone isn’t helpful. (Also, many of the things that the author finds questionable in Go can also be found in many other languages, so why pick on Go specifically?)
But could you elaborate on this?
IMO this is less surprising than what Go does. The beautiful thing about “the evaluation of the whole expression is deferred” is precisely that you don’t need to remember a more complicated arbitrary rule for deciding which subexpressions are deferred (all of them are!), and you don’t need ugly tricks like wrapping the whole expression in a closure which is then applied to the empty argument list.
Go’s design makes sense in context, though. Go’s authors are culturally C programmers. In idiomatic C code, you don’t nest function calls within a single expression. Instead, you store the results of function calls into temporary variables and only then pass those variables to the next function call. Go’s design doesn’t cause problems if you don’t nest function calls.
At least they mention go vet, so even people like me who don’t know it can arrive at similar conclusions. And they also mention that he is somewhat biased.
But I think they should just calmly state, without ceremony like “And yet there are no compiler warnings”, that this is the compiler output and this is the output of go vet.
This also seems unnecessary:
Subjectively, this reads as unnecessarily dismissive. There are more instances similar to this, so I get why you are annoyed. It makes their often valid criticism weaker.
I think it comes as a reaction to people readily agreeing that golang is so simple, when in their (biased but true) experience it is full of little traps.
Somewhat related: What I also dislike is that they use loops for creating the tasks in golang, discuss a resulting problem and then not use loops in rust - probably to keep the code simple
All in all, it is a good article though and mostly not ranty. I think we are setting the bar for fairness pretty high. I mean we are talking about a language fan…
Agree. The frustrating thing here is that there are cases where Rust does something not obvious, the response is “If we look at the docs, we find the rationale: …” but when Go does something that is not obvious, “your guess is as good as mine.” Doesn’t feel like a very generous take.
the author has years of Go experience. He doesn’t want to be generous, he has an axe to grind.
So where’s the relevant docs for why
or
This is simply not true. I’m not sure why the author claims it is.
This is Go fundamental knowledge.
Yes, I’m talking about the rationale.
https://go.dev/tour/basics/3
rationale, n.
a set of reasons or a logical basis for a course of action or belief
Why func and not fn? Why are declarations var type identifier and not var identifier type? It’s just a design decision, I think.
https://go.dev/ref/spec#Packages
The information is useful but the tone is unhelpful. The difference in what’s checked/checkable and what’s not is an important difference between these platforms – as is the level of integration of the correctness guarantees with the language definition. Although a static analysis tool for JavaScript could, theoretically, find all the bugs that
rustc
does, this is not really how things play out. The article demonstrates bugs which go vet
can not find which are precluded by Rust’s language definition – that is real and substantive information.There is more to Go than just some design decisions that make sense for a particular niche. It has a peculiar, iconoclastic design. There are Go evangelists who, much more strenuously than this author and with much less foundation, criticize JavaScript, Python, Rust, &c, as not really good for anything. The author is too keen to poke fun at the Go design and philosophy; but the examples stand on their own.