I thought this was an interesting compare-and-contrast of Python package management with Node and C#. Being a relative newbie to Python packaging, I’m never 100% sure if something is confusing because I haven’t invested the time to learn it yet, or if it’s confusing because it’s actually more complex than it needs to be. The author echoes some frustrations I’ve personally felt but couldn’t quite articulate myself.
The biggest surprise to me was that the author argued against virtualenvs! I’ve always assumed that venvs were a necessary evil, and that the way to address their rough edges was to use a tool that manages them for you (like pipenv or Poetry). This blog article is the first place I’ve heard of PDM or PEP 582 — my inclination is to go with the crowd and stick with Poetry (or any more-popular tool that displaces it), but I wish luck to everyone involved in replacing the venv model with something simpler.
They currently are a necessary evil, because something like PEP 582 requires a bunch of buy-in that virtual environments don’t. They were a solution to a problem that didn’t require any action from anyone but the user.
They were a solution to a problem that didn’t require any action from anyone but the user.
That’s a very good description of what went wrong with Python packaging.
I don’t disagree! Mind you, virtualenv was a revelation back in the day. It’s just a pity the core team have a history of what is at best benign neglect when it comes to packaging, which led to the setuptools debacle, and to how it was left to rot for years after PJE became a motivational speaker. A lot of the problems with the Python ecosystem can be traced to that abandonment.
My observations on this:
I’d be happy if UCL became more popular, or a derivative of it like HCL.
The changes aren’t for general consumption, though some of them, such as biased reference counting, could probably be a boon to Python both with and without the GIL, after the root causes of the 10% slowdown are resolved.
I see this as a stalking horse for a possible future PEP that introduces an improved C API, possibly along the lines of that of Lua, which would be a good thing.
At least for one of my blogs, there’s no duplication: the feed orders by modification date, while the main page of the blog orders by creation date. Thus feed subscribers get to see changes as they happen unlike those who just visit the homepage.
Feeds and homepages serve different purposes.
This would largely fix one of my great bugbears in the language. It still has a bunch of issues, though: it doesn’t appear to cover enumerations, and ideally there’d be a match statement of some kind to allow for pattern matching, which this proposal lacks. I’ve seen a few other proposals for sum types, and hopefully some of the ideas from those might make it into a revised version of this.
I think this proposal strikes the balance between uprooting the type system in Go and doing nothing. The fact that sum types would be interfaces means that most Go code would be unaffected and would seamlessly be able to support the use of sum types with existing semantics, but with the benefit that type authors can define types more accurately.
I think the biggest issue is that this won’t bring about an Option<T>-like revolution to Go, since all sum types could be nil. But I think it might be a worthy trade-off.
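For anyone who hasn’t seen it, here’s roughly the sealed-interface idiom people already use to fake a sum type in Go today (the types and names here are just illustrative); as I understand the proposal, it would make the closed set of variants something the compiler actually knows about, rather than a convention like this:

    package shape

    import "math"

    // Shape is "sealed": the unexported method means only types in this
    // package can implement it, so the set of variants is closed in practice.
    type Shape interface{ isShape() }

    type Circle struct{ Radius float64 }
    type Rect struct{ W, H float64 }

    func (Circle) isShape() {}
    func (Rect) isShape()   {}

    func Area(s Shape) float64 {
    	switch v := s.(type) {
    	case Circle:
    		return math.Pi * v.Radius * v.Radius
    	case Rect:
    		return v.W * v.H
    	default:
    		// s can still be nil here, and the compiler won't complain about
    		// a missing case -- exactly the gaps a real sum type would close.
    		return 0
    	}
    }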
It’s not clear how this is much better in practical terms (leaving aside the fact that it blessedly avoids using XML-RPC) than the older methods such as Pingback and Trackback from twenty years ago.
Webmentions is an iteration of Pingbacks/Trackbacks, so the improvements are iterative as well.
The sibling comment explains it quite well, I just want to add that the options you mention and WebMentions are not mutually exclusive. One can support both on their website.
Ok so maybe I shouldn’t be writing this comment, but honestly, ever since working for Google post-covid, I just can’t take anything they put out seriously anymore. That company is so broken I just don’t trust the quality of the engineers that work there. Like, the Google brand name is a negative for me at this point. There’s way better resources for this kind of thing.
I think, as you say, that you should not be writing this comment; I can’t see how this is valuable:
Surely it is useful in conveying the probable quality of the guide, sparing people wasted days. I came here to see if anyone had opinions on the quality, because I looked at the Kubernetes code some years back, was shocked at the poor quality, and dumbfounded that people were running their businesses on it - so if it says Google, I want opinions on quality before getting involved. Given the increasing consensus on Google, I will skip this guide.
No, it’s not. Google is an enormous company. There is always going to be a range of quality.
Maybe OP was in a bad team. Maybe OP has a sour taste in their mouth. Maybe you have misjudged Kubernetes because it does very much appear to work for enterprise level companies.
I think it’s a bad way to go to dismiss an artifact from a large company out of hand simply because of the quality of something else. I have no love for Windows but I think Microsoft is killing it with Xbox.
Half of the reason anyone cares about this guide at all is because it is coming from Google.
It’s very disingenuous to say “don’t say negative things about Google, it’s not relevant” when a major reason we are discussing this guide in the first place is because of Google’s positive reputation.
I’ve heard some similar things about Google in the past several years; that said, do you think that any issues with the quality of Google engineers broadly-speaking is relevant to this Rust guide? I skimmed it and it seemed like a reasonable enough way to get programmers who have no prior Rust experience up to speed with the language reasonably quickly. I didn’t notice any major problems with it, certainly.
I didn’t read it. I’m not saying the guide is bad. It might be good. I’m just not willing to give it a chance, there are other guides out there that are not associated with Google.
Maybe I’m wrong in this instance, but I only have one life to live. I need some kind of filter.
Thanks for looking at the course! If you do find problems, please don’t hesitate to open issues and/or to send PRs my way via https://github.com/google/comprehensive-rust.
That company is so broken I just don’t trust the quality of the engineers that work there
How is this related to the quality of the linked Rust guide?
There’s way better resources for this kind of thing.
Would you mind sharing those? Otherwise your comment is not very helpful.
Thanks for being brave enough to say this. I feel obligated to back you up even though this might not be a great idea for me either.
Where to begin? A few snippets: I worked at Apple with a former senior (staff?) Google engineer who had been internally certified at Google as a Python style reviewer.
His code reviews were the absolute worst I have ever seen: he’d leave a dozen comments about the order in which variables were declared, and zero comments about actual engineering. For points of reference: at Mozilla people would leave lots of comments about style, but left even more about substance. At Figma people had a culture of leaving style to automatic tools as much as possible and had an extreme focus on substance.
The same guy refused to use an autoformatter. His friend put up a nontrivial PR at noon, he rubber stamped it literally 5 minutes later, then they complained to our manager about how I didn’t review it. I could share so much more that’s even worse from my experience with this and other Googlers, but it wouldn’t be relevant to programming language style guides and code review.
Lots of excellent people work at Google. But Google also has a subculture of cliquish freeloaders who obsess over style and language guides to obscure the amount of work that they’re actually doing. It’s not that they do no work at all: maybe 30% of their work and reputation comes from style guides and politics rather than actual work. But they will simultaneously act as if that gives them more authority rather than less.
In college I was a fan of the Google Python Style Guide and others, but after seeing the culture they bring along firsthand, I now think they’re a red flag.
There must be an English idiom about how something (not everything) which looks nice from far away looks like crap from close by.
You could pull one from the movie “Clueless”:
Tai: Do you think she’s pretty?
Cher: No, she’s a full-on Monet.
Tai: What’s a monet?
Cher: It’s like a painting, see? From far away, it’s OK, but up close, it’s a big old mess. Let’s ask a guy. Christian, what do you think of Amber?
Christian: Hagsville.
Cher: See?
Yes. It’s “Laws are like sausages, it is better not to see them being made”. i.e. having insider information about some things makes you dislike them.
Historically, this is the spot where I would have mentioned and recommended tox, but at the moment I’m exploring alternatives to it, such as nox, and as a result I don’t currently have a specific recommendation for such a tool.
This isn’t being down on tox. It’s just that there’s a plausible alternative in the form of nox.
I’ve been using it for years. Just for a variety of reasons I’m currently looking at what else is out there. Currently focused on seeing what I can get out of nox. I was expecting not to like the config-via-Python-file approach, and I’m still on the fence about it, but there are other things about it that I’m liking.
For example, I run CI for all my personal open-source stuff on GitHub Actions, and my approach is to run the test suite against every supported Python version, but the other checks – linters, documentation, packaging, etc. – only against the latest Python I’m supporting (which will be 3.11 for the wave of updates I’m working on right now). So I have a bunch of environments that I only want to run on 3.11. With tox, the Python version is not a first-class selector, which is why all the tox CI plugins I’ve ever used require declaring a duplicate mapping of testenvs to Python versions so they can figure out which ones to select/deselect. But in nox, the Python version is a first-class selector, which means I don’t even need a plugin. I literally have a GH Actions workflow with nox that just does python -Im nox --non-interactive --python ${{ matrix.python-version }} and it works, because each nox session already declares which Python(s) it should run with.
On the other hand, the fact that nox uses the "list", "--of", "arguments", "-style" gets really unwieldy for some of the longer command lines. So I don’t know if I’m going to commit to it long-term. But I’m at least enjoying learning a bit about it.
The orders would flow in, they would fail the sanity checks in their accounting service, and we would have to make the code changes to adjust.
If this is the case, then they weren’t sharing a proper schema for the XML files, nor were they providing you with a tool that would run these sanity checks so you could do testing, nor had they bothered documenting the file format properly.
They had literally no right to be rude to your team, especially when their own incompetence was at the root of everything.
Not as crazy as it seems. Some of the rules are in IDNA, some are elsewhere.
If you try to actually register one of those crazy domains, you’ll tend to be blocked because the registry has a set of rules called label generation rules that govern what codepoints can be used by whom.
There’s a set of label generation rules for the root zone, which is very elaborate and prevents anyone from registering a new TLD using, say, the cyrillic letters that look like .com. You’re also prevented from doing that by other things, but the RZ-LGR ruleset exists and is the formal foundation for rejecting such a domain application.
There are also LGRs for each registry or each top-level domain. Each registry decides on its own what rules to use, most of them decide on a subset of what ICANN recommends. ICANN recommends that registries allow a subset of 39 (IIRC) scripts and suggests rules for each, most registries/TLDs allow only a few of those 39.
The rules for each cover visual appearance and more. They say things like “can’t put arabic punctuation between latin letters” and “latin r is like these other things and a domain registrant gets a unique right to all lookalike domains”. If IBM has registered “ibm” in a particular TLD, other registrants can’t use lookalike glyphs to get a lookalike domain.
When Daniel writes that “supposedly, all of those combinations can be used as IDN names and they will work” he’s right, except that if you want to use them to mimic IBM’s domains, IBM got there first and has a unique right to “your” lookalike, and if you want to use a codepoint outside the 39 considered scripts, your attempt is blocked since that script hasn’t yet been considered.
There are workarounds, of course — googIe.com with an upper-case i, ameriprisẹ.com with a speck under the e, or plain old googlesecurityteam4321423@yahoo.net. But IDN is not a weak point, it’s better defended than most.
Some TLDs don’t use LGRs. I think .ws is one of those, .tokyo maybe too. There’s also a case where a TLD registry wants to use LGRs but is caught in a web of old contracts and the upgrade to the currently recommended rules just isn’t happening. I got a runic domain in that TLD while it’s possible (runic will never get one of those lookalike consideration committees, so the entire script is banned in most TLDs).
And on top of the rules registries impose, registrars can impose additional ones, as some registries have poorly thought out codepoint whitelists (cough Verisign cough), and narrowing down the target language automatically can avoid the need to ask the registrant what the language is (many registries expect a language code to be sent in an extension block of the EPP create request). The less you need to ask the registrant, the better.
Tbh this seems like something browsers should be on top of - flagging URLs that switch between IDN and non-IDN, and showing you both versions of the URL.
Chrome started doing something like that in version 51, I think the others started around the same time. It doesn’t matter very much. Chrome’s ruleset mostly allows every domain you can register and blocks domains you can’t. (Of course you can take your own domain, make all kinds of subdomains and attract Chrome’s displeasure, but AFAICT no one cares about that in terms of either security or usability.)
Love it.
Two tweaks (maybe) for the About page:
Explore this dataset with the tools of your choice to figure out the answers light the hannukah candles.
Should that say “to figure out the answers and light the Hanukkah candles”?
“to figure out the answers and light the Hanukkah candles”
Or… “to figure out the answers to light the Hanukkah candles”.
Yes, you’re right. It’s been hectic pulling this thing together, thanks for helping tighten it up!
My pleasure—and sorry to be nitpicky. I’m a teacher, and it can be tough to turn off the part of me that grades and corrects papers.
I love the idea and the ASCII art.
So, in a nutshell, K9 is being wound down in favour of Thunderbird. Yeah, I can’t see that going well…
Surprising that their 1.0 proposed feature list has Mastodon API support. Why not use the ActivityPub protocol?
Because ActivityPub is probably implicit. However, it’s not really a general client protocol AFAIK, and there are plenty of clients that support Mastodon’s protocol.
it’s not really a general client protocol AFAIK
It is though. ActivityPub has two sections, the client to server (C2S) one and the server to server (S2S) one. Historically services have mostly implemented the latter but avoided the former for various reasons: either claiming that it’s a little underspecified (which it is) or that they had their own client APIs by the time ActivityPub was usable (as is the case for Mastodon, for example).
I am working on a suite of libraries (for Go) and a generic ActivityPub server that implements both.
Since Mastodon uses ActivityPub for server-to-server communication only, nearly all the clients created use the Mastodon API for client-to-server communications. A very small minority support both AP C2S and Mastodon’s API but it’s nearly a lost cause at this point; Mastodon’s API is the de-facto standard. If you want good client support, it’s the only way.
“Become an expert in iOS, Android, Electron, native Windows apps, etc so you can add C2S support to the existing apps” isn’t really feasible for most people. Technically it is “just more work” but it’s unrealistic.
I unfortunately chose .io a number of years ago before I knew better, and because I could actually get a decent name there.
These days it seems like there are a billion TLDs and most aren’t very recognizable by the average user, other than really obscure or long domain names, and many still have similar baggage to .io. Does anyone here have a recommendation on where to look for good domain names? Sometimes it seems like all of them are already taken.
i’m on .me (vgel.me), Montenegro’s cctld run by doMEn which I assume is a single purpose company, and haven’t had any complaints. i mean i’m sure they’re eating babies in the corporate offices or something, but the email delivery seems alright, i was able to get a 4-letter domain without dropping $200k, and they’re not overtly horrid as far as i know.
In a former life, I used to work for a registrar and built their domain management platform. That meant I dealt with the .me registry - the staff there were all lovely to deal with!
For non-email use, I honestly just look at whatever namecheap has on sale at the moment. I’ve had good luck with .fyi, .site and .us for a few PoC-y things lately.
Are you allowed to have anonymous whois on a .us domain? Back when I had one, you needed to provide your real name and address to the record for “anti-terrorism” reasons or something silly like that. I even wrote to my (then) Senator about it because I thought it was dumb… I ended up moving my personal domain from .us to .net over it.
No. And that just bit me once again. For the stuff I use it for (tech demos, etc.) I don’t really care. But I’m so used to my registrar’s generous private whois service that I didn’t notice the absence of that checkbox when I bought my most recent one.
Then my phone started ringing with “scam risk” numbers wanting to sell me offshore site development services. I’ve set up a free google voice number with screening just to list in whois, now.
None of them seem to bother with direct mail because it’s too costly.
.io isn’t going anywhere any time soon. .su still exists, as do practically all of the other ccTLDs that got any use.
I’d love the Chagossians to get all the money they’re rightfully owed.
It would be good in general if people were more aware of the political considerations of choosing a TLD and the dangers they might pose to a registration.
I’ve seen people using .dev and .app a lot, it’s worth considering these are Google-controlled TLDs. What really rubbed me the wrong way about these TLDs is Google’s decision to make HSTS mandatory for the entire TLD, forcing HTTPS for any website using them. I’m sure some people will consider this a feature but for Google to arbitrarily impose this policy on an entire TLD felt off to me. No telling what they’ll do in the future.
.app and .dev aren’t comparable to ccTLDs like .sh and .io, however. gTLDs like .app and .dev have to stick to ICANN policies; ccTLDs don’t, and you’re at the mercy of the registry and national law for the country in question.
I was actually just discussing this fact with someone, but interestingly, we were discussing it as a positive, not a negative.
All of the new TLDs are under ICANN’s dominion, and have to play by ICANN’s rules, so they don’t provide independence from ICANN’s influence. Whereas the ccTLDs are essentially unconditional handouts which ICANN can’t exert influence over. So there’s a tradeoff here depending on whom you distrust more: ICANN, or the specific country whose TLD you’ve chosen.
HSTS preload for the entire TLD is a brilliant idea, and I think every TLD going forward should have it.
Defaulting to insecure HTTP URLs is a legacy problem that creates a hole in the web’s security (it doesn’t matter what’s on insecure-HTTP sites; their mere existence is an entry point for MITM attacks against browser traffic). TOFU HSTS is only a partial band-aid, and a per-domain preload list is not scalable.
The Trust-On-First-Use aspect is that HSTS is remembered by the browser only after the browser has loaded the site once; this leaves first-time visitors willing to connect over unencrypted HTTP.
(Well, except for the per-domain preload list mentioned by kornel.)
Sure, but HSTS is strictly a hint that HTTPS is supported, and browsers should use that instead, right? There is no actual trust there, because the TLS certificate is still authenticated as normal.
Compare this to SSH, which actually is TOFU in most cases.
Not quite - HSTS prevents connection over plaintext HTTP and prevents users from creating exceptions to ignore invalid certificates. It does more than be a hint, it changes how the browser works for that domain going forward. The TOFU part is that it won’t apply to a user’s first connection - they could still connect over plaintext HTTP, which means that a suitably positioned attacker could respond on the server’s behalf with messages that don’t include the HSTS header (if the attacker is fast enough). This works even if the site itself isn’t serving anything over HTTP or redirects immediately to HTTPS.
Calling it TOFU is admittedly a bit of a semantic stretch as I’m not sure what the specific act of trust is (arguably HSTS tells your browser to be less trustful), but the security properties are similar in that it only has the desired effect if the initial connection is trustworthy.
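For reference, the header itself is trivial for a site to emit; a minimal Go sketch (the max-age value is just an example):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
    	// Ask the browser to use HTTPS only for this host (and subdomains) for
    	// the next two years; "preload" signals consent to the preload list.
    	w.Header().Set("Strict-Transport-Security",
    		"max-age=63072000; includeSubDomains; preload")
    	fmt.Fprintln(w, "hello")
    }

    func main() {
    	http.HandleFunc("/", handler)
    	// In practice this would sit behind TLS, e.g. http.ListenAndServeTLS.
    	http.ListenAndServe(":8080", nil)
    }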
Okay, I see the point about first-time connections, but that wouldn’t change regardless of the presence or absence of HSTS. So why single that header out? It seems to me that having HSTS is strictly better than not having one.
The discussion was about HSTS preload which avoids the first connection problem just explained by pre-populating HSTS enforcement settings for specific domains directly in the browser distribution, so there is no risk of that first connection hijack scenario because the browser acts as if it had already received the header even if it had never actually connected before.
Normally this is something you would opt-in to and request for your own domain after you registered it, if desired… but Google preloaded HSTS for the entire TLDs in question, so you don’t have the option to make the decision yourself. If you register a domain under that TLD then Chrome will effectively refuse to ever connect via http to anything under that domain (and to my knowledge every other major browser uses the preload list from Chrome.)
It’s this lack of choice that has some people upset, though it seems somewhat overblown, as Google was always very upfront that this was a requirement, so it shouldn’t have been a surprise to anyone. There is also some real concern that there’s a conflict of interest in Google’s being effectively in total control of both the TLDs and the preload list for all browsers.
The discussion was about HSTS preload which avoids the first connection problem just explained by pre-populating HSTS enforcement settings for specific domains directly in the browser distribution, so there is no risk of that first connection hijack scenario because the browser acts as if it had already received the header even if it had never actually connected before
Ahh, THIS is the context I was missing here. In which case, @kornel’s original comment about this being a non-scalable bandaid solution is correct IMO. It’s a useful mitigation, but probably only Google could realistically do it like this.
I think the more annoying thing about .dev is that a bunch of local development dns systems like puma-dev and pow used .dev and then Google took it away and made us all change our dev environments.
I think the more annoying thing about .dev is that a bunch of local development dns systems like puma-dev and pow used .dev and then Google took it away and made us all change our dev environments.
That seems unfortunate, but a not terribly surprising consequence of ignoring the names that were specifically reserved for this purpose and making up their own thing instead.
I mean a user typing a “site.example.com” URL in their browser’s address bar. If the URL isn’t in the HSTS preload list, then it is assumed to be an HTTP URL, and the HTTPS upgrade is like TOFU (the first use is vulnerable to HTTPS-stripping). There are also plenty of http:// links on the web that haven’t been changed to https://, because HTTP->HTTPS redirects keep them working seamlessly, but they’re also a weak link if not HSTS-ed.
uh! I chose .app (unaware, stupid me) for a software project that discarded the Go toolchain for this very reason. Have to reconsider, thx!
I have no idea where to even start to research this stuff. I use .dev in my websites but I didn’t know it was controlled by Google. I legitimately thought these all are controlled by some central entity.
I have no idea where to even start to research this stuff.
It is not really that hard. You can start with https://en.wikipedia.org/wiki/.dev
If you are going to rent a property (a domain name) for your www home, and if you are going to let your content live in that home for many years, it pays off to research this stuff about where you are renting the property from.
I write Go at work, and we’re moving to codegen-ing quite a bit of our everyday stuff. Now, I wouldn’t call myself a Go fan, in fact I’m fairly frustrated by it on the regular. But, there is something to be said about a very simple semantic core and leaning on codegen to handle the expressivity. I just wish Go had macros built in, so you didn’t need to pick your codegen tool and it would all be AST-based instead of string-based.
I’m of the opinion recently that we’re at a local maximum with PL syntax expressivity. I do not see what could be done to really change things all that much. I’ve used all of the state of the art type systems - Scala, F*, OCaml, Haskell, Idris, you name it. None of them affect the raw amount of code that you have to write all that much. Probably the only thing that affects it is not having types at all, and I don’t even think that is all that expressive across the whole system. There is clear essential complexity with the level of logic that we’re writing.
So I’m very open to codegen and macros recently. Sure, they can consist of total black magic and be hard to debug and fully understand. But I don’t think we have an alternative. There’s an upper limit on how much code a human can produce in a given timespan, and more importantly there’s a limit on the surface area of how much a human can understand enough to successfully modify code correctly. There are some clear information-theoretic limits at play, and no, I don’t think that we’re one beautiful PL feature away from getting past those limits in a meaningful way, and yes, I’m proposing that the answer is macros and/or codegen to get around it.
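To give a concrete idea of the kind of codegen hookup I mean, here’s the usual go:generate wiring with the stringer tool (the type and names are just for illustration):

    package job

    //go:generate stringer -type=State

    // State is a plain enum; running `go generate ./...` executes the directive
    // above, and stringer writes a state_string.go file with a String() method.
    type State int

    const (
    	Idle State = iota
    	Running
    	Done
    )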
A thing to keep in mind is that none of the languages you’ve mentioned are trying to make you write less code; they’re trying to ensure certain classes of errors are caught sooner and that code is more likely to be “obviously correct”. There’s a limit to how much code someone can produce, but you can increase how much of their time isn’t wasted on silly things that the compiler can/should catch.
Codegen does help, as do macros, but only when they’re hygienic. Generics help here too, as they’re a form of typesafe codegen.
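As a tiny illustration of that last point, a single generic helper can stand in for the per-type copies you’d otherwise generate (a sketch; assumes Go 1.21+ for the cmp package):

    package example

    import "cmp"

    // One generic function instead of codegen-ed MinInt, MinFloat64, ... variants.
    func Min[T cmp.Ordered](a, b T) T {
    	if a < b {
    		return a
    	}
    	return b
    }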
That’s true, I didn’t mean to focus on type systems exactly, but meant that these are the languages with the most advanced features overall, which should translate to programmer productivity in some way. I fully agree that “productivity” has two aspects, raw surface area but also manageability of that surface area. Types / advanced PL features help with the manageability, but the surface area magnitude is what I’m most concerned about now.
I’ve used all of the state of the art type systems - Scala, F*, OCaml, Haskell, Idris, you name it. None of them affect the raw amount of code that you have to write all that much.
I don’t completely disagree with your larger point, but I think you’re overstating the case with respect to the amount of code.
For example, even with the same language, it is not uncommon to see a 2-3x difference in the amount of code needed depending on who writes it. And I’m not just talking about golf tricks – I mean the difference between two fully formed solutions whose goal was readability and correctness.
On top of that, while OCaml and Haskell, say, might be close in expressiveness, there is a substantial average reduction in code between Go and Haskell, say. At least 2x, and it can be greater. Whether this translates to less time spent overall is a separate question, but there is no question that some languages are substantially more concise than others.
I have no quantitative info about this, this is definitely just my current intuition, and I’d actually really like to get some more quantitative data here. So I can’t disagree with you, because I have nothing to base it on other than feelings.
My feelings, though, are that of course you’re right, but I don’t think that means what I’m saying was overstated. You can get marginal improvements by “just writing the code better” and “choosing a more expressive language,” but I still don’t think it’s good enough. We need a much higher level of abstraction.
A rock doesn’t do computation when provided with energy. Now the question is, “what is computation”.
Almost exactly, except that matter != energy, though you can convert matter into energy. I’ll go a bit further though: the rock is not a computer, and neither are any of the basic components of a computer themselves a computer. Doping some silicon doesn’t give you a computer, but a way to direct one energy flow and, in the case of transistors, to direct it based on another. What actually makes them a computer is their arrangement, and it’s that arrangement, mediated by the components, that does the computation by directing some form of energy in some manner.
Exceptions are values in Python. I’m not sure where the notion that they’re not could come from. This is an issue of flow control. Go has one kind of escape continuation, which is triggered by the return statement, whereas Python has two: the one triggered by return and the one triggered by raise. However, both of these handle values.
I think what’s traditionally intended by “errors are values in Go” is related to the way they are being handled, not produced. In languages where error escaping is done with exceptions, in this case Python, they are usually handled by type, not by value.
When you use the likes of fmt.Errorf() you’re minting new objects, just as you do when calling an exception type’s constructor. The difference is that you have basic pattern matching (because that’s what an exception handler does) on the type, allowing you to discriminate between them, which you can’t do with the likes of fmt.Errorf().
OK. I’m not sure if you disagree with me, I just tried to explain how I understood the “errors are values” concept in Go.
What about panic?
A panic is more akin to a Unix signal.
panic unwinds the stack until a recover — this is not the same as a signal.
“Errors are just values” links to a blog post that explains what that means. https://go.dev/blog/errors-are-values
I know. What I disagree with is the ‘Unlike Python’ bit.
I think you’re looking too closely at specifics, rather than the gist of it: errors are handled using the same language features as other values, unlike exceptions, which are handled using dedicated language constructs. Python doesn’t do return Exception even if it could.
This is not accurate. Go tuples are an error-specific language feature in the same way as Python’s except. You can’t use tuples as a general construct.
Go doesn’t have tuples, but it does have multiple return values. Often the last return value is an error, but that’s not a language feature or requirement or anything, it’s just a convention. So I think the OP is accurate.
There’s a strong convention to use it like that, but Go lets you use it with any type you want. You could use it to return e.g. (x,y) coordinates if you wanted.
Go dropped the ball on having special-case multiple-value return instead of just single-value return with tuples and destructuring, but having “simple” non-generalizable features is sort of their thing.
But even when used with errors, if err != nil is definitely spiritually closer to if (ret != -1) than try/catch.
Go doesn’t have first-class tuples, but its return tuples are not specific to error handling. You can return two ints; or an int, a bool, and a float; or whatever else.
Sure, by its nature this is true, because it’s a tuple. But with non-error types you have multiple ways to return them, whereas errors are always returned using the tuple special-case. Tuples exist to return errors.
I’m not sure what you’re saying. As a convention, people return errors as the last type in a tuple, but it’s just a convention. You can return them through global values (like C errno) or out value pointers instead if you wanted to. I have a couple of helper functions that take error pointers and add context to the thing they point at. And people return other things in tuples, like bools or pairs of ints. It’s a historical question whether multiple return was intended for errors, but I do know that before Go 1.0, the error type was a regular type in the os package and it was only promoted to a built in when Roger Peppe (who doesn’t work at Google AFAIK) proposed doing so.
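To make that concrete, a quick sketch (the names are purely illustrative) of multiple return values carrying plain data, with the error-last shape being nothing more than a convention on top:

    package example

    import "fmt"

    // Two ints back -- no error anywhere in sight.
    func divmod(a, b int) (quot, rem int) {
    	return a / b, a % b
    }

    // The error-last shape is only a convention; the language treats the error
    // like any other second return value.
    func lookup(m map[string]int, k string) (int, error) {
    	v, ok := m[k]
    	if !ok {
    		return 0, fmt.Errorf("no such key %q", k)
    	}
    	return v, nil
    }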
Out of curiosity, I dug up the original public introduction from 2009. To my surprise, Pike made it entirely through that presentation without ever mentioning error handling, as far as I can find.
So, I hit the wayback machine. The very first introduction of function tuples on the website uses error returns as the use-case. This slide deck from Pike’s Go course also introduces multiple returns with error handling.
I don’t think it’s fair to say this is just a convention, it is how the designer of the language chose to introduce the feature to people for the first time.
The distinction worth making here is not that “errors are values”, that is uninteresting. It’s true in Python. The distinction is non-local returns vs multivariate functions.
You’re describing multiple return values as “function tuples”. I don’t think this is really accurate, as those return values are always discrete. Go doesn’t really have a concept of a tuple.
The thing that “errors are values” tries to communicate isn’t any detail about the specific implementation of the error type, but rather that errors are not fundamentally different than other types like ints or structs or whatever, and that error handling can and should use the same language constructs as normal programming.
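The post linked upthread (go.dev/blog/errors-are-values) makes the same point with its errWriter pattern; a condensed sketch:

    package example

    import "io"

    // The error is stashed in a field and checked with ordinary control flow,
    // not with a dedicated error-handling construct.
    type errWriter struct {
    	w   io.Writer
    	err error
    }

    func (ew *errWriter) write(buf []byte) {
    	if ew.err != nil {
    		return // an earlier write already failed; skip the rest
    	}
    	_, ew.err = ew.w.Write(buf)
    }

    // Usage:
    //	ew := &errWriter{w: dst}
    //	ew.write(p0)
    //	ew.write(p1)
    //	if ew.err != nil {
    //		return ew.err
    //	}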
In Python, exceptions are values, but they’re not just values; they’re values that interact with the exception-handling mechanism, which does things to and with them (unlike return, which doesn’t care what kind of value you give it, and doesn’t modify that value).