1. 6

    A slightly related Go nit: the case of structure members determines whether they’re exported or not. It’s crazy; why not explicitly add a private keyword or something?

    1. 17

      why not explicitly add a private keyword or something?

      Because capitalization does the same thing with less ceremony. It’s not crazy. It’s just a design decision.
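
      A minimal sketch of the rule in question (the package and identifier names here are made up for illustration):

      package shapes

      // Circle is exported: the leading capital letter makes it visible to
      // other packages as shapes.Circle.
      type Circle struct {
          Radius float64 // exported field
          area   float64 // unexported: only code inside package shapes sees it
      }

      // Area is exported; recompute is not.
      func (c *Circle) Area() float64 { return c.area }
      func (c *Circle) recompute()    { c.area = 3.14159 * c.Radius * c.Radius }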

      1. 4

        And limiting variable names to just “x”, “y” and “z” is also simpler and much less ceremony than typing out full variable names

        1. 1

          I’m not sure how this relates. Is your claim that the loss of semantic information that comes with terse identifiers is comparable to the difference between type Foo struct and e.g. type public foo struct?

          1. 1

            That is actually a Go convention, too. Two-letter or three-letter variable names like cs instead of customerService.

        2. 6

          This would be a more substantive comment chain if you could express why it’s crazy, rather than just calling it crazy. Why is it important that it should be a private keyword “or something”? In Go, the “or something” is literally the case-sensitive member name…which is an explicit way of expressing whether it’s exported or not. How much more explicit can you get than a phenotypical designation? You can look at the member name and know then and there whether it’s exported. An implicit export would require the reader to look at the member name and at least one other source to figure out if it’s exported.

          1. 7

            It’s bad because changing the visibility of a member requires renaming it, which requires finding and updating every caller. This is an annoying manual task if your editor doesn’t do automatic refactoring, and it pollutes patches with many tiny one-character diffs.

            It reminds me of old versions of Fortran, where variables that started with I, J, K, L, M, or N were automatically integers and the rest were real. 🙄

            1. 4

              M-x lsp-rename

              I don’t think of those changes as patch pollution — I think of them as opportunities to see where something formerly private is now exposed. E.g. when a var was unexported I knew that my package controlled it, but if I export it now it is mutable outside my control — it is good to see that in the diff.

              1. 2

                I guess I don’t consider changing the capitalization of a letter as renaming the variable

                1. 1

                  That’s not the point. The point is you have to edit every place that variable/function appears in the source.

                  1. 1

                    I was going to suggest that gofmt’s pattern rewriting would help here, but it seems you can’t limit it to a type (although gofmt -r 'oldname -> Oldname' works if the field name is unique enough). Then I was going to suggest gorename, which can limit to struct fields but apparently hasn’t been updated to work with modules. Apparently gopls is the new hotness, but despite the “it’ll rename throughout a package” claim, when I tested it, specifying main.go:9:9 Oldname only fixed it (correctly!) in main.go, not in the other files of the main package.

                    In summary, this is all a bit of a mess from the Go camp.

              2. 4

                The author of the submitted article wrote a sequel article, Go’ing Insane Part Two: Partial Privacy. It includes a section Privacy via Capitalisation that details what they find frustrating about the feature.

              3. 4

                A slightly related not-Go nit: the private keyword determines whether struct fields are exported or not. It’s crazy; why not just use the case of the field names, saving everyone some keypresses?

                1. 2

                  I really appreciate it, and find myself missing it in every other language. To be honest, I have difficulty understanding why folks would want anything else.

                  1. 2

                    On the contrary, I rather like that it’s obvious in all cases whether something is exported or not without having to find the actual definition.

                  1. 20

                    (context: I’ve used Go in production for about a year, and am neither a lover nor hater of the language, though I began as a hater.)

                    With that said, my take on the article is:

                    1. The “order dependence” problem is a non-problem. It doesn’t come up that often, and dealing with it is easy – this is simply low-priority stuff. If I wanted to mention it, it would be as an ergonomic nitpick.
                    2. The infamous Go error handling bloat, while annoying to look at, has the great benefit of honesty and explicitness: You have to document your errors as part of your interface, and you have to explicitly deal with any error-producing code you use. Despite personally really caring about aesthetics and hygiene – and disliking the bloat like the author – I’ll still take this tradeoff. I also work in ruby, and while raising errors allows you to avoid this boilerplate, it also introduces a hidden, implicit part of your interface, which is worse.

                    It’s also worth pointing out Rob Pike’s Errors are Values which offers advice for mitigating this kind of boilerplate in some situations.
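
                    The core trick from that essay is an error-accumulating writer, roughly like this (a sketch of the pattern Pike describes, not a drop-in library):

                    package main

                    import (
                        "fmt"
                        "io"
                        "os"
                    )

                    // errWriter remembers the first error and turns later writes into
                    // no-ops, so call sites don't need an `if err != nil` after every write.
                    type errWriter struct {
                        w   io.Writer
                        err error
                    }

                    func (ew *errWriter) write(buf []byte) {
                        if ew.err != nil {
                            return
                        }
                        _, ew.err = ew.w.Write(buf)
                    }

                    func main() {
                        ew := &errWriter{w: os.Stdout}
                        ew.write([]byte("hello "))
                        ew.write([]byte("world\n"))
                        if ew.err != nil { // one check instead of one per write
                            fmt.Fprintln(os.Stderr, "write failed:", ew.err)
                        }
                    }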

                    1. 22

                      There’s a difference between explicitness and pointless tediousness.

                      Go’s error handling is more structured compared to error handling in C, and more explicit and controllable compared to unchecked exceptions in C++ and similar languages. But that’s a very low bar now.

                      Once you get a taste of error handling via sum types (generic enums with values), you can see that you can have your cake and eat it too. You can have very explicit error documentation (via types), errors as values, and locally explicit control flow without burdensome syntax (via the ? syntax sugar).

                      1. 4

                        I agree.

                        But Go is not Haskell, and that’s an explicit language design decision. I think Haskell is a more beautiful language than Go, but Go has its reasons for not wanting to go that direction – Go values simple verbosity over more abstract elegance.

                        1. 14

                          If it’s Go’s decision then ¯\_(ツ)_/¯

                          but I’ve struggled with its error handling in many ways: from annoyances where commenting out one line requires changing = to := on another, to silly errors due to juggling err and err2, to an app leaking temp files badly for lack of a robust “never forget to clean up after an error” feature (defer needs to be repeated in every function, there isn’t even an errdefer, and there’s no RAII or deterministic destruction).

                          1. 4

                            Sounds like you’re fighting the language 🤷

                            1. 2

                              commenting out one line requires changing = to := on another

                              I do not agree that this is a problem. := is an explicit and clear declaration that helps the programmer to see in which scope the variable is defined and to highlight clear boundaries between old and new declarations for a given variable name. Being forced to think about this during refactoring is a good thing.
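
                              For concreteness, the = vs := dance being debated looks like this (a minimal sketch; step1 and step2 are hypothetical):

                              package main

                              import "errors"

                              func step1() error { return nil }
                              func step2() error { return errors.New("boom") }

                              func run() error {
                                  err := step1() // := declares err in this scope
                                  if err != nil {
                                      return err
                                  }
                                  err = step2() // = reuses the existing err
                                  return err
                              }

                              // Commenting out the step1 lines above forces the step2 line to
                              // become `err := step2()`, touching a line that didn't otherwise change.

                              func main() { _ = run() }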

                              1.  

                                Explicit binding definition by itself is good, but when it’s involved in error propagation it becomes a pointless chore.

                                That’s because variable (re)definition is not the point of error handling; it’s only a self-inflicted requirement Go made for itself.

                                1.  

                                  Go takes the stance that error propagation is not different than any other value propagation. You don’t have to agree that it’s a good decision, but if you internalize the notion that errors are not special and don’t get special consideration, things fall into place.

                              2. 2

                                there isn’t errdefer even

                                I mean, it’s a pretty trivial helper func if you want it:

                                func Errdefer(errp *error, f func()) {
                                    if (*errp) != nil {
                                        f()
                                    }
                                }
                                
                                func whatever() (err error) {
                                    defer Errdefer(&err, func() {
                                       // cleanup
                                    })
                                    // ...
                                }
                                

                                In general, to have fun in Go, you have to have a high tolerance for figuring out what 3 line helper funcs would make your life easier and then just writing them. If you get into it, it’s the fun part of writing Go, but if you’re not into it, you’re going to be “why do I have to write my own flatmap!!” every fourth function.
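
                                For instance, with generics (Go 1.18+) the flatmap in question is itself only a few lines (a sketch; FlatMap here is my own helper, not a standard-library function):

                                package main

                                import "fmt"

                                // FlatMap applies f to each element of xs and concatenates the results.
                                func FlatMap[T, U any](xs []T, f func(T) []U) []U {
                                    var out []U
                                    for _, x := range xs {
                                        out = append(out, f(x)...)
                                    }
                                    return out
                                }

                                func main() {
                                    doubled := FlatMap([]int{1, 2}, func(n int) []int { return []int{n, n * 10} })
                                    fmt.Println(doubled) // [1 10 2 20]
                                }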

                                1. 1

                                  commenting out one line requires changing = to := on another

                                  IMHO := (outside of if, for & switch) was a mistake; I prefer a C-style var block at the top of my function.

                                  silly errors due to juggling err and err2

                                  I think that this is mostly avoidable.

                                2. 5

                                  Yup, Go (well, presumably Rob Pike) made a lot of explicit design decisions like this, which drove me away from the language after a year or two and many thousands of LOC written.

                                  Besides the awfulness of error handling, other big ones were the inane way you have to rename a variable/function just to change its visibility, the lack of real inheritance, and the NIH attitude to platform ABIs that makes Go a mess to integrate with other languages. The condescending attitude of the Go team on mailing lists didn’t help either.

                                  1. 3

                                    There is no value in verbosity, though. It’s a waste of characters. The entire attitude is an apology for the bare fact that Go doesn’t have error-handling syntax.

                                    1. 8

                                      What you label “verbosity” I see as “explicitness”. What to you is a lack of error-handling syntax is to me a simplification that normalizes execution paths.

                                      It’s very clear to me that the people who dislike Go’s approach to error handling see errors as a first-class concept in language design, which deserves special accommodation at the language level. I get that. I understand the position and perspective. But this isn’t an objective truth, or something that is strictly correct. It’s just a perspective, a model, which has both costs and benefits. This much at least is hopefully noncontroversial. And Go makes the claim that, for the contexts which it targets, this model of error handling has more costs than benefits. If you want to object to that position then that’s fine. But it’s — bluntly — incorrect to claim that this is some kind of apologia, or that Go is missing things that it should objectively have.

                                      1. 1

                                        It often feels to me that people who complain about error handling in Go have never suffered dealing with throwing and catching exceptions in a huge codebase. At least in Go, you can be very explicit on how to handle errors (in particular, non-fatal ones) without the program trying to catapult you out of an escape hatch. Error handing is tedious in general, in any language. I don’t think Go’s approach is really any more tedious than anywhere else.

                                        1. 2

                                          Error handing is tedious in general, in any language. I don’t think Go’s approach is really any more tedious than anywhere else.

                                          Yep, and a bit more — it brings the “tedium” forward, which is for sure a short term cost. But that cost creates disproportionate long-term benefits, as the explicitness reduces risk otherwise created by conveniences.

                                      2. 2

                                        The argument isn’t that verbosity has a value in itself – it doesn’t.

                                        The argument is that if you have to choose between “simple, but concrete and verbose” and “more complex and abstract, but elegant”, it’s better to choose the former. It’s a statement about relative values. And you see it everywhere in Go. Think about the generics arguments:

                                        People: “WTF! I have to rewrite my function for every fucking datatype!”.
                                        Go: “What’s the big deal? It’s just some repeated code. Better than us bloating the language and making Go syntax more complex”

                                        They caved on that one eventually, but the argument is still germane.

                                        As I said, I don’t personally like all the decisions, and it’s not my favorite language, but once I got where they were coming from, I stopped hating it. The ethos has value.

                                        It all stems from taking a hard line against over-engineering. The whole language is geared toward that. No inheritance. You don’t even get map! “Just use a for loop.” You only see the payoff of the philosophy in a large team setting, where you have many devs of varying experience levels working over years on something. The “Go way” isn’t so crazy there.

                                  2. 3

                                    Java included exceptions in the function signature and everyone hated those, even Kotlin made them optional. Just like how this Python developer has grown to enjoy types, I also enjoy the explicit throws declarations.

                                    1. 3

                                      you have to explicitly deal with any error-producing code you use.

                                      Except if you forget to deal with it, forget to check for the error, or just drop it.

                                    1. 11

                                      These aren’t for comments, but rather for replies on a review thread. I think it is unwise to overload the term ‘comment’ in computing.

                                      For code comments, I have been using https://www.python.org/dev/peps/pep-0350/ for this for a long time, and recommend it to others.

                                       For review responses, I suppose this looks decent enough, although the use of bold assumes styled text; I would prefer all-caps, as has been conventional in unstyled text for quite a while. When styles are available, bold and all-caps together are quite visually distinct.

                                      1. 4

                                        These aren’t for comments, but rather for replies on a review thread. I think it is unwise to overload the term ‘comment’ in computing.

                                         These are comments; readers are supposed to understand by context that we’re talking about something different from code comments. This is absolutely not an unreasonable expectation. Both my kids have understood contextual words without being taught. Context really is intuitive to human nature, and it’s perfectly reasonable to use the same word in different contexts to mean something different.

                                        1. 4

                                          “Reviews” is the standard word for this.

                                          1. 1

                                            The only real context here outside of TFA itself is computing, or maybe slightly more broadly, technology. All we see on this site, which is generally full of programming minutia, is “comments”, in both the title and domain name. The use of the word “conventional” only makes it worse: conventions in code comments are an almost universally recognized and common thing, conventions in reviews, not so much. One might even argue that this is nearing territory considered off-topic on this site (being not particularly technical).

                                            I’d be low-key surprised if anyone here assumed differently. This is actually my second click through to this article, because although I read the whole thing the first time, it didn’t even occur to me that this link was to that article, and not something on code commenting practices or whatever that I missed before.

                                            Sure, anyone who actually reads the whole thing and comes away confused… well, has bigger problems… but it’s still a poor choice of words. Maybe this is a superficial bikeshed, but that sort of thing is pretty important when the whole point is to define a soft standard for things with a standard name. Even in the context this is specifically intended for (code review), I’d assume that “conventional comments” was something about the code (did I get the Doxygen tags wrong or something?), because of course I would. That’s what a code review is.

                                        1. 2

                                          Like this post from Paul Graham, I don’t think the article here is wrong so much as out of date. Even though Lisp had a lot of this stuff early, there’s nothing unique in Common Lisp, and for the most part even the combination of features isn’t unique.

                                           python -m pdb does everything this describes (except recursive restarts, which aren’t discussed here). I’d go so far as to say Python has everything Common Lisp has except for procedural, syntactic metaprogramming. Every other feature exists in multiple other non-lisp languages.

                                          http://paulgraham.com/weird.html

                                          1. 4

                                            Ruby has an edge in this area. In particular Ruby has good support for making domain specific languages, which still have the full power of Ruby available. It’s clunky and not as easy as lisp, but it can be used to great effect (see rspec).

                                             On the other hand, Ruby’s implementation of metaprogramming is a massive landmine for new Rubyists who haven’t learned the painful lesson of using it in moderation.

                                            1. 3

                                              I’d go so far as to say Python has everything Common Lisp has except for procedural, syntactic metaprogramming.

                                              Does modern Python have multimethods? Lisp enables one to specialise methods on more than their first argument. Does it have update-instance-for-redefined-class? Lisp enables the programmer to update existing instances when a class is changed (I remember this biting me a lot back when I programmed in Python). Does it permit one to change the class of an object?

                                               Maybe it does — it’s been a while since I programmed professionally in Python. But I suspect that the capabilities above are still pretty rare.

                                              1. 1

                                                 Incidentally, what do you use class changing for? I’ve only ever found it useful in deserialization code.

                                                1. 2

                                                   It is handy when one realises that an object needs to be changed, without wanting to have to track down every single reference to it. The most common instance of changing an object’s class is when I want to make it into an instance of a super-, sub- or adjacent class.

                                                   It is unlikely I would ever need to change the class of an object from, say, WEB-SERVER to USER, but it is more likely that I might change one from WEB-SERVER to TLS-WEB-SERVER.

                                                  (I assume you mean changing the class of instances, not updating instances for a new class definition)

                                                  1. 1

                                                    That makes some sense. Thank you!

                                                    In other languages we would use some kind of interface or hook based pattern to handle that kind of change.

                                                    1. 1

                                                      That’s what Lisp does: the hook is the UPDATE-INSTANCE-FOR-DIFFERENT-CLASS generic function (called by CHANGE-CLASS).

                                                      As a user, you define a method on it, specialised on the old and the new classes, and it will automatically be called. It is that simple!

                                                      This sort of thing is really useful for long-running, interactive systems. Not so useful for scripts which run and then are done.

                                                2. 1

                                                   Multimethods? No. Changing class - possible in 2.7 IIRC. Patching the class will automatically cascade to instances where there isn’t a shadowing instance variable. If you need more, you could easily set a metaclass to provide that.

                                                1. 17

                                                  Emacs in Guile.

                                                   I know there was the work in ~2014, but being able to run multi-threaded code, have a much simpler language, and have the guile ecosystem – where emacs libraries would just be guile libraries – makes my heart happy.

                                                  Doesn’t make a lot of sense to actually do, and I understand the limitations.. but whoooo boy have I been lusting for this for years and years.

                                                  1. 4

                                                     Why Guile instead of Common Lisp? Elisp is far closer to Lisp than to Scheme; Lisp has multiple compatible implementations, while Guile (and every other useful Scheme) is practically incompatible with any other useful Scheme, because the RnRS series, even R6RS, tends to underspecify.

                                                     GNU’s focus on Scheme rather than Common Lisp for the last 20 years has badly held it back. Scheme is great for teaching and implementation: it is simple and clean and pure. But it is too small, and consequently different implementations have to come up with their own incompatible ways of doing things.

                                                    While the Common Lisp standard is not as large as I would like in 2021, it was considered huge when it came out. Code from one implementation is compatible with that of others. There are well-defined places to isolate implementation-specific code, and compatibility layers for, e.g., sockets and threading, exist.

                                                    1. 2

                                                      I’d also love to see a Common Lisp Emacs. One day, I’m hoping that CLOS-OS will be usable, and using it without Emacs is kind of unthinkable.

                                                      1. 1

                                                        Why Guile instead of Common Lisp?

                                                        Because it’s my favorite flavor of scheme, and I enjoy programming in scheme much more than common lisp. I do not like lisp-2’s, but I do like defmacro (gasp) so who knows.

                                                        Emacs being rewritten in common lisp would also be awesome.

                                                      2. 1

                                                        I’m going to go more heretical: emacs with the core in Rust and Typescript as the extension language. More tooling support, more libraries, more effort into optimizing the VM.

                                                         Lisps are alright, but honestly I just don’t enjoy them. There’s that maxim about how you have to be twice as clever to debug code as to write it, so any code that’s as clever as you can make it is undebuggable.

                                                        1. 1

                                                           this exists, doesn’t it?

                                                          1. 1

                                                            The last implementation work was ~2015 according to git.

                                                            If you know of this existing, let me know!

                                                          2. 1

                                                             Multiple times I’ve considered, and then abandoned, taking microemacs and embedding a Scheme in it. The reality is that it’s basically a complete rewrite, which just doesn’t seem worth it… and it would lose all compatibility with the code I use today.

                                                            Guile-emacs, though, having multiple language support, seems to have a fighting chance at a successful future, if only it was “staffed” sustainably.

                                                            1. 1

                                                              Multiple times I’ve considered, and then abandoned, taking microemacs and embedding a Scheme in it.

                                                              Isn’t that basically edwin?

                                                              1. 1

                                                                 I mean, on a reductive view, yes. Edwin serves the single purpose of editing scheme code with an integrated REPL. You could, of course, use it beyond that, but that’s not really the goal of it, so practically no one does (I am sure there are some edwin purists out there).

                                                                 My interest in this as a project is more from a lean, start-from-scratch standpoint. I wonder what concepts I’d bring over from emacs. I wonder if I’d get used to using something like tig instead of magit. I wonder if the lack of syntax highlighting would actually be a problem… The reason I’ve never made a dent in this is that I don’t view reflecting on how I use things like this as deeply important. They’re tools…

                                                          1. 1

                                                            I wonder if Tcl literals might not be a bit cleaner, and shell-friendly, than JSON.

                                                            We have to be really careful not to get stuck in a local maximum. JSON is indeed better than XML and ASN.1, but that is a really low bar.

                                                            1. 1

                                                               JSON is not very much like a replacement for ASN.1. It could be used as an encoding substrate for ASN.1 if anyone cared to. The ITU has even specified how to do that in the JER, X.697. This document, from one of the major compiler vendors, explains it in a more accessible way and is easier to obtain than the ITU standard.

                                                              Using JSON encoding rules for ASN.1 is the moral equivalent of using ASN.1 as a language to specify a schema for JSON, and lets you use ASN.1 tools for validating compliance. So, e.g., you can specify that a particular field in a structure must contain an enumerated type in a standard, and validate that a particular json blob conforms to that expectation before feeding it to your application logic.

                                                              The main benefit of using JER that way is that at least the two largest ASN.1 compilers and runtimes support it, and that gets you quite a bit of battle tested tooling. That said, if you have that tooling at hand, you’re probably already well-equipped to handle the more compact binary encodings. And people who like JSON but don’t have those tools in their box probably oppose having a schema enforced in any way like this anyway.

                                                              1. 2

                                                                [JSON] could be used as an encoding substrate for ASN.1 if anyone cared to.

                                                                Yes, I know. IIRC there are rules for encoding ASN.1 in XML too. But I think you know what I meant: JSON the human-readable, schemaless notation is indeed better than XML the incredibly verbose markup language and ASN.1 the extremely intricate IDL and its related encoding rules. Sure, they are all subtly different things, but ‘JSON,’ ‘XML’ and ‘ASN.1’ serve as symbols to refer to three different, competing ways of transferring data.

                                                                 None of them really works at a shell level, I think; Tcl notation would be a real improvement. Here is my rendering of the article’s example:

                                                                 $ ip -j addr show dev ens33
                                                                 {
                                                                     {addr-info {{} {}}}
                                                                     {
                                                                         ifindex 2
                                                                         ifname ens33
                                                                         flags {BROADCAST MULTICAST UP LOWER-UP}
                                                                         mtu 1500
                                                                         qdisc fq-codel
                                                                         operstate UP
                                                                         group default
                                                                         txqlen 1000
                                                                         link-type ether
                                                                         address 00:0c:29:99:45:17
                                                                         broadcast ff:ff:ff:ff:ff:ff
                                                                         addr-info {
                                                                             {
                                                                                 family inet
                                                                                 local 192.168.71.131
                                                                                 prefixlen 24
                                                                                 broadcast 192.168.71.255
                                                                                 scope global
                                                                                 dynamic true
                                                                                 label ens33
                                                                                 valid-life-time 1732
                                                                                 preferred-life-time 1732
                                                                             }
                                                                             {
                                                                                 family inet6
                                                                                 local fe80::20c:29ff:fe99:4517
                                                                                 prefixlen 64
                                                                                 scope link
                                                                                 valid-life-time 4294967295
                                                                                 preferred-life-time 4294967295
                                                                             }
                                                                         }
                                                                     }
                                                                 }

                                                                I think that compares very favourably. And it is pretty easy to parse, too, cf. https://wiki.tcl-lang.org/page/Dodekalogue

                                                                1. 1

                                                                  Yes, I know. IIRC there are rules for encoding ASN.1 in XML too.

                                                                  Those XML encoding rules are called XER, for any who aren’t familiar and want to reference. They well pre-date JER and are similar in spirit.

                                                                  I did not accurately suss out whether you were arguing schemaless > schema or just lobbying for a particular wire format. ‘Cause JSON schemas are a thing almost as much as XML schemas, and IMO if you want to go there the ASN.1 schema-related tooling is dramatically better (though the good stuff isn’t free) anyway.

                                                                  If you want to roll without a schema, the TCL notation seems a little more human readable than JSON. But I’d argue that if you were going to touch all of these tools to do a radical new output format, you might as well go the whole 9 yards and introduce automateable format validation, too. And if you’re doing that, IMO the ASN.1 notion of a schema is easier to both read and write than either json-schema, xml schema definition, or xml document type definition in addition to having better enforcement tools.

                                                            1. 17

                                                              Genuinely I don’t understand the point of this article.

                                                               I would pick even gnome or kde over windows’s awful GUI (really any of the recent ones, but certainly windows 10) even if I use i3. Using windows is just… annoying… frustrating… painful… I have a top-of-the-line laptop from dell with an nvidia iGPU, 32GiB of RAM and a top-of-the-line (at the time) intel mobile-class CPU. But the machine still finds a reason to bluescreen, randomly shut down without safely powering down my VMs, break or god knows what all the time. And when such a thing happens there are no options to debug it, there’s no good documentation, no idea of where to even start. I’m glad windows works for some people, but it doesn’t work for me. What wakeup call? What do I need to wake up to? I use linux among other things, it’s not perfect but for me it’s the best option.

                                                              1. 10

                                                                (NB: I’m the author of the article, although not the one who submitted it)

                                                                Genuinely I don’t understand the point of this article.

                                                                The fact that it’s tagged “rant” should sort of give it away :P. (I.e. it’s entirely pointless!)

                                                                There is a bit of context to it that is probably missing, besides the part that @crazyloglad pointed out here. There is a remarkable degree of turnover among Linux users – nowadays I maybe know 6-7 people who use Linux or a BSD full time, but I know dozens who don’t use it anymore.

                                                                 And I think one of the reasons for that is the constant software churn in the desktop space. Lots of things, including various GTK/Gnome or KDE components, ritually get torn down, burnt and rebuilt every 6-8 years or so, and at one point you just get perpetual beta fatigue. I’m not sure what else to call it. Much of it, in the last decade, has been in the name of “better” or “more modern” UX, and yet we’re not in a much better position than ten years ago in terms of userbase. Meanwhile, Microsoft swoops in and, on their second attempt, comes up with a pretty convincing Linux desktop, with a small crew and very little original thought around it, just by focusing on things that actually make a difference.

                                                                1. 15

                                                                   I suspect that Microsoft is accidentally the cause of a lot of the problems with the Linux desktop. Mac OS, even back in the days when it didn’t have protected memory and barely qualified as an operating system, had a clear and coherent set of human interface guidelines. Nothing on the system was particularly flashy[1] and so it was hard to really understand the value of this consistency unless you used it for a few months: small things, like the fact that you bring up preferences in every application in exactly the same way (same menu location, same keyboard shortcut), that text field navigation with the mouse (e.g. selecting whole words) and shortcut keys are exactly the same, that button order is consistent in every dialog box. A lot of apps brought their own widget set, in part because ‘90s Microsoft didn’t want to give away the competitive edge of Office and so didn’t provide things in the system widget set that would have made writing an Office competitor too easy.

                                                                   In contrast, the UI situation on Windows has always been a mess. Most dialog boxes put the buttons the wrong way around[2], but even that isn’t consistent and some put them the right way around. The ones that do get it right just put ‘okay’ and ‘cancel’ on the buttons instead of verbs (for example, on a Mac, if you close a window without saving, the buttons are ‘don’t save’, ‘cancel’ and ‘save’).

                                                                  Macs are expensive. Most of the people working on *NIX desktop environments come from Windows. If they’ve used a Mac, it’s only for a short period, not long enough to learn the value of a consistent UI[3]. People always copy the systems that they’re familiar with and when you’re trying to copy a system that’s a bit of a mess, it’s really hard to come up with something better. The systems that have tried to copy the Mac UI have typically managed the superficial bits (Aqua themes) and not got any of the parts that actually make the Mac productive to use.

                                                                  [1] When OS X came out, Apple discovered that showing people the Genie animations for minimising in shops increased sales by a measurable amount. Flashiness can get the first sale, but it isn’t the thing that keeps people on the platform. Spinning cubes get old after a week of use.

                                                                   [2] Until the ‘90s, it was believed that this should be a locale-dependent thing. In left-to-right reading order, the button implying go back should be on the left and the one implying go forwards should be on the right. In right-to-left reading order locales, it should be the converse. More recent research has shown that the causation was the wrong way around: left-to-right writing schemes are dominant because humans think left-to-right is forwards motion, people don’t believe left-to-right is forwards because that’s the order that they’re taught to read. Getting this wrong is really glaring now that web browsers are dominant applications because they all have a pair of arrows where <- means ‘go back’ and -> means ‘go forwards’, and yet will still pop up dialogs with the buttons ordered as [proceed] [go back] as if a human might find that intuitive.

                                                                  [3] Apple has also been gradually making their UIs less consistent over the last 10-15 years as the HCI folks (people with a background in cognitive and behavioural psychology) retired and were replaced with UX folks (people who followed fads in what looks shiny and had no science to justify their decisions).

                                                                  1. 13

                                                                     IMHO the fact that, despite how messy it is, the Windows UI is so successful points at something that a lot of us don’t really want to admit, namely that consistency just isn’t that important. It’s not pointless, as the original Macintosh very convincingly demonstrated, especially with users who aren’t into computers as a hobby. But it’s not the holy grail, either.

                                                                    Lots of people sneer at CAD apps (or medical apps, I have some experience with that), for example, because their UIs are old and clunky, and they’re happy to ascribe it to the fact that the megacorps behind them just don’t know how to design user interfaces for human users.

                                                                     But if they were, in fact, to make a significant facelift, flat, large buttons, and hamburger menus and all, their existing users, who rely on these apps for 8 hours/day to make those mind-bogglingly complex PCBs and ICs, and who (individually or via their employers) pay those eye-watering license fees, would hate them and would demand their money back and a downgrade. A facelift that modernized the interface and made it more “intuitive”, “cleaner” and “more discoverable” would be – justifiably! – treated as a (hopefully, but not necessarily) temporary productivity killer that’s entirely uncalled for: they already know how to use it, so there’s no point in making it more intuitive or more discoverable. Plus, these are CAD apps, not TikTok clones. The stakes are higher and you’re not going to rely on gut feeling and interface discoverability; if you’re in doubt, you’re going to read the manual.

                                                                    If you make applications designed to offer a quick distraction, or to hook people up and show them ads or whatever, it is important to get these things right, because it takes just two seconds of frustration for them to close that stupid app and move on – after all it’s not like they get anything out of it. Professional users obviously don’t want bad interfaces, either, but functionality is far more important to get right. If your task for the day is to get characteristic impedance figures for the bus lines on your design, and you have to choose between the ugly app that can do it automatically and the beautiful, distraction-free, modern-looking app that doesn’t, you’re gonna go with the ugly one, because you don’t get paid for staring at a beautiful app. And once you’ve learned how to do it, if the interface gets changed and you have to spend another hour figuring out how to do it, you’re going to hate it, because that’s one hour you spend learning how to do something you already knew how to do, and which is not substantially different than before – in other words, it’s just wasted time.

                                                                    Lots of FOSS applications get this wrong (and I blame ESR and his stupid Aunt Tilly essay for that): they ascribe the success of some competitors to the beautiful UIs, rather than functionality. Then beautiful UIs do get done, sometimes after a long time of hard work and often at the price of tearing down old functionality and ending up with a less capable version, and still nobody wants these things. They’re still a footnote of the computer industry.

                                                                    I’ve also slowly become convinced of something else. Elegant though they may be, grand, over-arching theories of human-computer interactions are just not very useful. The devil is in the details, and accounting for the quirky details of quirky real-life processes often just results in quirky interfaces. Thing is, if you don’t understand the real life process (IC design, neurosurgery procedures, operation scheduling, whatever), you look at the GUIs and you think they’re overcomplicated and intimidating, and you want to make them simpler. If you do understand the process, they actually make a lot of sense, and the simpler interfaces are actually hard to use, because they make you work harder to get all the details right.

                                                                    That’s why academic papers on HCI are such incredible snoozefests to read compared to designer blogs, and so often leave you with questions and doubts. They make reserved, modest claims about limited scenarios, instead of grand, categorical statements about everyone and everything. But they do survive contact with the real world, and since they’re falsifiable, incorrect theories (like localised directionality) get abandoned. Whereas the grand esoteric theories of UX design can quickly weasel their way around counter-examples by claiming all sorts of exceptions or, if all else fails, by simply decreeing that users don’t know what they want, and that if a design isn’t as efficient as it’s supposed to be, they’re just holding it wrong. But because grand theories make for attractive explanations, they catch up more easily.

                                                                     (Edit: for shits and giggles, a few years ago, I did a quick test. Fitts’ Law gets thrown around a lot as a reason for making widgets bigger, because they’re easier to hit. Never mind that’s not really what Fitts measured 50 years ago – but if you bother to run the numbers, it turns out that a lot of these “easier to hit” UIs actually have worse difficulty figures, because while the targets get bigger, the extra padding from so many targets adds up and travel distances increase enough that the difficulty index is, at best, only marginally improved. I don’t remember what I tried to run numbers on, I think it was some dialogs in the new GTK3 release of Evolution and some KDE apps when the larger Oxygen theme landed – in some cases they were worse by 15%)

                                                                    Apple has also been gradually making their UIs less consistent over the last 10-15 years as the HCI folks (people with a background in cognitive and behavioural psychology) retired and were replaced with UX folks (people who followed fads in what looks shiny and had no science to justify their decisions).

                                                                    This isn’t limited to Apple, though, it’s been a general regression everywhere, including FOSS. I’m pretty sure you can use Planet Gnome to test hypertension meds at this point, some of the UX posts there are beyond enraging.

                                                                    1. 1

                                                                      AutoCAD did make a significant facelift, cloning the Office 2007 “ribbon” interface, also a significant facelift.

                                                                      1. 1

                                                                        AutoCAD is in a somewhat “privileged” position, in that it has an excellent command interface that most old-time users are using (I haven’t used AutoCAD in years, but back when I did, I barely knew what was in the menus). But even in their case, the update took a while to trickle down, it was not very well received, and they shipped the “classic workspace” option for years along with the ribbon interface (I’m not sure if they still do but I wouldn’t be surprised if they did).

                                                                    2. 4

                                                                      More recent research has shown that the causation was the wrong way around: left-to-right writing schemes are dominant because humans think left-to-right is forwards motion, people don’t believe left-to-right is forwards because that’s the order that they’re taught to read.

                                                                      Do you have a good source for this? Arabic and Hebrew are prominent (and old!) right-to-left languages; it would seem more likely (to me) that a toss of the coin decided which direction a civilization wrote rather than “left-to-right is more natural and a huge chunk of civilization got it backwards.”

                                                                    3. 2

                                                                      There is a remarkable degree of turnover among Linux users – nowadays I maybe know 6-7 people who use Linux or a BSD full time, but I know dozens who don’t use it anymore.

                                                                       I think that chasing the shiny object is to blame for a lot of that. Sometimes the shiny object really is better (systemd, for all its multitude of flaws, failures, misfeatures and malfeasances really is an improvement on the state of things before), sometimes it might be (Wayland might be worth it, in another decade, maybe), and sometimes it was not, is not and never shall be (here I think of the removal of screensavers from GNOME, of secure password sync from Firefox[0] and of extensions from mobile Firefox).

                                                                      I don’t think it is coincidence that so many folks are using i3, dwm and StumpWM now — they really are better than the desktop environments.

                                                                      But, for what it’s worth, I don’t think I know anyone who used to use Linux or a BSD, and I have been using Linux solely for almost 22 years now.

                                                                      [0] Yes, Firefox still offers password sync, but it is now possible for Mozilla to steal your decryption key by delivering malicious JavaScript on a Firefox Account login. The old protocol really was secure

                                                                      1. 3

                                                                        I don’t think it is coincidence that so many folks are using i3, dwm and StumpWM now — they really are better than the desktop environments.

                                                                         They are, but it’s also really disappointing. The fact that tiling a bunch of VT-220s on a monitor is substantially better than, or at least for so many people a sufficiently good alternative to, GUIs developed 40 years after the Xerox Star really says a lot about the quality of said GUIs.

                                                                        But, for what it’s worth, I don’t think I know anyone who used to use Linux or a BSD, and I have been using Linux solely for almost 22 years now.

                                                                        This obviously varies a lot, I don’t wanna claim that what I know is anything more than anecdata. But e.g. everyone in what used to be my local LUG has a Mac now. Some of them use Windows with Cygwin or WSL, mostly because they still use some old tools they wrote or their fingers are very much used to things like bc. I still run Linux and OpenBSD on most of my machines, just not the one I generally work on, that’s a Mac, and I don’t like it, I just dislike it the least.

                                                                      2. 1

                                                                        That churn is extremely superficial, though. I can work comfortably on anything from twm to latest ubuntu.

                                                                      3. 9

                                                                        I do have a linux machine for my work stuff running KDE. And I love the amount of stuff I can customize, hotkeys that can be changed out of the box, updates I can control etc.

                                                                         But if you get windows to run in a stable manner (look out for updates, disable fast start/stop, disable some annoying services, get a professional version so it allows you to do that, get some additions for a tabbed explorer, remove all them ugly tiles in the start menu, disable anything that has “cortana” in its name and forget windows search), then you will have a better experience on windows. You’ll not have to deal with broken GPU drivers, you’ll not have to deal with broken multi-display multi-DPI stuff, which includes no option to scale differently, display switching crashing your desktop, laptops going back to sleep because you were too fast in closing the lid on bootup when you connected an external display. You’ll not have to deal with your pricey GPU not getting used for video encoding and decoding. Browsers not using hardware acceleration and rendering 90% on the CPU. Games being broken or not using the GPU fully. Sleep mode sometimes not waking up some PCIE device, leading to a complete hangup of the laptop. So the moment you actually want to use your hardware fully, maybe even game on that and do anything that is more than a 1-display system with a CPU, you’ll be pleased to use windows. And let’s not talk about driver problems because of some random changes in linux that break running a printer+scanner via USB. That is the sad truth.

                                                                        Maybe Wayland will change at least the display problems, but that doesn’t fix anything regarding broken GPU support. And no matter whose fault it is, I don’t buy a PC for 1200€, just so I can watch my PC trying to render my desktop in 4k on the CPU, tearing in videos and random flickering when doing stuff with blender. I’m not up to tinkering with that, I want to tinker with software I built, not with some bizarre GPU driver and 9k Stackoverflow/Askubuntu/Serverfault entries of people who all can’t do anything, because proprietary GPU problems are simply a blackbox. I haven’t had any bluescreen in the last 5 years except one, and that was my fault for overloading the VRAM in windows.

                                                                         And at that point WSL2 might actually be a threat, because it might allow me to just ditch linux on my box entirely and get the good stuff in WSL2 but remove the driver pain (while the reverse isn’t possible). Why bother with dual boot or two machines if you can use everything with a WSL2 setup? It might even fix the hardware acceleration problem in linux, because windows can just hand over a virtualized GPU that uses the real one underneath using the official drivers. I won’t have to tell people to try linux on the desktop; they can just use WSL2 for the stuff that requires it and leave the whole linux desktop on the side, along with all the knowledge of installing it or actually trying out a full linux desktop. (I haven’t used it at this point.) What this will do is remove momentum and eventually interest from people to get a good linux desktop up and running, maybe even cripple the linux kernel in terms of hardware support. Because why bother with all those devices if you’re reduced to running on servers and in a virtualized environment of windows, where all you need are the generic drivers.

I can definitely see that coming. I used linux primarily pre-corona, and now that I’m home most of the time I dread starting my linux box.

                                                                        1. 1

                                                                          look out for updates

                                                                          What do you mean by this? Are you saying I should manually review and read about every update?

                                                                          disable fast start/stop

                                                                          Done

                                                                          disable some annoying services

I’m curious which ones, but I think I disabled most of them.

                                                                          get a professional version so it allows you to do that

                                                                          Windows 10 enterprise.

                                                                          get some additions for a tabbed explorer

                                                                          Can you recommend some?

remove all those ugly tiles in the start menu, disable anything that has “cortana” in its name, and forget windows search

                                                                          Done and done and done

                                                                          broken GPU drivers

I haven’t had to deal with this yet, but I’ve had multiple instances where USB, bluetooth, or my dock stopped working after a windows update, even though they worked before, and I then had to manually update the drivers to get them working again.

                                                                          multi-display multi-DPI

I don’t think there currently exists any non-broken multi-DPI solution on windows or any other platform, so I avoid having this problem in the first place. The windows solution to this problem is just as bad as the wayland one. You can’t solve this problem if you rasterize before knowing where the pixels will end up; you would need a new, vector-graphics-oriented model for describing visuals on a screen.

display switching crashing your desktop, laptops going back to sleep because you closed the lid too quickly on bootup while connecting an external display

I have had the first one happen a few times on windows. The second issue is something I don’t run into, since I don’t currently run my laptop with the lid closed while using external displays, but it’s a setup I’ve planned to move to. I’ve been procrastinating on that move because of the number of times I’ve seen it break for coworkers (running the same hardware and software configuration). I’ve never had a display switch crash anything on linux. I’ve had games cause X to crash, but at least I had a debug log to work from at that point and could see whether I could do something about it.

                                                                          Games being broken or not using the GPU fully.

Gaming on linux, if you don’t mind an odd bit of tinkering, has certainly been a lot less stressful than gaming on windows, which works fine until something breaks and then there’s absolutely zero information available to fix it. It’s not ideal, but I play VR games on linux and I take advantage of my hardware; it’s a very viable platform, especially when I don’t want to deal with the constant shitty mess of windows. I’ve never heard of a game not using the GPU fully (when it works).

So the moment you actually want to use your hardware fully, maybe even game on it, and do anything that is more than a one-display system with a CPU, you’ll be pleased to use windows.

I use windows and linux on a daily basis. I’m pleased to use linux; I sometimes want to change jobs because of having to use windows.

And let’s not talk about driver problems caused by some random change in linux that breaks running a printer+scanner via USB.

Or when you update windows and your printer+scanner no longer works. My printing experience on linux has generally been more pleasant than on windows, because printers don’t suddenly become bricks just because microsoft decides to force you to update to a new version of windows overnight.

                                                                          Printers still suck (and so do scanners) but I’ve mitigated most problems by sticking to supported models (of which there are plenty of good online databases).

                                                                          1. 1

I don’t think there currently exists any non-broken multi-DPI solution on windows or any other platform, so I avoid having this problem in the first place. The windows solution to this problem is just as bad as the wayland one. You can’t solve this problem if you rasterize before knowing where the pixels will end up; you would need a new, vector-graphics-oriented model for describing visuals on a screen.

I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

Meanwhile my Linux box can’t even render videos in 4k due to missing hardware acceleration (don’t forget the tearing). And obviously it’s not capable of different scaling between the HighDPI and the 1080p display, so it’s a blurry 2k resolution on a 4k display. And after logging in on the 2k screen, my whole plasmashell has crashed, which is why I’ve got a hotkey bound to a bash command to restart it.

I haven’t had to deal with this yet, but I’ve had multiple instances where USB, bluetooth, or my dock stopped working after a windows update, even though they worked before, and I then had to manually update the drivers to get them working again.

I’ve never had any broken devices or a malfunctioning system after an update. Only one BSOD directly after an upgrade, which fixed itself with a restart.

                                                                            I’ve never heard of a game not using the GPU fully

Nouveau is notorious for not being able to control clock speeds, and the driver can’t use the card’s full capacity. Fixing a bad GPU driver on linux has had me reinstalling the whole OS multiple times.

                                                                            1. 2

I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

Same experience here. I tried using a Linux + Windows laptop for 7 months or so. Windows’ mixed-DPI support is generally good, including fractional scaling (which is what you really want on a 14” 1080p screen). The exceptions are some older applications, which have blurry fonts. Mixed DPI on macOS is nearly flawless.

On Linux + GNOME it is ok if you use Wayland and all your screens use integer scaling. It all breaks down once you use fractional scaling: X11 applications are blurry (even on integer-scaled screens) because they are scaled up, and rendering becomes much slower.

Meanwhile my Linux box can’t even render videos in 4k due to missing hardware acceleration (don’t forget the tearing).

I did get it to work, both on AMD and NVIDIA (proprietary drivers). But it pretty much only works in applications with good VA-API support (e.g. mpv) or NVDEC, and to some extent in Firefox (you have to enable experimental options, e.g. media.ffmpeg.vaapi.enabled in about:config, force h.264 on e.g. youtube, and it crashes more often). With a lot of applications, like Zoom, Skype, or Chrome, rendering happens on the CPU, which blows away your battery life and leaves you with constantly spinning fans.

                                                                              1. 1

Yeah, the battery stuff is really annoying. I really hope wayland will finally take over everything and we’ll have at least some good scaling. Playback in VLC works, but I don’t actually want to have to download everything just to play it smoothly, so firefox would have to work with that first. (And for movie streaming you can’t download stuff.)

                                                                              2. 1

I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

If you completely move a window between two displays, the problem is easy-ish to solve with some hacks, and it’s easier if one display’s dpi is a multiple of the other’s. Issues especially occur when windows straddle the screen boundary. Try running a game across two displays on a multi-dpi setup: you will either end up with half the game getting downscaled from 4k (which is a waste of resources, and your gpu probably can’t handle that at 60fps) or you end up with a blurry mess on the other screen. When I used multi-dpi on windows as recently as windows 10 there were still plenty of windows core components which would not render correctly when you did this. You would get either blurriness or text rasterization that looked off.

But like I said, this problem is easily solved by not having a multi-dpi setup. No modern software fully supports this properly, and no solution is fully seamless; just because YOU can’t personally spot all the problems doesn’t mean they don’t exist. Some people’s standards for “working” are different or involve different workloads.

Meanwhile my Linux box can’t even render videos in 4k due to missing hardware acceleration (don’t forget the tearing).

Sounds like issues with your configuration. I run 4k videos at 60Hz with HDR from a single-board computer running linux; it would run at 10fps if it had to rely solely on the CPU. It’s a solved problem. If you’re complaining because it doesn’t work in your web browser, I can sympathise there, but that’s not because there’s no support for it; it’s just that it’s disabled by default (at least in firefox) for some reason. You can enable it by following a short guide in 5 minutes and never have to worry about it again. A small price to pay for an operating system that actually does what you ask it to.

And obviously it’s not capable of different scaling between the HighDPI and the 1080p display, so it’s a blurry 2k resolution on a 4k display.

                                                                                Wayland does support this (I think), but like I said, there is no real solution to this which wouldn’t involve completely redesigning everything including core graphics libraries and everyone’s mental model of how screens work.

                                                                                Really, getting hung up on multi-dpi support seems a little bit weird. Just buy a second 4k display if you care so much.

And after logging in on the 2k screen, my whole plasmashell has crashed, which is why I’ve got a hotkey bound to a bash command to restart it.

                                                                                Then don’t use plasma.

At least on linux you get the choice not to use plasma. When windows explorer has its regular weekly breakage, the only option I have is rebooting windows. I can’t even replace it.

Heck, if you are still hung up on wanting to use KDE, then fix the bug. At least with linux you have the facilities to do this. When bugs like this appear on windows (especially when they only affect a tiny fraction of users) there’s no guarantee when or if it will be fixed. I don’t keep track, but I’ve regularly encountered dozens of different bugs in windows over the course of using it for the past 15 years.

I’ve never had any broken devices or a malfunctioning system after an update. Only one BSOD directly after an upgrade, which fixed itself with a restart.

                                                                                Good for you. My point is that your experience is not universal and that there are people for whom linux breaks a lot less than windows. You insisting this isn’t the case won’t make it so.

Nouveau is notorious for not being able to control clock speeds, and the driver can’t use the card’s full capacity.

                                                                                Which matters why?

If someone wrote a third-party open source nvidia driver for windows, would you claim that windows can’t take full advantage of the hardware? What kind of argument is this?

Nouveau is one option. It’s not supported by nvidia, so no wonder it doesn’t work as well when it’s based on reverse-engineering efforts. This would only be a valid criticism if there were no nvidia-supported proprietary gpu drivers for linux that worked just fine. If you want a better experience with open source drivers, then pick hardware with proper linux support, like intel or amd gpus. I’ve run both, and although I now refuse to buy nvidia on the principle that they just refuse to cooperate with anyone, it actually worked fine for over 5 years of linux gaming.

                                                                                1. 5

                                                                                  I agree with a lot of your post, so I’m not going to repeat that (other than adding a strong +1 to avoiding nvidia on that principle), but I want to call out this:

                                                                                  Really, getting hung up on multi-dpi support seems a little bit weird. Just buy a second 4k display if you care so much.

                                                                                  It may not be a concern to you, but that doesn’t mean it doesn’t affect others. There are many cases where you’d have displays with different densities, and two different-density monitors is just one. Two examples that I personally have:

1. My work macbook has a very high DPI display, but if I want more screen space while working from home, I have to plug in one of my personal 24” 1080p monitors. The way Apple does the scaling isn’t the best, but different scaling per display is otherwise seamless. Trying to do that with my Linux laptop is a mess.
                                                                                  2. I have a pen display that is a higher density than my regular monitors. It’s mostly fine since you use it up-close, but being able to bump it up to 125% or so would be perfect. That’s just not a thing I can do nicely on my Linux desktop. I’m planning to upgrade it at some point soon to one that’s even higher density, where I’m guessing 200% scaling would work nicely, but I may end up stuck having to boot into Windows to use it at all.

                                                                                  There are likely many other scenarios where it’s not “simply” a case of upgrading a single monitor, but also, the “Just buy [potentially very expensive thing]” argument is incredibly weak and dismissive in its own right.

                                                                                  1. 1

My work macbook has a very high DPI display, but if I want more screen space while working from home, I have to plug in one of my personal 24” 1080p monitors. The way Apple does the scaling isn’t the best, but different scaling per display is otherwise seamless. Trying to do that with my Linux laptop is a mess.

                                                                                    I get that, but my point is that you can just get a second 1080p monitor and close your laptop. Or buy two high DPI monitors.

Really, the problem I have with this kind of criticism is that, although it’s valid, I would rather have some DPI problems and a slightly ugly UI from displaying 1080p on a 4k display than have all the annoying problems I have with windows, especially when I have actual work to do. It’s incredibly stressful when the hardware and software my company requires me to use cause hours of downtime or lost work per week. With linux there is a lot less stress; I just have to be cognizant of making the right hardware buying decisions.

                                                                                    I have a pen display that is a higher density than my regular monitors. It’s mostly fine since you use it up-close, but being able to bump it up to 125% or so would be perfect. That’s just not a thing I can do nicely on my Linux desktop. I’m planning to upgrade it at some point soon to one that’s even higher density, where I’m guessing 200% scaling would work nicely, but I may end up stuck having to boot into Windows to use it at all.

                                                                                    I think you should try wayland. It can do scaling and I think I have even seen it work (about as well as multi-dpi solutions can work given the state of things).

If you are absolutely stuck on X, there are a couple of workarounds. One is launching your drawing application at a higher DPI. It won’t change if you move the window to a different screen, but it is not actually that big of a hack and will probably solve your particular problem. I even found a reddit post for it: https://old.reddit.com/r/archlinux/comments/5x2syg/multiple_monitors_with_different_dpis/

The other hack is to run two X servers, but that’s really unpleasant to work with. Since you are using a specific application on that display, though, this may work too.

                                                                                    potentially very expensive thing

                                                                                    If you’re dealing with a work mac, get your workplace to pay for it.

Enterprise Windows 10 licenses cost money too; not as much as good monitors, but the monitors aren’t an order of magnitude more expensive either (although I guess it depends on whether you buy them from apple).

                                                                                    1. 2

                                                                                      I get that, but my point is that you can just get a second 1080p monitor and close your laptop. Or buy two high DPI monitors.

Once again, “just pay more money” is an incredibly dismissive and weak argument, unless you’re willing to start shelling out cash to strangers on the internet. If someone had the means and desire to do so, they obviously would have done so already.

                                                                                      I think you should try wayland. It can do scaling

Wayland may be suitable in my particular case (it’s not), but either way it’s nowhere near a general solution yet.

                                                                                      If you’re dealing with a work mac, get your workplace to pay for it.

                                                                                      I was using it as an example - forget I used the word “work” and it holds just as true. My current setup is “fine” for me, but I’m not the only person in the world with a macbook, a monitor, and a desire to plug the two together.


                                                                                      The entire point of my comment wasn’t to ask for solutions to two very specific problems I personally have; it was to point out that you’re being dismissive of issues that you yourself don’t have, while also pointing out that someone else’s issues are not everyone’s. To use your own words, “My point is that your experience is not universal”.

                                                                                      1. 0

Once again, “just pay more money” is an incredibly dismissive and weak argument, unless you’re willing to start shelling out cash to strangers on the internet. If someone had the means and desire to do so, they obviously would have done so already.

No, actually, let’s bring this thread back to its core.

Some strangers on the internet (not you) are telling me that windows is so great and will solve all my problems, or that linux has massive, irredeemable problems, and then they proceed to list “completely fucking insignificant” (in my opinion) UI and scaling issues against my burnout-inducing endless hell of windows issues. Regarding the problems they claim windows solves: either they don’t exist on linux (so there is nothing to solve), or windows doesn’t solve them to my satisfaction, or I don’t consider them problems at all (and in multiple cases I don’t think that’s just me; I think the person is simply misled about what counts as a linux problem, or has had a uniquely bad experience).

What’s insulting is the rest of this thread (not you): people who keep telling me how wrong I am about my consistently negative experience with windows and positive experience with linux, and how amazing windows is because you can play games with intrusive kernel-mode anti-cheat, as if not being able to run literal malware is one of the biggest problems I should, according to them, be having with linux.

My needs are unconventional, and they are not met in an acceptable manner by windows. I started off by saying “I’m glad windows works for some people, but it doesn’t work for me.” I wish people had actually read that part before they started listing off how windows can solve all my problems. I use windows on a daily basis and I hate it.

So really, the “incredibly dismissive and weak argument” here is people insisting that the solutions that work for me are somehow not acceptable, when I’m the only one who has to accept them.

I am not surprised you got turned around and started thinking I was trying to dismiss other people’s experiences with windows and linux, because that’s what it would look like if you read this thread as me defending linux as a viable tool for everyone. It is not; I am simply defending linux as a viable tool for me.

                                                                                  2. 3

I don’t want to use things on multiple screens at the same time; I want them to be able to move across different displays while changing their scaling accordingly. And that is already something I want when connecting one display to one laptop: you don’t want your 1080p laptop screen scaled the same as your 1080p desktop display. And I certainly like writing on higher-res displays for work.

When I used multi-dpi on windows as recently as windows 10 there were still plenty of windows core components which would not render correctly when you did this. You would get either blurriness or text rasterization that looked off.

Which describes exactly zero of my daily drivers. Not browsers, explorer, taskmanager, telegram, discord, steam, VLC, VS, VSCode.

                                                                                    Then don’t use plasma

And then what? i3? gnome? I could just use apple; at least they have a unix that works. “Just exchange the whole desktop experience and it might work again” sounds like a nice solution.

                                                                                    When bugs like this appear on windows (especially when they only affect a tiny fraction of users) there’s no guarantee when or if it will be fixed.

And on linux you’ll have to pray somebody hears you through the white noise of people complaining and actually fixes stuff for you, and doesn’t leave it for years as a bug report in a horrible bugzilla instance. Or you start being the expert yourself, which is possible if you’ve got nothing else to do. (And then have fun bringing that fix upstream.) It’s not that simple. It’s nice to have the possibility of recompiling stuff yourself, but that doesn’t magically fix the problem, nor give you the knowledge of how to do so.

                                                                                    You insisting this isn’t the case won’t make it so.

And that’s where I’m not sure it’s worth discussing any further, because you’re clearly downplaying linux GPU problems to “just tinker with it / just use wayland even if it breaks many programs” while complaining about the same on windows. My experience may be different from yours, but the comments and votes here, plus my circle of friends (and many students at my department), speak for my experience. One where people complain about windows and hate its update policy. But love it for simply working with games(*), scaling where linux falls flat on its face, and other features. You seem to simply ignore everyone that doesn’t want to tinker around with their GPU setup. No, your firefox won’t be able to do playback on a 4k screen out of the box; it’ll do that on your CPU by default. We even had submissions here about how broken those interfaces are, such that firefox and chrome disabled their GPU acceleration support on linux and only turned it back on for some cards after some time. Seems to be very stable…

I like linux, but I really dread its shortcomings for everything that is consumer-facing, as opposed to servers, where I can hack away and forget about UIs. And I know for certain how bad windows can be. I’ve set up my whole family on linux, so it can definitely work. I only have to explain to them again why blender on linux may just crash randomly.

(*) Yes, all of them, including anti-cheats, which won’t work on linux, or you’ll gamble on when they will ban you. I know some friends running hyperv emulation in KVM to get them to run in rainbow…

                                                                                    1. 1

                                                                                      taskmanager

                                                                                      The fact that taskmanager is one of your daily driver applications is quite funny.

                                                                                      … VS, VSCode

                                                                                      I certainly use more obscure applications than these, so it explains why I have more obscure problems.

And then what? i3? gnome? I could just use apple; at least they have a unix that works. “Just exchange the whole desktop experience and it might work again” sounds like a nice solution.

KDE has never been the most stable option; it has usually been the prettiest, though. I’m sorry about the issues you’re having, but at least you have options, unlike on windows.

And on linux you’ll have to pray somebody hears you through the white noise of people complaining and actually fixes stuff for you, and doesn’t leave it for years as a bug report in a horrible bugzilla instance. Or you start being the expert yourself, which is possible if you’ve got nothing else to do. (And then have fun bringing that fix upstream.) It’s not that simple. It’s nice to have the possibility of recompiling stuff yourself, but that doesn’t magically fix the problem, nor give you the knowledge of how to do so.

                                                                                      You have to pray someone hears you regardless. The point is that on linux you can actually fix it yourself, or switch the component out for something else. On windows you don’t have either option.

                                                                                      And then have fun bringing that fix upstream.

Usually much easier than trying to get someone else to fix it. Funnily enough, projects love bug fixes.

                                                                                      It’s not that simple.

                                                                                      I’ll gladly take not simple over impossible any day.

And that’s where I’m not sure it’s worth discussing any further, because you’re clearly downplaying linux GPU problems to “just tinker with it / just use wayland even if it breaks many programs” while complaining about the same on windows.

I genuinely have not had this mythical gpu worst-case disaster scenario you keep describing, so I’m not “downplaying” anything; I am just suggesting that maybe it’s your own fault. Really, I’ve used a very diverse set of hardware over the past few years. The point I’ve been making repeatedly is that “tinkering” to get something to work on linux is far easier than “copy-pasting random commands from blog posts which went dead 10 years ago until something works” on windows. When things break on linux, it’s a night-and-day difference in debugging experience compared to windows. You do need to know a little bit about how things work, but I’ve used windows for longer than I’ve used linux and I know less about how it works despite my best efforts to learn.

Your GPU problems seem to stem from the fact that you are using nouveau. Stop using nouveau. It won’t break anything; it will just mean you can stop complaining about everything being broken. It might even fix your plasma crashes when you connect a second monitor.

My experience may be different from yours, but the comments and votes here, plus my circle of friends (and many students at my department), speak for my experience.

I could also pull out a large suite of anecdotes, but that won’t really make an argument, so maybe let’s not go there?

                                                                                      But love it for simply working with games(*),

Some games not working on linux is not a linux problem, despite linux users’ absolute best efforts to make it their problem. Catastrophically anti-consumer and anti-privacy anti-cheat solutions are certainly not something you can easily make work on linux, but I’m not certain I want them to work.

scaling where linux falls flat on its face

I’ll take some scaling issues, while being able to actually use my computer and get it to do what I want, over lost work, lost time, and incredible stress.

No, your firefox won’t be able to do playback on a 4k screen out of the box; it’ll do that on your CPU by default.

                                                                                      Good to know you read the bit of my comment where I already addressed this.

Seems to be very stable…

                                                                                      Okay, at this point you’re close to just being insulting. Let me spell it out for you:

Needing to configure firefox to use hardware acceleration, not having an automatic solution for multi-DPI on X (only hacks), not being able to play games which employ anti-cheat solutions Orwell couldn’t have imagined, some UI inconsistencies, having to tinker sometimes: these are all insignificant problems compared to the issues I have with windows on a regular basis. You said it yourself: you use a web browser, two web-browser-based programs, three programs developed by microsoft to work on windows (although that’s never stopped them from being broken for me), a media player which statically links mplayer libraries that weren’t developed for windows, and a chat client. Your use case is vanilla.

My daily driver for work is VMWare workstation running on average about 3 VMs, plus firefox, emacs, teams, outlook, openvpn, and onenote. I sometimes also have to run a gpu-accelerated password cracker. For everything else I use a linux VM running arch and i3, because it’s so much faster to actually get shit done. Honestly, my use case isn’t that much more exciting either. I have daily issues with teams, outlook, and onenote (but those are not windows issues; it’s just that microsoft can’t for the life of them write anything that works). The windows UI regularly stops working after updates (I think this is due to the strict policies applied to the computer to harden it; these were done via group policy). The windows UI regularly crashes when connecting and disconnecting a thunderbolt dock. I have suspend and resume issues all the time, including the machine bluescreening coming out of suspend when multiple VMs are running. VM hardware passthrough has a tendency to break regularly, requiring a reboot.

To top it off, the windows firewall experience is crazy. Even though it has application-level control, I still can’t understand why you would want something so confusing to configure.

                                                                                      And I know for certain how bad windows can be.

                                                                                      And I think you’re used to it, to the point that you don’t notice it. The fact that linux is bad in different ways doesn’t necessarily mean it’s as bad.

or you’ll gamble on when they will ban you

                                                                                      Seems illegal. Maybe don’t give those companies money?

                                                                                  3. 1

All that obviously comes with the typical Microsoft problems. Like your license being bound to your account, where 2FA may even make it harder to get your account back, because apparently not using your license account primarily on windows is suspicious, and 2FA prevents them from “unlocking” your account again.

The same goes for all the tracking, the weird “Trophies” that are now present, and stuff like that. But not having to tinker with GPU stuff (or ending up with a system that has no desktop anymore at 3AM) is very appealing.

                                                                                    Can you recommend some?

http://qttabbar.sourceforge.net/ works ok.
I installed it in 2012 on windows 7 and haven’t reinstalled my windows since; the program still works except for one or two quirks.

                                                                            1. 4

                                                                              There’s so much wrong with this article I don’t know where to start.

“lisp-1 vs lisp-2”? One of the things lispers will forever make ado about.

I guess this depends on who you talk to. On the whole, among lispers the only people who don’t consider lisp-2 to be a mistake are the hardcore CL fans. Emacs Lisp is the only other lisp-2 with a large userbase, and if you talk to elisp users, most of them are annoyed or embarrassed about elisp being a lisp-2. If you look at the new lisps created this century, the only lisp-2 you’ll find is LFE.

                                                                              Not a Important Language Issue […] For another example, consider today’s PHP language. Linguistically, it is one of the most badly designed language, with many inconsistencies, WITH NO NAMESPACE MECHANISM, yet, it is so widely used that it is in fact one of the top 5 most used languages.

You can use this same argument to justify classifying literally any language issue as unimportant. This argument is so full of holes I’m honestly kind of annoyed at myself for wasting time refuting it.

                                                                              Now, as i mentioned before, this (single/multi)-value-space issue, with respect to human animal’s computing activities, or with respect to the set of computer design decisions, is one of the trivial, having almost no practical impact.

Anyone who has tried to use higher-order functions in emacs lisp will tell you this is nonsense. Having one namespace for “real data” and another namespace for “functions” means that any time you try to use a function as data you’re forced to deal with a mismatch that has no reason to exist: you have to write (mapcar #'add1 xs) rather than (mapcar add1 xs), and call a function stored in a variable through funcall rather than just calling it.

                                                                              I could go on but I won’t because if I were to find all the mistakes in this article I’d be here all day.

                                                                              1. 9

I guess this depends on who you talk to. On the whole, among lispers the only people who don’t consider lisp-2 to be a mistake are the hardcore CL fans.

                                                                                This only is doing a lot of work here, given that CL is where the majority of practice happens in the (admittedly tiny) Lisp world.

                                                                                1. 4

I know anecdote is not data, but I know far more people who work at Clojure shops than at Common Lisp shops. How would we quantify “majority of practice”?

                                                                                  1. 2

It’s more to do with whether you count Clojure as a dialect of Java or a dialect of Lisp.

                                                                                    Clojure proclaims itself a dialect of Lisp while maintaining largely Java semantics.

                                                                                    1. 2

                                                                                      CL programmers are so predictable with their tedious purity tests. I wish they’d move on past their grudges.

                                                                                      1. 3

                                                                                        Dude you literally wrote a purity rant upthread.

                                                                                        1. 2

                                                                                          Arguing about technical merits is different from regurgitating the same tired old textbook No True Scotsman.

                                                                                          1. 3

Look, (like everyone else) I wrote a couple of Scheme interpreters. I worked on porting a JVM when Sun was still around. I wrote a compiler for a JVM-targeting “Lisp-like” language and was even paid for doing it. I look at Clojure and immediately see all the same warts, and I know precisely why they are unavoidable. I realize some people look at these things and see Lisp lineage, but I can’t help seeing some sort of Kotlin with parens strewn through it.

And it’s not just me, really: half of the people who sat on RxRS were also on X3J13, and apparently no one had a split personality. So there’s no need to be hostile about the technical preferences of others. When you talk to your peers, it helps to build a more complicated theory of mind than “they are with me or they are wrong/malicious”.

                                                                                            1. 2

Sure, you can have whatever preferences you want. But if you go around unilaterally redefining terms like “lisp” and expecting everyone to be OK with it, well, that’s not going to work out so well.

                                                                                              1. 2

If you hang around long enough you’ll hear people calling just about anything “Lisp-like”: Forth, Python, Javascript, Smalltalk, you name it. Clojure is a rather major departure from lisps in both syntax and semantics, so this is not a super unusual point.

                                                                                2. 6

On the whole, among lispers the only people who don’t consider lisp-2 to be a mistake are the hardcore CL fans.

That folks who use a Lisp-1 prefer a Lisp-1 (to the extent that non-Common Lisp, non-Emacs Lisp Lisp-like languages such as Scheme or Clojure can fairly be termed ‘Lisps’ in the first place) is hardly news, though, is it? ‘On the whole, for pet owners the only people who don’t consider leashes to be a mistake are the hardcore dog owners.’

Emacs Lisp is the only other lisp-2 with a large userbase, and if you talk to elisp users, most of them are annoyed or embarrassed about elisp being a lisp-2.

                                                                                  Is that actually true? If so, what skill level are these users?

                                                                                  For my own part, my biggest problem with Emacs is that it was not written in Common Lisp. And I think that Lisp-N (because Common Lisp has more than just two namespaces, and users can easily add more) is, indeed, preferable to Lisp-1.

                                                                                  1. 4

                                                                                    Is that actually true? If so, what skill level are these users?

This is based on my experience of participating in the #emacs channel since 2005 or so. The only exceptions have been people coming to elisp from CL. This has held true across all skill levels I’ve seen, including maintainers of popular, widely-used packages.

                                                                                  2. 4

                                                                                    I dunno. I think the article is a bit awkward but I think the author is absolutely right: in practice, to the language user, it doesn’t really make a difference.

I am a full-time user of a lisp-1. When I use it, I appreciate the lack of sharp-quotes and the like when it’s time to use higher-order functions or to call variables as functions. The same language has non-hygienic macros, which Dick Gabriel rather famously claimed more or less require a separate function namespace, and I have almost never found my macro usage to be hampered.

At the same time, I was for three years a professional user of Elixir, a language with both syntactic macros and separate namespaces. I found it mildly convenient that I could declare a variable without worrying about shadowing a function, and I never found the syntax for function references or for invoking variables as funs to be particularly burdensome.

                                                                                    To the user, it really doesn’t have to matter one way or the other.

                                                                                  1. 1

                                                                                    I thought Gnome 40 was already considered stable. Why are new distro releases still shipping 3.x?

                                                                                    1. 4

Because Gnome 40 was released after the Debian 11 feature freeze.

                                                                                      1. 3

That’s right: the bullseye soft freeze was in February; GNOME 40 was released in March.

IMHO (as a Debian developer) we should have delayed the soft freeze and gotten 40 into bullseye, if there was sufficient confidence that 40 really was stable enough (we’d have had to evaluate that before 40 actually shipped).

                                                                                        1. 1

Does gnome not make it into backports?

                                                                                        2. 1

                                                                                          FWIW (not much, I know!), I think that this is exactly the right approach to take. There’s always one more update, one more feature. If you are aiming for stable releases (and I think Debian should), then you gotta draw a line in the sand at some point.

                                                                                          I love that Debian is run so well. I only wish more projects had a similar, healthy respect for stability.

                                                                                      1. 5

Hi, soatok!

                                                                                        As always, great post!

                                                                                        I’m one of the people involved in DSSE, so I wanted to share a little bit more about the rationale:

1. It’s been incredibly hard to keep people from actually designing things using JOSE, yet I’m strongly against such a footgun…
2. One of the reasons people don’t move to PASETO seems to be PAE itself, since it requires dealing with binary data and bit-flipping here and there.
3. Ironically, it appears that a big driver for not adopting either PASETO or DSSE is that neither has been “blessed” by the IETF…

                                                                                        I wonder what your take is about these things.

                                                                                        ETA: we do call DSSE Dizzy ourselves :)

                                                                                        1. 6
                                                                                          1. I didn’t even include JOSE in the article because I don’t want to accidentally lend credibility to it. If I see JOSE in an engagement, I brace for the worst. If I see DSSE, I calmly proceed.
                                                                                          2. I find that surprising, but good to know.
                                                                                          3. This doesn’t surprise me at all.

                                                                                          My remarks about DSSE leaving me dizzy were mostly seeing “Why not PASETO? Too opinionated” then “Why PAE? It’s good enough and well documented” but then not using PAE (which, IIRC, was a PASETO acronym). It’s not that you’re wrong, just that it’s confusing. I think something important got lost in the editorial process, but still exists inside the designers’ heads.

                                                                                          The only thing that I really dislike about DSSE is that you support, but never authenticate, some of your AAD.

Specifically KEYID. I understand the intent here (it’s spelled out clearly in the docs), but even if it’s never meant to be used for any sort of security consideration, the fact that you’re giving any flex at all over what key goes into envelope verification, while never requiring users to commit that value to the signature, seems like a miss to me. PASETO has unencrypted footers, but the footer is still used in the MAC/signature calculation.

Any attack based on swapping between multiple valid keys becomes significantly easier if the identifier for said key is never committed. The README remark about exclusive ownership seems to hint at awareness of this concern, but maybe the dots hadn’t been connected?

                                                                                          Having some mechanism of committing the signatures on the envelope to a given signature algorithm and/or public key seems like a good way to mitigate. You can include this in the signature calculation without storing it in the envelope, by the way.

Sophie Schmieg is fond of opining that (paraphrasing) cryptography keys aren’t merely byte strings; they’re byte strings plus configuration.

                                                                                          RSASSA-PSS with e=65537, MGF1+SHA256 and SHA256 is a very specific configuration for RSA. If I yeet a PEM-encoded RSA public key at you (which contains only (n, e) in its contents), what’s stopping me from using PKCS#1 v1.5?

                                                                                          Same thing with ECDSA: stick to named curves, so you don’t end up reimplementing CVE-2020-0601.
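                                                                                          To make the RSA point concrete, here’s a toy illustration using the pyca/cryptography API (my own example, nothing to do with DSSE itself). The padding mode lives entirely in verifier configuration, never in the key material, so the same (n, e) flows just as happily into either call:

                                                                                          ```python
                                                                                          from cryptography.exceptions import InvalidSignature
                                                                                          from cryptography.hazmat.primitives import hashes
                                                                                          from cryptography.hazmat.primitives.asymmetric import padding, rsa

                                                                                          key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
                                                                                          msg = b"attested payload"

                                                                                          # Signed under RSASSA-PSS with MGF1+SHA256...
                                                                                          sig = key.sign(
                                                                                              msg,
                                                                                              padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=32),
                                                                                              hashes.SHA256(),
                                                                                          )

                                                                                          # ...but nothing about the public key object stops a verifier from
                                                                                          # being configured for PKCS#1 v1.5 instead; the key itself is mute.
                                                                                          try:
                                                                                              key.public_key().verify(sig, msg, padding.PKCS1v15(), hashes.SHA256())
                                                                                          except InvalidSignature:
                                                                                              print("the key bytes alone never pinned the padding mode")
                                                                                          ```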

                                                                                          None of what I said is really a vulnerability in DSSE, necessarily, but it leaves room for things to go wrong.

                                                                                          Thus, if I were designing DSSE-v2, I’d make the following changes:

                                                                                          1. Always include KEYID in the tag calculation, and if it’s not there, include a 0 length. It’s a very cheap change to the protocol.
                                                                                          2. Include some representation of the public key (bytes + algorithm specifics) in the signature calculation. I wouldn’t store it in the envelope though (that might invite folks to parse it from the message).

                                                                                          This is a small tweak to what DSSE-v1 does, but it will provide insurance against implementation failure (provided a collision-resistant hash function is being consistently used).
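                                                                                          To sketch what I mean (hypothetical names of my own, with PASETO-style PAE, i.e. little-endian length prefixes; this is not anything from the actual DSSE spec):

                                                                                          ```python
                                                                                          import struct

                                                                                          def pae(pieces):
                                                                                              """PASETO-style Pre-Authentication Encoding: a count, then a
                                                                                              little-endian 64-bit length prefix per piece, so concatenation
                                                                                              is unambiguous and no piece can bleed into its neighbour."""
                                                                                              out = struct.pack("<Q", len(pieces))
                                                                                              for piece in pieces:
                                                                                                  out += struct.pack("<Q", len(piece)) + piece
                                                                                              return out

                                                                                          def signing_input(payload_type, payload, keyid, pubkey_repr):
                                                                                              """Hypothetical 'DSSE-v2' tag input: KEYID is always committed
                                                                                              (an absent KEYID becomes the empty string, i.e. a zero length),
                                                                                              and the public key bytes + algorithm parameters are committed
                                                                                              too, without ever being stored in the envelope."""
                                                                                              return pae([b"DSSEv2", payload_type, payload, keyid, pubkey_repr])
                                                                                          ```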

                                                                                          ETA: Her exact words were “A key should always be considered to be the raw key material alongside its parameter choices.”

                                                                                          1. 5

                                                                                            I didn’t even include JOSE in the article because I don’t want to accidentally lend credibility to it. If I see JOSE in an engagement, I brace for the worst. If I see DSSE, I calmly proceed.

                                                                                            Haha, I imagined as much! And I’ll take that as a cautious compliment.

                                                                                            I find that surprising, but good to know.

                                                                                            Yes, I personally don’t think that’s an end-all-be-all rationale, but you see how the industry can be very capricious about these things…

                                                                                            This doesn’t surprise me at all.

                                                                                            Likewise, but it is rather frustrating to see how many admittedly bad cryptographic systems have been designed and endorsed this way (RFC 4880 and the JOSE suite, to name a few). I wonder what a way forward in this department would be (one option would be to beg Scott for half a decade spent in IETF meetings? :P)

                                                                                            My remarks about DSSE leaving me dizzy were mostly seeing “Why not PASETO? Too opinionated” then “Why PAE? It’s good enough and well documented” but then not using PAE (which, IIRC, was a PASETO acronym). It’s not that you’re wrong, just that it’s confusing. I think something important got lost in the editorial process, but still exists inside the designers’ heads.

                                                                                            Fair enough! I think we are due a complete review of what we wrote in there. The very early implementations of DSSE were PASETO’s PAE verbatim…

                                                                                            The only thing that I really dislike about DSSE is that you support, but never authenticate, some of your AAD.

                                                                                            Specifically KEYID. I understand the intent here (it’s spelled out clearly in the docs), but even if it’s never meant to be used for any sort of security consideration, the fact that you’re giving any flex at all over what key goes into envelope verification, while never requiring users to commit that value to the signature, seems like a miss to me. PASETO has an unencrypted footer, but it’s still used in the MAC/signature calculation.

                                                                                            Most definitely; this is something that we wanted to deal with in a separate layer (that’s why the payload fields are so minimal), that separate layer being in-toto layout fields and TUF metadata headers. I’m still wary of this fact though, and I’d love to discuss it more.

                                                                                            Any attack based on swapping between multiple valid keys becomes significantly easier if the identifier for said key is never committed. The README remark about exclusive ownership seems to hint at awareness of this concern, but maybe the dots hadn’t been connected?

                                                                                            Agreed, this is something we spent some time thinking hard about, and although I don’t think I can confidently say “we have an absolute answer to this”, it appears to me that verifying these fields in a separate layer may indeed avoid EO/DSKS-style attacks…

                                                                                            Having some mechanism of committing the signatures on the envelope to a given signature algorithm and/or public key seems like a good way to mitigate. You can include this in the signature calculation without storing it in the envelope, by the way.

                                                                                            Absolutely! A missing piece here is that in TUF/in-toto we store the algorithm in a separate payload that contains the public keys (e.g., imagine them as parent certificates). This is something that we changed in both systems after a security review from Cure53 many a year ago (mostly to avoid attacker-controlled crypto-parameter fields like in JWT).

                                                                                            Sophie Schmieg is fond of opining that (paraphrasing) cryptography keys aren’t merely byte strings, they’re byte strings plus configuration.

                                                                                            Hard agree!

                                                                                            RSASSA-PSS with e=65537, MGF1+SHA256 and SHA256 is a very specific configuration for RSA. If I yeet a PEM-encoded RSA public key at you (which contains only (n, e) in its contents), what’s stopping me from using PKCS#1 v1.5?

                                                                                            Exactly. We have seen this happen over and over, even in supposedly standardized algorithms (as you point out with CVE-2020-0601 below).

                                                                                            Same thing with ECDSA: stick to named curves, so you don’t end up reimplementing CVE-2020-0601.

                                                                                            None of what I said is really a vulnerability in DSSE, necessarily, but it leaves room for things to go wrong.

                                                                                            Absolutely, and part of me wonders how the “generalization” of the protocol would fare without all the implicit assumptions I outlined above. FWIW, I’d definitely give PASETO first-class consideration in any new system of mine.

                                                                                            Thus, if I were designing DSSE-v2, I’d make the following changes:

                                                                                            Always include KEYID in the tag calculation, and if it’s not there, include a 0 length. It’s a very cheap change to the protocol.

                                                                                            Definitely, duly noted, and I wonder how hard it’d be to actually make it into v1.

                                                                                            Include some representation of the public key (bytes + algorithm specifics) in the signature calculation. I wouldn’t store it in the envelope though (that might invite folks to parse it from the message).

                                                                                            This may be a little bit more contentious, considering what I said above, but I do see the value in avoiding dependencies between layers. I’d also be less concerned about having to fix the same thing in two places…

                                                                                            This is a small tweak to what DSSE-v1 does, but it will provide insurance against implementation failure (provided a collision-resistant hash function is being consistently used).

                                                                                            Yup! Then again, I wonder what the delta between PASETO and this would be afterwards :) (modulo encryption, that is)

                                                                                            Lastly, I wanted to commend you (again) for your writing! I love your blog and how accessible it is to people through all ranges of crypto/security expertise!

                                                                                          2. 1

                                                                                            To avoid dealing with binary, why not just prepend the decimal length of data, followed by a colon? I think this approach originated with djb’s netstrings, and it was also adopted by Rivest’s canonical S-expressions.

                                                                                            It turns foo into 3:foo and concatenates bar, baz and quux into 3:bar3:baz4:quux. Easy to emit, easy to ingest.

                                                                                            Add on parentheses for grouping, and you have a general-purpose representation for hierarchical data …
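                                                                                            A rough sketch of the scheme as described here (Rivest-style atoms, without djb’s trailing comma; toy code, no error handling):

                                                                                            ```python
                                                                                            def to_netstring(data: bytes) -> bytes:
                                                                                                # b"foo" -> b"3:foo"
                                                                                                return str(len(data)).encode() + b":" + data

                                                                                            def from_netstrings(buf: bytes) -> list:
                                                                                                # b"3:bar3:baz4:quux" -> [b"bar", b"baz", b"quux"]
                                                                                                items = []
                                                                                                while buf:
                                                                                                    length, _, rest = buf.partition(b":")
                                                                                                    items.append(rest[:int(length)])
                                                                                                    buf = rest[int(length):]
                                                                                                return items

                                                                                            assert from_netstrings(b"".join(map(to_netstring, [b"bar", b"baz", b"quux"]))) \
                                                                                                == [b"bar", b"baz", b"quux"]
                                                                                            ```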

                                                                                          1. 28

                                                                                            I just can’t shake the feeling that Kubernetes is Google externalizing their training costs to the industry as a whole (and I feel the same applies to Go).

                                                                                            1. 9

                                                                                              Golang is great for general application development, IME. I like the culture of explicit error handling with thoughtful error messages, the culture of debugging with fast unit tests, when possible, and the culture of straightforward design. And interfaces are great for breaking development into components which can be developed in parallel. What don’t you like about it?

                                                                                              1. 12

                                                                                                It was initially the patronizing quote from Rob Pike that turned me off Go. I’m also not a fan of gofmt [1] (and I’m not a fan of opinionated software in general, unless I’m the one controlling the opinions [2]). I’m also unconvinced about the whole “unit testing” thing [5]. Also, it’s from Google [3]. I rarely mention it, because it goes against the current zeitgeist (especially at the Orange Site), and really, what can I do about it?

                                                                                                [1] I’m sorry, but opening braces go on their own line. We aren’t developing on 24 line terminals anymore, so stop shoving your outdated opinions in my face.

                                                                                                [2] And yes, I realize I’m being hypocritical here.

                                                                                                [3] Google is (in my opinion, in case that’s not apparent) shoving what they want on the entire industry to a degree that Microsoft could only dream of. [4]

                                                                                                [4] No, I’m not bitter. Really!

                                                                                                [5] As an aside, but up through late 2020, my department had a method of development that worked (and it did not involve anything resembling a “unit test”)—in 10 years we only had two bugs get to production. In the past few months there’s been a management change and a drastic change in how we do development (Agile! Scrum! Unit tests über alles! We want push button testing!) and so far, we’ve had four bugs in production.

                                                                                                Way to go!

                                                                                                I should also note that my current manager retired, the other developer left for another job, and the QA engineer assigned to our team also left for another job (but has since come back because the job he moved to was worse, and we could really use him back in our office). So nearly the entire team was replaced back around December of 2020.

                                                                                                1. 11

                                                                                                  I can’t even tell if this is a troll post or not.

                                                                                                  1. 1

                                                                                                    I can assure you that I’m not intentionally trolling, and those are my current feelings.

                                                                                                  2. 2

                                                                                                    I’m sorry, but opening braces go on their own line. We aren’t developing on 24 line terminals anymore, so stop shoving your outdated opinions in my face.

                                                                                                    I use a portrait monitor with a full-screen Emacs window for my programming, and I still find myself wishing for more vertical space when programming in curly-brace languages such as Go. And when I am stuck on a laptop screen I am delighted when working on a codebase which does not waste vertical space.

                                                                                                    Are you perhaps younger than I am, with very small fonts configured? I have found that as I age I find a need for large and larger fonts. Nothing grotesque yet, but I went from 9 to 12 to 14 and, in a few places, 16 points. All real 1/72” points, because I have my display settings configured that way. 18-year-old me would have thought I am ridiculous! Granted, you’ve been at your current employer at least 10 years, so I doubt you are 18🙂

                                                                                                    I’m also unconvinced about the whole “unit testing” thing … my department had a method of development that worked (and it did not involve anything resembling a “unit test”)—in 10 years we only had two bugs get to production. In the past few months there’s been a management change and a drastic change in how we do development (Agile! Scrum! Unit tests über alles! We want push button testing!) and so far, we’ve had four bugs in production.

                                                                                                    I suspect that the increase in bugs has to do with the change in process rather than the testing regime. Adding more tests on its own can only lead to more bugs if incorrect tests flag correct behaviour as bugs (leading to buggy ‘bugfixes,’ or rework to fix the tests), if correct tests for unimportant bugs lead to investing resources inefficiently, or if the increased emphasis leads to worse code architecture or to rework rewriting old code to conform to the new architecture (I think I covered all the bases here). OTOH, changing development processes almost inevitably leads to poor outcomes in the short term: there is a learning curve; people and secondary processes must adapt &c.

                                                                                                    That is worth it if the long-term outcomes are sufficiently better. In the specific case of unit testing, I think it is worth it, especially in the long run and especially as team size increases. The trickiest thing about it in my experience has been getting the units right. I feel pretty confident about the right approach now, but … ask me in a decade!

                                                                                                    1. 2

                                                                                                      Are you perhaps younger than I am, with very small fonts configured?

                                                                                                      I don’t know, you didn’t give your age. I’m currently 52, and my coworkers (back when I was in the office) often complained about the small font size I use (and have used).

                                                                                                      I suspect that the increase in bugs has to do with the change in process rather than the testing regime.

                                                                                                      The code (and it’s several different programs that comprise the whole thing) was not written with unit testing in mind (even though it was initially written in 2010, it’s in C89/C++98, and the developer who wrote it didn’t believe in unit tests). We do have a regression test that tests end-to-end [1] but there are a few cases that as of right now require manual testing [2], which I (as a dev) can do, but generally QA does a more in-depth testing. And I (or rather, we devs did, before the major change) work closely with the QA engineer to coordinate testing.

                                                                                                      And that’s just the testing regime. The development regime is also being forcibly changed.

                                                                                                      [1] One program to generate the data required, and another program that runs the eight programs required (five of which aren’t being tested but need to be endpoints our stuff talks to) and runs through the 15,800+ tests we have (it takes around two minutes). It’s gotten harder to add tests to it (the regression test is over five years old) due to the nature of how the cases are generated (automatically, and not all generated cases are technically “valid” in the sense that we’d see them in production).

                                                                                                      [2] Our business logic module queries two databases at the same time (via UDP—they’re DNS queries), so how does one automate testing the cases where result A returns before result B, result B returns before result A, A returns but B times out, or B returns and A times out? The new manager wants “push button testing”.

                                                                                                      1. 1

                                                                                                        [2] Our business logic module queries two databases at the same time (via UDP—they’re DNS queries), so how does one automate testing the cases where result A returns before result B, result B returns before result A, A returns but B times out, or B returns and A times out? The new manager wants “push button testing”

                                                                                                        Here are three options, but there are many others:

                                                                                                        1. Separate the networking code from the business logic, test the business logic
                                                                                                        2. Have the business logic send to a test server running on localhost, have it send back results ordered as needed (see the sketch below)
                                                                                                        3. Change the routing configuration or use netfilter to rewrite the requests to a test server, have it send back results ordered as needed.

                                                                                                        Re-ordering results from databases is a major part of what Jepsen does; you could take ideas from there too.
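                                                                                                        For what it’s worth, option 2 can stay quite small. A minimal sketch (not real DNS, just a UDP echo with a delay knob, which is enough to force “A before B”, “B before A”, or a timeout; ports and payloads are made up):

                                                                                                        ```python
                                                                                                        import socket, threading, time

                                                                                                        def fake_endpoint(port, delay, reply=b"canned-response"):
                                                                                                            # Test double for one of the two databases: answer every
                                                                                                            # UDP query after `delay` seconds. Set delay past the
                                                                                                            # client timeout to simulate "A returns but B times out".
                                                                                                            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                                                                                                            sock.bind(("127.0.0.1", port))
                                                                                                            while True:
                                                                                                                _, addr = sock.recvfrom(512)
                                                                                                                time.sleep(delay)
                                                                                                                sock.sendto(reply, addr)

                                                                                                        # e.g. database A answers fast, database B answers slow:
                                                                                                        threading.Thread(target=fake_endpoint, args=(5301, 0.0), daemon=True).start()
                                                                                                        threading.Thread(target=fake_endpoint, args=(5302, 0.5), daemon=True).start()
                                                                                                        ```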

                                                                                                        1. 1
                                                                                                          1. Even if that was possible (and I wish it was), I would still have to test the networking code to ensure it’s working, per the new regime.
                                                                                                          2. That’s what I’m doing
                                                                                                          3. I’m not sure I understand what you mean by “routing configuration”, but I do understand what “netfilter” is, and my response to that is: the new regime wants “push button testing,” and if there’s a way to automate that, then that is an option.
                                                                                                          1. 2
                                                                                                            1. Yes, of course the networking code would still need to be tested.

                                                                                                              Ideally, the networking code would have its own unit tests. And, of course, unit tests don’t replace integration tests. Test pyramid and such.

                                                                                                            2. 🚀

                                                                                                            3. netfilter can be automated. It’s an API.

                                                                                                            What’s push button testing?

                                                                                                            1. 1

                                                                                                              You want to test the program. You push a button. All the tests run. That’s it. Fully automated testing.

                                                                                                              1. 1

                                                                                                                👌🏾

                                                                                                                Everything I’ve worked on since ~2005 has been fully and automatically tested via continuous integration. IMHO it’s a game changer.

                                                                                                    2. 1

                                                                                                      Would love to hear about your prior development method. Did adopting the new practices have any upsides?

                                                                                                      1. 4

                                                                                                        First off, our stuff is a collection of components that work together. There are two front-end pieces (one for SS7 traffic, one for SIP traffic) that then talk to the back-end (that implements the business logic). The back-end makes parallel DNS queries [1] to get the required information, mucks with the data according to the business logic, then returns data to the front-ends to ultimately return the information back to the Oligarchic Cell Phone Companies. Since this process happens as a call is being placed, we are on the Oligarchic Cell Phone Companies’ network, and we have some pretty short time constraints. And due to this, not only do we have some pretty severe SLAs, but any updates have to be approved 10 business days before deployment by said Oligarchic Cell Phone Companies. As a result, we might get four deployments per year [2].

                                                                                                        And the components are written in a combination of C89, C++98 [3], C99, and Lua [4].

                                                                                                        So, now that you have some background, our development process. We do trunk based development (all work done on one branch, for the most part). We do NOT have continuous deployment (as noted above). When working, we developers (which never numbered more than three) would do local testing, either with the regression test, or another tool that allows us to target a particular data configuration (based off the regression test, which starts eight programs, five of which are just needed for the components being tested). Why not test just the business logic? Said logic is spread throughout the back-end process, intermixed with all the I/O it does (it needs data from multiple sources, queried at the same time).

                                                                                                        Anyway, code is written, committed (main line), tested, fixed, committed (main line), repeat, until we feel it’s good. And the “tested” part not only includes us developers, but also QA at the same time. Once it’s deemed working (using both regression testing and manual testing), we then officially pass it over to QA, who walks it down the line from the QA servers to the staging servers and finally (once we get permission from the Oligarchic Cell Phone Companies) into production, where not only devops is involved, but also QA and the developer whose code is being installed (at 2:00 am Eastern, Tuesday, Wednesday or Thursday, never Monday or Friday).

                                                                                                        Due to the nature of what we are dealing with, testing at all is damn near impossible (or rather, hideously expensive, because getting actual cell phone traffic through the lab environment involves, well, being a phone company (which we aren’t), very expensive and hard-to-get equipment, and a very expensive and hard-to-get laboratory setup (that will meet FCC regulations, blah blah yada yada)), so we do the best we can. We can inject messages as if they were coming from cell phones, but it’s still not a real cell phone, so there is testing done during deployment into production.

                                                                                                        It’s been a 10 year process, and it has gotten better until this past December.

                                                                                                        Now it’s all Agile, scrum, stories, milestones, sprints, and unit testing über alles! As I told my new manager, why bother with a two-week sprint when the Oligarchic Cell Phone Companies have a two-year sprint? It’s not like we ever did continuous deployment. Could more testing be done automatically? I’m sure, but there are aspects that are very difficult to test automatically [5]. Also, more branch development. I wouldn’t mind this so much, except we’re using SVN (for reasons that are mostly historical at this point) and branching is … um … not as easy as in git. [6] And the new developer sent me diffs to ensure his work passes the tests. When I asked him why he didn’t check the new code in, he said he was told by the new manager not to, as it could “break the build.” But we’ve broken the build before this: all we do is just fix the code and check it in [8]. But no, no “breaking the build”, even though we don’t do continuous integration, nor continuous deployment, and what deployment process we do have locks the build number from Jenkins of what does get pushed (or considered “gold”).

                                                                                                        Is there any upside to the new regime? Well, I have rewritten the regression test (for the third time now) to include such features as “delay this response” and “did we not send a notification to this process”. I should note that this is code for us, not for our customer, which, need I remind people, is the Oligarchic Cell Phone Companies. If anyone is interested, I have spent June and July blogging about this (among other things).

                                                                                                        [1] Looking up NAPTR records to convert phone numbers to names, and another set to return the “reputation” of the phone number.

                                                                                                        [2] It took us five years to get one SIP header changed slightly by the Oligarchic Cell Phone Companies to add a bit more context to the call. Five years. Continuous deployment? What’s that?

                                                                                                        [3] The original development happened in 2010, and the only developer at the time was a) very conservative, b) didn’t believe in unit tests. The code is not written in a way to make it easy to unit test, at least, as how I understand unit testing.

                                                                                                        [4] A prototype I wrote to get my head around parsing SIP messages that got deployed to production without my knowing it by a previous manager who was convinced the company would go out of business if it wasn’t. This was six years ago. We’re still in business, and I don’t think we’re going out of business any time soon.

                                                                                                        [5] As I mentioned, we have multiple outstanding requests to various data sources, and other components that are notified on a “fire and forget” mechanism (UDP, but it’s all on the same segment) that the new regime want to ensure gets notified correctly. Think about that for a second, how do you prove a negative? That is, something that wasn’t supposed to happen (like a component not getting notified) didn’t happen?

                                                                                                        [6] I think we’re the only department left using SVN—the rest of the company has switched to git. Why are we still on SVN? 1) Because the Solaris [7] build servers aren’t configured to pull from git yet and 2) the only redeeming feature of SVN is the ability to checkout a subdirectory, which given the layout of our repository, and how devops want the build servers configured, is used extensively. I did look into using git submodules, but man, what a mess. It totally doesn’t work for us.

                                                                                                        [7] Oh, did I neglect to mention we’re still using Solaris because of SLAs? Because we are.

                                                                                                        [8] Usually, it’s Jenkins that breaks the build, not the code we checked in. Sometimes, the Jenkins checkout fails. Devops has to fix the build server [7] and try the call again.

                                                                                                        1. 2

                                                                                                          As a result, we might get four deployments per year [2]

                                                                                                          AIUI most agile practices are to decrease cycle time and get faster feedback. If you can’t, though, then you can’t! Wrong practices for the wrong context.

                                                                                                          I feel for you.

                                                                                                          1. 1

                                                                                                            Thank you! More grist for my “unit testing is fine in its place” mill.

                                                                                                            Also: hiring new management is super risky.

                                                                                                  1. 4

                                                                                                    Pleased to find that Sandstorm blocks this, by two different mechanisms (no /proc and no userns).

                                                                                                    user namespaces in particular introduced a massive amount of new attack surface, and when they came out there was a flurry of privilege escalation vulnerabilities related to them – all sorts of kernel interfaces that were previously only accessible to root (such as umount() in this case) were all of a sudden potentially exposed to untrusted users.

                                                                                                    1. 2

                                                                                                      Does Sandstorm block the attack or only the PoC?

                                                                                                      1. 2

                                                                                                        IIUC it should block all forms of the attack – this requires being able to access virtual files backed by seq_file, and we don’t expose any of those to apps – just disk files and /dev/{null,zero,{u,}random}

                                                                                                        Edit: we do bind-mount /proc/cpuinfo, which I’m not sure is backed by seq_file, but the app shouldn’t be able to affect its contents, so I would be very surprised if that changed anything.

                                                                                                      2. 2

                                                                                                        You’re correct, but I think it’s important to add that in the long term user namespaces will enhance security a great deal.

                                                                                                      1. 7

                                                                                                        As I came to understand some time ago, common language is just a tool to communicate ideas between individuals, not a place where perfection needs to be attained at all costs. That’s especially hard for us programmers, who normally work in a context of almost supernatural perfection.

                                                                                                        Sure, saying “HTTP API” is technically better, but if saying that we are going to build a RESTful API makes the concepts clear to the whole team (from the new intern to the snarky senior), that’s what we will use.

                                                                                                        As the article presents, since APIs implemented exactly as described in the original paper (“truly” RESTful) call themselves something other than REST, there was never really any confusion to begin with.

                                                                                                        This means that if someone nowadays says REST API they actually mean HTTP API, that’s it. And if someone says Hydra, HyperMedia or ChockoMango™ API, then the other person might simply say “can you show me what you mean?” and move on from there.

                                                                                                        1. 9

                                                                                                          saying that we are going to build a RESTful API makes the concepts clear to the whole team

                                                                                                          I feel like it doesn’t. Most people just think of HTTP methods bolted on to any kind of API. If you say HTTP API, and maybe clarify by saying “yknow, an API designed to respond to HTTP messages”, it would be considerably clearer since there’ll be no conflict between people’s varying understandings of REST.

                                                                                                          1. 7

                                                                                                            Right, exactly. If someone says a “REST API” I don’t know if they actually mean REST as originally defined or if they’re just using it to mean ‘any API that uses HTTP as a transport’.

                                                                                                            1. 9

                                                                                                              I made this mistake once. I was asked to design a REST API, and so I did. Turns out they wanted RPC over http.

                                                                                                              1. 2

                                                                                                                Turns out they wanted RPC over http.

                                                                                                                There are a ton of folks who seem to think that REST == JSON-over-HTTP, and honestly don’t understand why RPC with JSON-over-HTTP isn’t RESTful.

                                                                                                                1. 2

                                                                                                                LOL that made me dribble my drink, I’m so sorry

                                                                                                                2. 2

                                                                                                                  Definitely in the minority there though.

                                                                                                                Not to say that you are wrong, but you are underestimating how many people actually think like that, at least from my own experience. This is supported even further by the fact that this post was even made in the first place.

                                                                                                            1. 10

                                                                                                              One of the common complaints about Lisp is that there are no libraries in the ecosystem. As you see, five libraries are used just in this example for such things as encoding, compression, getting Unix time, and socket connections.

                                                                                                              Wait are they really making an argument of “we used a library for getting the current time, and also for sockets” as if that’s a good thing?

                                                                                                              1. 16

                                                                                                                Lisp is older than network sockets. Maybe it intends to outlast them? ;)

                                                                                                                More seriously, Lisp is known for high-level abstraction and is perhaps even more general than what we usually call a general purpose language. I could see any concrete domain of data sources and effects as an optional addition.

                                                                                                                In the real world, physics constants are in the standard library. In mathematics, they’re a third party package.

                                                                                                                1. 12

                                                                                                                  Lisp is older than network sockets.

                                                                                                                  Older than time, too.

                                                                                                                  1. 1

                                                                                                                    Common Lisp is not older than network sockets, so the point is moot I think.

                                                                                                                    1. 1

                                                                                                                      I don’t think so. It seems to me that it was far from obvious in 1994 that Berkeley sockets would win to such an extent and not be replaced by some superior abstraction. Not to mention that the standard had been in the works for a decade at that point.

                                                                                                                  2. 5

                                                                                                      Because when the next big thing comes out it’ll be implemented as just another library, and won’t result in ecosystem upheaval. I’m looking at you, Python, Perl, and Ruby.

                                                                                                                    1. 4

                                                                                                                      Why should those things be in the stdlib?

                                                                                                                      1. 4

                                                                                                                        I think that there are reasons to not have a high-level library for manipulating time (since semantics of time are Complicated, and moving it out of stdlib and into a library means you can iterate faster). But I think sockets should be in the stdlib so all your code can have a common vocabulary.

                                                                                                                        1. 5

                                                                                                                          reasons to not have a high-level library for manipulating time

                                                                                                                          I actually agree with this; it’s extraordinarily difficult to do this correctly. You only have to look to Java for an example where you have the built-in Date class (absolute pants-on-head disaster), the built-in Calendar which was meant to replace it but was still very bad, then the 3rd-party Joda library which was quite good but not perfect, followed by the built-in Instant in Java 8 which was designed by the author of Joda and fixed the final few quirks in it.

                                                                                                                          However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                                                                                                                          1. 7

                                                                                                                            Common Lisp has (some) date and time support in the standard library. It just doesn’t use Unix time, so if you need to interact with things that use the Unix convention, you either need to do the conversion back and forth, or just use a library which implements the Unix convention. Unix date and time format is not at all universal, and it had its own share of problems back when the last version of the Common Lisp standard was published (1994).

                                                                                                                            It’s sort of the same thing with sockets. Just like, say, C or C++, there’s no support for Berkeley sockets in the standard library. There is some history to how and why the scope of the Common Lisp standard is the way that it is (it’s worth noting that, like C or C++ and unlike Python or Go, the Common Lisp standard was really meant to support independent implementation by vendors, rather than to formalize a reference implementation) but, besides the fact that sockets were arguably out of scope, it’s only one of the many networking abstractions that platforms on which Common Lisp runs support(ed).

                                                                                                                            We could argue that in 2021 it’s probably safe to say that BSD sockets and Unix timestamps have won and they might as well get imported in the standard library. But whether that’s a good idea or not, the sockets and Unix time libraries that already exist are really good enough even without the “standard library” seal of approval – which, considering that the last version of the standard is basically older than Spice Girls, doesn’t mean much anyway. Plus who’s going to publish another version of the Common Lisp standard?

                                                                                                                            To defend the author’s wording: their remark is worth putting into its own context – Common Lisp had a pretty difficult transition from large commercial packages to free, open source implementations like SBCL. Large Lisp vendors gave you a full on CL environment that was sort of on-par with a hosted version of a Lisp machine’s environment. So you got not just the interpreter and a fancy IDE and whatever, you also got a GUI toolkit and various glue layer libraries (like, say, socket libraries :-P). FOSS versions didn’t come with all these goodies and it took a while for FOSS alternatives to come up. But that was like 20+ years ago.

                                                                                                                            1. 2

                                                                                                                              However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                                                                                                                              GET-UNIVERSAL-TIME is in the standard. It returns a universal time, which is the number of seconds since midnight, 1 January 1900.
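                                                                                                              The two conventions are a constant offset apart (70 years including 17 leap days; 1900 was not a leap year), so bridging them is one subtraction. A sketch in Python for concreteness:

                                                                                                              ```python
                                                                                                              import time

                                                                                                              # Seconds between the CL/NTP epoch (1900-01-01) and Unix (1970-01-01).
                                                                                                              CL_UNIX_OFFSET = 2208988800

                                                                                                              def universal_to_unix(universal):
                                                                                                                  return universal - CL_UNIX_OFFSET

                                                                                                              def unix_to_universal(unix):
                                                                                                                  return unix + CL_UNIX_OFFSET

                                                                                                              now = int(time.time())
                                                                                                              assert universal_to_unix(unix_to_universal(now)) == now
                                                                                                              ```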

                                                                                                                              1. 2

                                                                                                                                Any language could ignore an existing standard and introduce their own version with its own flaws and quirks, but only Common Lispers would go so far as to call the result “universal”.

                                                                                                                              2. 1

                                                                                                                                However, “a function to get the number of seconds elapsed since epoch” is not at all high-level and does not require decades of iteration to get right.

                                                                                                                Actually, it doesn’t support leap seconds, so across a leap second the value repeats.

                                                                                                                              3. 1

                                                                                                                                Yeah but getting the current unix time is not Complicated, it’s just a call to the OS that returns a number.

                                                                                                                                1. 6

                                                                                                                                  What if you’re not running on Unix? Or indeed, on a system that has a concept of epoch? Note that the CL standard has its own epoch, unrelated (AFAIK) to OS epoch.

                                                                                                                  Bear in mind that Common Lisp as a standard, and a language, is designed to be portable to a higher standard than “any flavour of Unix” or “every version of Windows since XP” ;-)

                                                                                                                                  1. 1

                                                                                                                                    Sure, but it’s possible they were using that library elsewhere for good reasons.

                                                                                                                                2. 3

                                                                                                                                  In general, I really appreciate having a single known-good library promoted to stdlib (the way golang does). Of course, there’s the danger that you standardise something broken (I am also a ruby dev, and quite a bit of the ruby stdlib was full of footguns until more-recent versions).

                                                                                                                                  1. 1

                                                                                                                    Effectively that’s what happened, though. The libraries for threading, sockets, etc. converged to de facto standards.

                                                                                                                              1. 9

                                                                                                                Sure, martin@com is technically valid. Who uses that? No one. Things like multiple @s for routing or UUCP addresses have long since been deprecated. How many people have tried to register on your service with such an email address and failed? That number is most probably exactly zero.

                                                                                                                Case sensitivity was certainly a mistake. “Oops, I emailed my email to glenn@ instead of Glenn@”. What a horrible idea, and everyone ignoring this part of the RFC is a good thing. Maybe Postfix can add RFCLY_CORRECT or RFC_ME_HARDER.

                                                                                                                                These RFCs aren’t stone tablets from the mountain; many of them are old, full of legacy cruft, and sometimes contain bad or even outright stupid ideas.

                                                                                                                                However, at the end of the day, it’s good to remember that your initial regex along the lines of [a-z0-9.-]+@[a-z0-9.-]+.[a-z0-9]+ is quite simply… wrong.

                                                                                                                “Wrong” in what way? If the RFC is all you care about and you don’t look at anything else, sure. But the world isn’t that simple. Someone registering an account on your service with an invalid email is annoying because you have no way to contact them, and you also don’t want to bother them with a mandatory email verification step (these sorts of things tend to lower subscription rates, and thus cost real money). Guaranteeing that your email can actually be delivered is impossible, but we can at least make an attempt.

                                                                                                                Things like not allowing IDN or UTF-8 are annoying and I wish we’d finally move to a UTF-8 world, but this too has its reasons; not everything accepts this (yet), and there are some security concerns with homoglyphs as well; martin@ and mаrtin@ are not the same address, and neither are märtin@ and märtin@, or martin@ and martin​@. You can probably do all sorts of creative things with various control characters as well.

                                                                                                                                So that regexp only accepts emails that are guaranteed to be correct, can be accepted by the entire pipeline now and in the future, and doesn’t suffer from some tricky potential security issues. There is some sense in that, and it’s not necessarily “wrong”.

                                                                                                                                Anyway, what’s really needed is a simplification of this entire circus. The complexities are actually far worse than what’s outlined in this article if you want to parse things like From: and To: headers.

                                                                                                                                1. 4

                                                                                                                                  So that regexp only accepts emails that are guaranteed to be correct, can be accepted by the entire pipeline now and in the future, and doesn’t suffer from some tricky potential security issues. There is some sense in that, and it’s not necessarily “wrong”.

                                                                                                                                  I get what you are saying, but that regexp also rejects things such as +, which makes as much sense to me as rejecting - or ., or q for that matter.

                                                                                                                                  1. 2

                                                                                                                                    Yeah, rejecting + is too strict.

                                                                                                                                1. 12

                                                                                                                                  Why does Fn belong on the left of Control?

                                                                                                                                  I feel the opposite. Doubly so if you’re running not-macOS. I use control much more frequently than Fn and it’s much easier to hit if it’s all the way at the end.

                                                                                                                                  1. 2

                                                                                                                                    Control being left of Fn means you need less reach to press it along with most other keys, which is particularly important a) for people with small hands (like me) and b) on a laptop where there isn’t always a control key on the right.

                                                                                                                                    I don’t mind needing to use two hands to use Fn + another key, but that would be annoying with control.

                                                                                                                                    1. 2

                                                                                                                                      Honestly, Control belongs immediately outboard of Space: it’s the most commonly-used modifier, so it should be typed with the strongest fingers. Then Alt, then Super and finally Hyper, with Function past that if present.

                                                                                                                                      1. 5

                                                                                                                                        On a Mac, Command is used more frequently, and that is right next to Space; but I agree that on non-Macs it’d make sense for Control to be there.

                                                                                                                                    2. 1

                                                                                                                                      I prefer it on the right. It may be to do with hand size/shape or just being used to it. What Lenovo gets right is making it configurable, so the placement is not really an issue.

                                                                                                                                    1. 1

                                                                                                                                      I disagree that pixels are the correct measure: pixels are different sizes on different devices. At the end of the day, every font is registered on an eyeball, and that size is what matters.

                                                                                                                                      But I can’t imagine that specifying fonts in seconds of arc is going to take off.

                                                                                                                                      As a fallback, I would like fonts to be (roughly) the same size on all of my monitors, so I think specifying sizes in real points (i.e., in real fractions of an inch) is appropriate.
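
                                                                                                                                      For a sense of the magnitudes involved, a back-of-the-envelope sketch in Go (the 24-inch viewing distance is an assumption, not anything from the thread):

                                                                                                                                          package main

                                                                                                                                          import (
                                                                                                                                              "fmt"
                                                                                                                                              "math"
                                                                                                                                          )

                                                                                                                                          func main() {
                                                                                                                                              // A "real" 12pt glyph is 12/72 of an inch tall, regardless of pixel density.
                                                                                                                                              heightInch := 12.0 / 72.0
                                                                                                                                              distanceInch := 24.0 // assumed viewing distance
                                                                                                                                              // Visual angle subtended by the glyph: 2*atan(h/2d), converted to arc minutes.
                                                                                                                                              radians := 2 * math.Atan(heightInch/(2*distanceInch))
                                                                                                                                              arcMinutes := radians * 180 / math.Pi * 60
                                                                                                                                              fmt.Printf("%.1f arc minutes\n", arcMinutes) // ≈ 23.9
                                                                                                                                          }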

                                                                                                                                      1. 7

                                                                                                                                        Yes, I am in a similarly lost state. I’ve written about this from the “shell” perspective here.

                                                                                                                                        I do want to invest my time into an editor/shell platform, but all existing ones provide only pieces of what I need. Emacs has the right “is an OS” mindset, UI idioms, and daemon mode; VS Code has a dependable extension ecosystem and an adequate configuration workflow; IntelliJ has the right amount of out-of-the-box polish and attention to detail. None have a reasonable extension language :)

                                                                                                                                        My current solace is that I work on an LSP server, so, if an ideal editor is ever created, I’ll be able to use it. Even then, I am not completely sold on the LSP approach, and still have doubts that maybe an IntelliJ-like polyglot IDE brain is a better approach overall.

                                                                                                                                        1. 5

                                                                                                                                          None have a reasonable extension language :)

                                                                                                                                          What would you say makes a good extension language? I am personally fond of Elisp, but that is probably also because I use it regularly.

                                                                                                                                          1. 8
                                                                                                                                            • dynamic, open world runtime (as you want to eval user code and load plugins)
                                                                                                                                            • reasonably fast runtime (sometimes the amount of data to process might be substantial)
                                                                                                                                            • statically/gradually typed (so that it’s possible to autocomplete your way through dotfiles without knowing stuff up front. This might be achievable using Smalltalk-like dynamic introspection instead of a static-analysis-based approach. I am betting on types, though, as I’ve never experienced “discoverability via autocomplete” in Elisp or Clojure in practice).
                                                                                                                                            • support for modularity, so that diamond dependencies are not a problem (Julia is interesting as a dynamic language which has this)
                                                                                                                                            • reasonable semantics and built in idioms for grunt work tasks. Things like “no two nulls” and “reasonable collections and iterators”
                                                                                                                                            • big ?: monkey-patchability, à la the Emacs advice system. That’s definitely a plus for each isolated user, but I don’t know if it’s good or bad for the ecosystem as a whole, as it goes directly against modularity.
                                                                                                                                            • big ?: DSL capabilities? The current fashion (Swift, Kotlin, Dart) is that UI should be described in code, mostly as data, and that it is important enough to have special support in the language/compiler.
                                                                                                                                            1. 4

                                                                                                                                              I think Common Lisp meets all these points well (some might disagree on iterators and sequences). Only trouble is that GNU Emacs hasn’t been rewritten in it yet!

                                                                                                                                              1. 3

                                                                                                                                                No, but Lem is developed: https://github.com/lem-project/lem/ (ping @Moonchild)

                                                                                                                                                1. 3

                                                                                                                                                  Emacs hasn’t been rewritten in it yet!

                                                                                                                                                  https://github.com/robert-strandh/second-climacs (eventually, maybe . . .)

                                                                                                                                            2. 1

                                                                                                                                              Even then, I am not completely sold on the LSP approach, and still have doubts that maybe an IntelliJ-like polyglot IDE brain is a better approach overall.

                                                                                                                                              Can you expand on what this approach is? Is it just the idea of having all the parsing and semantic analysis be performed in-process, or is it something more precise? Very curious about this.

                                                                                                                                              1. 1

                                                                                                                                                The main thing is using the same data structures for different programming languages. Sort of an LLVM for front ends.

                                                                                                                                                In IntelliJ, you can use the same code to process Rust and Java syntax trees. This results in some code reuse and a somewhat better user experience for polyglot codebases, as it’s relatively easy to implement cross-references between different languages.

                                                                                                                                                But front ends differ much more than back ends, so, unlike LLVM, IntelliJ isn’t exactly a holy grail of code reuse.

                                                                                                                                                1. 3

                                                                                                                                                  The main thing is using the same data structures for different programming languages. Sort of an LLVM for front ends.

                                                                                                                                                  Ah, I see, kind of like TreeSitter but with additional semantic information? I think Langkit kind of matches this description too, but the only end user for that is the Ada language.

                                                                                                                                                  1. 3

                                                                                                                                                    Yup. Off the top of my head, IntelliJ provides the following in a reasonably language-agnostic fashion:

                                                                                                                                                    • parsers and syntax trees (basically, tree sitter)
                                                                                                                                                    • file indexing infrastructure
                                                                                                                                                    • tree stub indexing infrastructure (the effectiveness of it does depend on language semantics)
                                                                                                                                                    • code formatters
                                                                                                                                                    • basic IDE features, which depend only on the syntax tree
                                                                                                                                                    • approximate reference search infrastructure
                                                                                                                                                    • an API to the GUI parts of the IDE, so you don’t have to re-invent a million small things like associating icons with specific kinds of syntax nodes.

                                                                                                                                                    Note that the actual “language semantics” bits are implemented by each language separately; there are common patterns, but little common code there. The exception is Java-like languages (Java, Kotlin, and maybe Groovy), which have a “unified AST” framework for implementing polyglot refactors.
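
                                                                                                                                                    As a rough illustration of what “the same data structures for different languages” means in practice, here is a minimal sketch in Go (the types and names are invented for illustration, not IntelliJ’s actual API):

                                                                                                                                                        package main

                                                                                                                                                        import "fmt"

                                                                                                                                                        // Kind identifies a node type; each language front end defines its own kinds.
                                                                                                                                                        type Kind uint16

                                                                                                                                                        // Node is a language-agnostic syntax tree node. Generic machinery (formatters,
                                                                                                                                                        // indexers, syntax-only IDE features) can work against this one shape, while
                                                                                                                                                        // each language plugs in its own parser that produces these nodes.
                                                                                                                                                        type Node struct {
                                                                                                                                                            Kind     Kind
                                                                                                                                                            Text     string
                                                                                                                                                            Children []*Node
                                                                                                                                                        }

                                                                                                                                                        // CountNodes is an example of tooling that works for any language: it walks
                                                                                                                                                        // a tree without knowing which front end produced it.
                                                                                                                                                        func CountNodes(n *Node) int {
                                                                                                                                                            total := 1
                                                                                                                                                            for _, c := range n.Children {
                                                                                                                                                                total += CountNodes(c)
                                                                                                                                                            }
                                                                                                                                                            return total
                                                                                                                                                        }

                                                                                                                                                        func main() {
                                                                                                                                                            tree := &Node{Kind: 1, Text: "fn main() {}", Children: []*Node{{Kind: 2, Text: "main"}}}
                                                                                                                                                            fmt.Println(CountNodes(tree)) // 2
                                                                                                                                                        }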

                                                                                                                                            1. 6

                                                                                                                                              I hope this might lead to a relicensing of Plan 9 under a more liberal license than the GPL. Many forks already did that; maybe a newer license could bring back some unity.

                                                                                                                                              1. 29
                                                                                                                                                1. 7

                                                                                                                                                  Why would that matter? Linux is under the GPL and it seems to do just fine. In fact, for an operating system it feels like the GPL might be the best option to prevent fragmentation and incompatibility.

                                                                                                                                                  1. 12

                                                                                                                                                    The problem is that several Plan 9 forks license their contributions under different licenses, so up until now there’s been no unity, and it kinda sucks to have to check which part of the code is under which license.

                                                                                                                                                    1. 2

                                                                                                                                                      Oh, gotcha, that’s unfortunate!

                                                                                                                                                  2. 3

                                                                                                                                                    I don’t think that would help anything: folks who refuse to contribute their changes to the rest of the community (as required by the GPL) are unlikely to contribute those changes to the rest of the community when no longer so required.

                                                                                                                                                    Sure, things like the BSDs exist, but their ecosystem is far, far smaller than the GNU ecosystem.

                                                                                                                                                    1. 3

                                                                                                                                                      That would be an issue for a usual open source project. But the status of Plan 9 and its forks is: there are forks, with the plural ‘s’. Allow me to rephrase your statement: folks who refuse others’ changes are unlikely to accept those changes no matter what license they use.

                                                                                                                                                  1. 7

                                                                                                                                                    pass is great. I use it and I love it. It does, however, lack one key feature: the password filenames are not encrypted. This does leak information should a malicious actor access the password repo. pass-tomb and pass-code attempt to solve this, in different ways.

                                                                                                                                                    1. 2

                                                                                                                                                      Another interesting bit about the encrypted files is that the file size can tell you something about the passphrase length, assuming the file contains nothing but the passphrase.

                                                                                                                                                      Adding some extra data of random length at the end of the file is one way to work around it.
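
                                                                                                                                                      A minimal sketch of that idea in Go (pad is a hypothetical helper; pass itself has no such step) that appends a random-length comment line before the secret gets encrypted:

                                                                                                                                                          package main

                                                                                                                                                          import (
                                                                                                                                                              "crypto/rand"
                                                                                                                                                              "fmt"
                                                                                                                                                              "math/big"
                                                                                                                                                          )

                                                                                                                                                          // pad appends a comment line of random length so that the size of the
                                                                                                                                                          // encrypted file no longer tracks the passphrase length exactly.
                                                                                                                                                          func pad(secret string) (string, error) {
                                                                                                                                                              n, err := rand.Int(rand.Reader, big.NewInt(256)) // up to 255 extra bytes
                                                                                                                                                              if err != nil {
                                                                                                                                                                  return "", err
                                                                                                                                                              }
                                                                                                                                                              filler := make([]byte, n.Int64()+1)
                                                                                                                                                              if _, err := rand.Read(filler); err != nil {
                                                                                                                                                                  return "", err
                                                                                                                                                              }
                                                                                                                                                              return fmt.Sprintf("%s\n# pad: %x\n", secret, filler), nil
                                                                                                                                                          }

                                                                                                                                                          func main() {
                                                                                                                                                              padded, err := pad("hunter2")
                                                                                                                                                              if err != nil {
                                                                                                                                                                  panic(err)
                                                                                                                                                              }
                                                                                                                                                              fmt.Print(padded) // pipe this into gpg/pass instead of the bare secret
                                                                                                                                                          }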

                                                                                                                                                      1. 2

                                                                                                                                                        I’ve previously had my .password-store in a Cryptomator vault and then put that on Dropbox for sync. This solves the filename problem. Given that I no longer put those files on a machine not in my control, it’s a fair enough tradeoff for now. Let’s hope I’m not wrong.