Threads for rakiru

  1. 6

    I’ve been a Python programmer for most of my career, but have primarily been using Typescript at work for the past few years. Any time the comparison of TS and Mypy comes up, someone will say that Mypy’s just not powerful enough, or has problems, and someone else will defensively ask for examples. I can never come up with examples, despite my experience heavily leaning towards “TS mostly works, Mypy’s mostly a toy”, but things like the mentioned lru_cache issue are a perfect example. Things like @overload feel really clumsy to use and add a lot of clutter to otherwise easy-to-scan code. As such, every time I try seriously typing a project again, I bounce off the idea real quick.

    I’m glad it’s maturing, and key packages like numpy and pandas starting to get type hints is an important step, but going all-in on Mypy still just doesn’t seem worth the effort yet. There’s a good point in the thread about it encouraging avoiding “clever code”, but in my experience, it encourages me to avoid most things that make me use Python in the first place.

    1. 6

      Finally, I can experience the weekly pre-pandemic experience of finding out why I’m stranded in Perth for 3 hours this time, in the comfort of my own home!

      I always enjoy how-I-did-a-thing posts as an intro into a new tool. I’ve seen plenty of datasette projects before, but never an explanation on how it actually gets used.

      1. 4

        I think this whole situation shows why creating native games for Linux is challenging.

        Just release the code and only lock down the assets. Release the assets a couple of years later. Congratulations, your game will most probably live forever.

        1. 5

          Okay but how does that solve anything? Your players aren’t going to download the source and build it themselves; they’re going to get it through Steam, which is a binary distribution platform.

          1. 3

            Steam also takes on the compatibility problem. If you build for Steam, they promise their own compatibility guarantees. I have no idea what they are in relation to DT_HASH, but I’m sure they have one.

            1. 4

              I doubt they need one. For DT_HASH to be a problem, you need:

              • To have not relinked your program in the last 10 years, or to have relinked it explicitly with non-default linker flags, and
              • To be dynamically linking against a library that has switched its hash style from both to gnu.

              In general, I’d expect a platform like Steam to ship things as something that looks a bit like container images, even if they’re not actually containers: they depend on the system call interface and nothing else; any shared libraries are shipped along with them. If you ship a binary that either statically links libc or uses -rpath and bundles its own dynamically linked libc, then you don’t have these problems.

              It’s worth noting that Xbox games are distributed as VM images with specific versions of Windows libraries to avoid cases where a Windows library uses 1 MiB more RAM and causes the game to start swapping. A Linux system could very easily ship games as OCI containers and get similar guarantees.

              1. 1

                Agreed. I know Steam ships its own libc libraries, but I don’t know any more than that; I’ve never looked into it. Awesome how Xbox does it! That’s really interesting, thanks for the info!

            2. 3

              Also, the main example in the article is an anti-cheat system, which almost by definition has to be distributed as an opaque binary and not source code.

              1. 2

                Steam has done a great job at getting games onto Linux, but the solutions it provides do not work for games bought outside their store.

                If I want to play a game from itch.io on Windows, it’s easy. On Linux it’s almost always a faff.

                This is an issue because Steam is not a good marketplace for all games and because monopolies are bad.

              2. 1

                Releasing EAC source code is literally (!!!) the worst possible solution, because the entire point of the anti-cheat system is to obfuscate the source code to prevent players from secretly modifying their client to cheat.

              1. 13

                using APIs that were designed to simply parse /etc/hosts and had DNS support shoehorned into them will always deliver unreliable results.

                I really don’t like that conclusion. It seems to imply we can never improve or extend behaviour beyond what was originally planned. And it comes from someone writing on the military experiment ARPANET, likely serving the content from a toy reimagining of Minix. That position is: It was not reliable in the past so we shouldn’t ever try to improve things, even though almost every single application in the wild assumes reliability.

                Let’s not ignore reality. Let’s improve things where we can.

                1. 6

                  That’s a pretty broad takeaway from a fairly narrow statement though, isn’t it? It specifically says “APIs”, which in this case can’t really be improved on or extended. The examples you give are more akin to creating a new, improved API based on the original, which is more along the lines of what the author is arguing.

                1. 3

                  The major JS engines also do the latin1 optimization, partly for space, but also for performance.

                  1. 2

                    Python as of 3.3 does something similar: all strings in Python have fixed-width storage in memory, because the choice of how to encode a string is made per string object, and can be latin-1, UCS-2, or UCS-4.
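
                    A quick way to see it (a rough sketch; the exact byte counts are CPython- and version-specific, but the per-character width is the point):

                    import sys
                    ascii_s  = "a" * 100           # fits in latin-1: 1 byte per character
                    bmp_s    = "\u20ac" * 100      # needs UCS-2: 2 bytes per character
                    astral_s = "\U0001F600" * 100  # needs UCS-4: 4 bytes per character
                    for s in (ascii_s, bmp_s, astral_s):
                        print(len(s), sys.getsizeof(s))  # same length, very different footprints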

                    1. 7

                      Before 3.3, Python would have to be compiled for either UCS-2 or UCS-4, leading to hilarious “it works on my machine” bugs.

                      And let’s not forget MySQL, which has a utf8 encoding that somehow only understands the basic multilingual plane, and utf8mb4, which is real utf-8.

                      1. 7

                        And let’s not forget MySQL, which has a utf8 encoding that somehow only understands the basic multilingual plane, and utf8mb4, which is real utf-8.

                        The more I hear about MySQL the more scared I get. Why is anyone using it still?

                        1. 5

                          Because once upon a time it was easier than PostgreSQL to get started with, and faster in its default, hilariously bad configuration (you could configure it not to be hilariously bad, but then its performance was worse).

                          And then folks just continued using it, because it was the thing they used.

                          I still cringe when I see a project which supports MySQL, or worse only MySQL, but it is a mostly decent database today, if you know what you are doing and how to avoid its pitfalls.

                          1. 1

                            I still cringe when I see a project which supports MySQL, or worse only MySQL, but it is a mostly decent database today, if you know what you are doing and how to avoid its pitfalls.

                            I’ve probably only heard of MySQL’s warts and footguns, and little of its merits. On the other hand, I’ve self-hosted WordPress for a great number of years so It Has Worked On My Machine(tm).

                          2. 4

                            Because you’re only hearing about the warts; it’s mostly legacy, now-deprecated stuff they didn’t change so as not to break things for all the people relying on the existing behaviour. Otherwise it works perfectly fine.

                            Edit: You could probably ask the same about Windows, looking at WTF-8

                            1. 2

                              Legacy and/or confusion

                              1. 1

                                I’m no fan of MySQL, but Postgres also has some awful warts. Today I found a query that took 14s as planned by the planner, or 0.2s if I turned off nested loop joins. There’s no proper way to hint that for just that join; I have to turn off nested loops for the whole query.
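
                                For reference, the workaround looks roughly like this (a sketch assuming psycopg2 and a made-up query; enable_nestloop is the real planner setting, but there is no per-join equivalent):

                                import psycopg2  # assumes psycopg2; any Postgres driver works the same way
                                conn = psycopg2.connect("dbname=app")  # hypothetical database
                                with conn, conn.cursor() as cur:
                                    # SET LOCAL only lasts for this transaction, but it still applies to
                                    # every join in the query, not just the one the planner gets wrong.
                                    cur.execute("SET LOCAL enable_nestloop = off")
                                    cur.execute("SELECT a.id FROM a JOIN b ON a.id = b.a_id")  # hypothetical query
                                    rows = cur.fetchall()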

                              2. 3

                                Another thing about pre-3.3 Python is that “narrow” (UCS-2) builds broke the abstraction of str being a sequence of code points; instead it was a sequence of code units, and exposed raw surrogates to the programmer (the same way Java, JavaScript, and other UTF-16-based languages still commonly do).
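
                                Roughly what that meant, reconstructed on a modern CPython (a sketch; a real narrow build exposed the UTF-16 code units directly through len() and indexing, here they are decoded explicitly):

                                s = "\U0001F600"    # one astral code point
                                assert len(s) == 1  # 3.3+: str is a sequence of code points
                                # A pre-3.3 narrow build stored the same string as UTF-16 code units,
                                # so len(s) was 2 and s[0]/s[1] were the raw surrogates:
                                units = s.encode("utf-16-le")
                                code_units = [int.from_bytes(units[i:i + 2], "little") for i in range(0, len(units), 2)]
                                assert code_units == [0xD83D, 0xDE00]  # high and low surrogate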

                                1. 2

                                  basic multilingual

                                  It’s still at most 3 bytes per character, making for even more fun, as at first it looks like it’s working.

                                2. 2

                                  Interesting. Why did they not choose UTF-8 instead of latin-1?

                                  1. 2

                                    The idea is to only use it for strings that can be represented by one-byte characters, so UTF-8 doesn’t gain you anything there. In fact, UTF-8 can only represent the first 128 characters with one byte, whereas latin-1 will obviously represent the full 256 characters in that one byte (although whether CPython in particular still uses latin-1 for \u0080-\u00FF, I’m not sure - it’s a little more complicated internally due to compat with C extensions and such).
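
                                    A tiny illustration of that one-byte range difference:

                                    for ch in ("A", "\xe9"):  # U+0041 and U+00E9 ('é')
                                        print(hex(ord(ch)), len(ch.encode("latin-1")), len(ch.encode("utf-8")))
                                    # 0x41 -> 1 byte in both; 0xe9 -> 1 byte in latin-1 but 2 bytes in UTF-8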

                                    1. 2

                                      Like the other commenter said: efficiency. UTF-8 (really, ASCII at that point) in the one-byte range only uses 7 bits and so can only encode the first 128 code points as one byte, while latin-1 uses the full 8 bits and can encode 256 code points as one byte, giving you a bigger range (for Western European scripts) of code points that can be represented in one-byte encoding.

                                      1. 1

                                        Because it’s easier to make UTF-16 from latin-1 than from UTF-8. Latin-1 maps 1:1 to the first 256 codepoints, so you just insert a zero byte after every byte. UTF-8 requires bit-twiddling.

                                        And these engines can’t just use UTF-8 for everything, because constant-time indexing into UTF-16 code units (and surrogates) has been accidentally exposed in public APIs.
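
                                        A small sketch of that widening step (and of why UTF-8 would be less convenient):

                                        s = "café"                    # every code point < 256
                                        latin1 = s.encode("latin-1")  # 1 byte per code point
                                        widened = bytes(b for pair in zip(latin1, bytes(len(latin1))) for b in pair)
                                        assert widened == s.encode("utf-16-le")  # latin-1 -> UTF-16: just add zero bytes
                                        assert len(s.encode("utf-8")) == 5       # UTF-8 already needs two bytes for 'é'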

                                  1. 1

                                    Care to explain what all the weird “honeypot” links are?

                                    I should make that clearer:

                                    https://code.rosaelefanten.org/groffstudio/dir?ci=tip

                                    has a list of “Files in the top-level directory from the latest check-in” which all link to:

                                    https://code.rosaelefanten.org/groffstudio/honeypot

                                      In fact all the repos there have similar honeypot links.

                                    1. 3

                                      That is a Fossil feature. It probably thinks you are a bot.

                                      1. 3

                                        I ran into the same issue - looks like it needs JS to make the links actually work. The Fossil docs give their explanation. It seems a pretty weak justification in my eyes, but there it is.

                                        1. 1

                                          You can just use command-line Fossil though. :)

                                          1. 1

                                            I have disabled the option (hopefully).

                                        1. 4

                                          Subclassing is a bad API mechanism. This would be much better if it used class decorators instead. Subclassing as a mechanism means you can’t use normal subclassing. It’s a mistake in API design that people should have learned not to do by now.

                                          1. 4

                                            Would you be willing to explain this further with examples or additional reading material? I’m a beginner software developer, so what you said is not immediately grok’able unfortunately.

                                            1. 9

                                              The way that miniboss works can be seen by reading https://github.com/afroisalreadyinu/miniboss/blob/main/miniboss/services.py. miniboss.Service has a metaclass, miniboss.ServiceMeta. When you subclass from Service, your class inherits the metaclass ServiceMeta. ServiceMeta has a __new__ method that raises various exceptions if a subclass is configured incorrectly. The ServiceCollection class has a load_definitions method that looks at Service, finds all its subclasses, and registers them with miniboss.

                                              Basically all of this magic is unnecessary complication. Instead there should be a decorator called miniboss.register_service. It could be used like:

                                              @miniboss.register_service
                                              class Database:
                                                  name = "appdb"
                                                  image = "postgres:10.6"
                                                  env = {"POSTGRES_PASSWORD": "dbpwd",
                                                         "POSTGRES_USER": "dbuser",
                                                         "POSTGRES_DB": "appdb" }
                                                  ports = {5432: 5433}
                                              

                                              (Or if you need to pass options when registering, it could be @miniboss.register_service(some_option=True).)

                                              miniboss.register_service could just be a totally normal function that looks at the class, makes sure it is correct, and adds it to the list of registered classes.
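
                                              A minimal sketch of what that could look like (the names and the specific checks here are illustrative, not miniboss’s actual code):

                                              _registered_services = []
                                              def register_service(cls):
                                                  # Validate the attributes we expect on the class (illustrative checks only).
                                                  if not getattr(cls, "name", None):
                                                      raise ValueError(f"{cls.__name__} must define a non-empty 'name'")
                                                  if not getattr(cls, "image", None):
                                                      raise ValueError(f"{cls.__name__} must define an 'image'")
                                                  _registered_services.append(cls)
                                                  return cls  # return the class unchanged, so normal subclassing keeps working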

                                              Doing it this way A) is radically simpler to follow than jumping around from class to metaclass to whatever B) lets you use subclassing in the normal way, like

                                              @miniboss.register_service
                                              class Database:
                                                  name = "appdb"
                                                  image = "postgres:10.6"
                                                  env = {"POSTGRES_PASSWORD": "dbpwd",
                                                         "POSTGRES_USER": "dbuser",
                                                         "POSTGRES_DB": "appdb" }
                                                  ports = {5432: 5433}
                                              
                                              @miniboss.register_service
                                              class SpecialDatabase(Database):
                                                  name = "special"
                                              
                                              class BaseClassThatIsNotRegistered:
                                                  image = "something"
                                              
                                              @miniboss.register_service
                                              class OtherService(BaseClassThatIsNotRegistered):
                                                 name = "otherservice"
                                              

                                              Class decorators are simpler to write, simpler to use, and more extensible than subclassing/metaclasses. Django has the excuse that its ORM is older than class decorators so it had to use subclass magic, but no new systems should be created using metaclasses. They are an unnecessary headache for everyone involved.

                                              1. 8

                                                Hi, I’m the author of miniboss. You’re right, this is a much better idea. I might go ahead and implement it for the next version. Thanks for the feedback.

                                                1. 3

                                                  That’s great! ❤️

                                                2. 2

                                                  Thank you so much for the effort you took to reply! I’m also so glad to see the author taking your advice as well.

                                              2. 1

                                                Not that I necessarily disagree in general, but what’s stopping you from subclassing in this case? Especially since it’s Python, where multiple inheritance is a thing. It also doesn’t look like the kind of class you would put much, if anything, of your own on, more just a convenient structure to hold a definition and some optional Service-defined methods.

                                                1. 2

                                                  See https://lobste.rs/s/n2tmve/miniboss_versatile_local_container#c_chyrjr

                                                  Multiple inheritance is a thing. It is a confusing, error-prone thing in which the order that you put your subclasses in the class definition has hard-to-predict effects on what gets run when and where. I used it enough in Django to never want to use it again.

                                              1. 10

                                                While I’m not a fan of the new logo (I struggle to imagine how I’d make it even more generic), being able to ditch my janky dark mode userstyle is great. It would be nice if it could extend to the preview iframes somehow, but that’s a minor complaint.

                                                1. 13

                                                  I have never understood why KDE isn’t the default DE for any serious Linux distribution. It feels so much more professional than anything else.

                                                  Every time I see it, it makes me want to run Linux on the desktop again.

                                                  1. 11

                                                    I suspect because:

                                                    1. IIRC Gnome has a lot more funding/momentum
                                                    2. Plasma suffers from a lot of papercuts

                                                    Regarding the second reason: Plasma overall looks pretty nice, at least at first glance. Once you start using it, you’ll notice a lot of UI inconsistencies (misaligned UI elements, having to go through 15 layers of settings, unclear icons, applications using radically different styles, etc) and rather lackluster KDE first-party applications. Gnome takes a radically different approach, and having used both (and using Gnome currently), I prefer Gnome precisely because of its consistency.

                                                    1. 14

                                                      There’s also a lot of politics involved. Most of the Linux desktop ecosystem is still driven by Red Hat and they employ a lot of FSF evangelists. GNOME had GNU in its name and was originally created because of the FSF’s objections to Qt (prior to its license change), and that led to Red Hat preferring it.

                                                      1. 6

                                                        Plus GNOME and all its core components are truly community FLOSS projects, whereas Qt is a corporate, for-profit project which the Qt company happens to also provide as open source (but where you’re seriously railroaded into buying their ridiculously expensive licenses if you try to do anything serious with it or need stable releases).

                                                        1. 7

                                                          No one ever talks about Cinnamon on Mint but I really like it. It looks exactly like all the screenshots in the article. Some of the customisation is maybe a little less convenient but I have always managed to get things looking exactly how I want them to, and I am hardly a Linux power user (recent Windows refugee). Given that it seems the majority of arguments for Plasma are that it is more user friendly and easier to customise, I would be interested to hear people’s opinions on Cinnamon vs Plasma. I had Plasma Mobile on my PinePhone for a day or two but it was too glitchy and I ended up switching to Mobian. This is not a criticism of Plasma, rather an admission that I have not really used it and have no first-hand knowledge.

                                                          1. 7

                                                            I have not used either in anger but there’s also a C/C++ split with GTK vs Qt-based things. C is a truly horrible language for application development. Modern C++ is a mediocre language for application development. Both have some support for higher-level languages (GTK is used by Mono, for example, and GNOME also has Vala) but both are losing out to things like Electron that give you JavaScript / TypeScript environments and neither has anything like the developer base of iOS (Objective-C/Swift) or Android (Java/Kotlin).

                                                            1. 4

                                                              As an unrelated sidenote, C is also a decent binding language, which matters when you are trying to use one of those frameworks from a language that is not C/C++. I wish Qt had a well-maintained C interface.

                                                              1. 8

                                                                I don’t really agree there. C is an adequate binding language if you are writing something like an image decoder, where your interface is expressed as functions that take buffers. It’s pretty terrible for something with a rich interface that needs to pass complex types across the boundary, which is the case for GUI toolkits.

                                                                For example, consider something like ICU’s UText interface, for exposing character storage representations for things like regex matching. It is a C interface that defines a structure that you must create with a bunch of callback functions defined as function pointers. One of the functions is required to set up a pointer in the struct to contain the next set of characters, either by copying from your internal representation into a static buffer in the structure or providing a pointer and setting the length to allow direct access to a contiguous run of characters in your internal representation. Automatically bridging this from a higher-level language is incredibly hard.

                                                                Or consider any of the delegate interfaces in OpenStep, which in C would be a void* and a struct containing a load of function pointers. Bridging this with a type-safe language is probably possible to do automatically but it loses type safety at the interfaces.

                                                                C interfaces don’t contain anything at the source level to describe memory ownership. If a function takes a char*, is that a pointer to a C string, or a pointer to a buffer whose length is specified elsewhere? Is the callee responsible for freeing it or the caller? With C++, smart pointers can convey this information and so binding generators can use it. Something like SWIG or Sol3 can get the ownership semantics right with no additional information.
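
                                                                A small ctypes illustration of that last point (assuming a glibc system; the fact that the caller must free strdup’s result has to be encoded by hand, because nothing in the C declaration carries it):

                                                                import ctypes
                                                                libc = ctypes.CDLL("libc.so.6")        # assumes glibc
                                                                libc.strdup.restype = ctypes.c_void_p  # keep a raw pointer, not an auto-converted str
                                                                p = libc.strdup(b"hello")              # returns malloc'd memory...
                                                                print(ctypes.cast(p, ctypes.c_char_p).value)  # b'hello'
                                                                libc.free(ctypes.c_void_p(p))          # ...which we must free; the header never says so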

                                                                Objective-C is a much better language for transparent bridging. Python, Ruby, and even Rust can transparently consume Objective-C APIs because it provides a single memory ownership model (everything is reference counted) and rich introspection functionality.

                                                                1. 2

                                                                  Fair enough. I haven’t really been looking at Objective-C headers as a binding source. I agree that C’s interface is anemic. I was thinking more from an ABI perspective, i.e. C++ interfaces tend to be more reliant on inlining, or have weird things like exceptions, as well as being totally compiler dependent. Note how for instance SWIG still generates a C interface with autogenerated glue. Also the full ABI is defined in like 15 pages. So while it’s hard to make a high-level to high-level interface in C, you can manually compensate from the target language; with C++ you need a large amount of compiler support to even get started. Maybe Obj-C strikes a balance there, I haven’t really looked into it much. Can you call Obj-C from C? If not, it’s gonna be a hard sell to a project as a “secondary api” like llvm-c, because you don’t even get the larger group of C users.

                                                                  1. 6

                                                                    Also the full ABI is defined in like 15 pages

                                                                    That’s a blessing and a curse. It’s also an exaggeration: the SysV x86-64 psABI is 68 pages. On x86-32 there are subtle differences in calling convention between Linux, FreeBSD, and macOS, for example, and Windows is completely different. Bitfields are implementation dependent and so you need to either avoid them or understand what the target compiler does. All of this adds up to embedding a lot of a C compiler in your other language, or just generating C and delegating to the C compiler.

                                                                    Even ignoring all of that, the fact that the ABI is so small is a problem because it means that the ABI doesn’t fully specify everything. Yes, I can look at a C function definition and know from reading a 68-page doc how to lower the arguments for x86-64 but I don’t know anything about who owns the pointers. Subtyping relationships are not exposed.

                                                                    To give a trivial example from POSIX, the connect function takes three arguments: an int, a const struct sockaddr *, and a socklen_t. Nothing in this tells me:

                                                                    • That the second argument is never actually a pointer to a sockaddr structure, it is a pointer to some other structure that starts with the same fields as the sockaddr.
                                                                    • That the third argument must be the size of the real structure that I point to with the second argument.
                                                                    • That the second parameter is not captured and I remain responsible for freeing it (you could assume this from const and you’d be right most of the time).
                                                                    • That the first parameter is not an arbitrary integer, it must be a file descriptor (and for it to actually work, that file descriptor must be a socket).

                                                                    I need to know all of these things to be able to bridge from another language. The C header tells me none of these.

                                                                    Apple worked around a lot of these problems with CoreFoundation by adding annotations that basically expose the Objective-C object and ownership model into C. Both Microsoft and Apple worked around it for their core libraries by providing IDL files (in completely different formats) that describe their interfaces.

                                                                    So while it’s hard to make a high-level to high-level interface in C, you can manually compensate from the target language; with C++ you need a large amount of compiler support to even get started

                                                                    You do for C as well. Parsing C header files and extracting enough information to be able to reliably expose everything with anything less than a full C compiler is not going to work and every tool that I’ve seen that tries fails in exciting ways. But that isn’t enough.

                                                                    In contrast, embedding something like clang’s libraries is sufficient for bridging a modern C++ or Objective-C codebase because all of the information that you need is present in the header files.

                                                                    Can you call Obj-C from C?

                                                                    Yes. Objective-C methods are invoked by calling objc_msgSend with the receiver as the first parameter and the selector as the second. The Objective-C runtime provides an API for looking up selectors from their name. Many years ago, I wrote a trivial libClang tool that took an Objective-C header and emitted a C header that exposed all of the methods as static inline functions. I can’t remember what I did with it but it was on the order of 100 lines of code, so rewriting it would be pretty trivial.

                                                                    If not, it’s gonna be a hard sell to a project as a “secondary api” like llvm-c, because you don’t even get the larger group of C users.

                                                                    There are fewer C programmers than C++ programmers these days. This is one of the problems that projects like Linux and FreeBSD have attracting new talent: the intersection between good programmers and people who choose C over C++ is rapidly shrinking and includes very few people under the age of 35.

                                                                    LLVM has llvm-c for two reasons. The most important one is that it’s a stable ABI. LLVM does not have a policy of providing a stable ABI for any of the C++ classes. This is a design decision that is completely orthogonal to the language. There’s been discussion about making llvm-c a thin (machine-generated) wrapper around a stable C++ interface to core LLVM functionality. That’s probably the direction that the project will go eventually, once someone bothers to do the work.

                                                                    1. 1

                                                                      I’ve been discounting memory management because it can be foisted off onto the user. On the other hand something like register or memory passing or how x86-64 uses SSE regs for doubles cannot be done by the user unless you want to manually generate calling code in memory.

                                                                      You do for C as well. Parsing C header files and extracting enough information to be able to reliably expose everything with anything less than a full C compiler is not going to work and every tool that I’ve seen that tries fails in exciting ways. But that isn’t enough.

                                                                      Sure but there again you can foist things off onto the user. For instance, D only recently gained a proper C header frontend; until now it got along fine enough by just manually declaring extern(C) functions. I believe JNI and CFFI do the same. It’s annoying but it’s possible, which is more than can be said for many C++ bindings.

                                                                      There are fewer C programmers than C++ programmers these days.

                                                                      I meant C as a secondary API, ie. C++ as primary then C as auxiliary, as opposed to Objective-C as auxiliary.

                                                                      Yes. Objective-C methods are invoked by calling objc_msgSend with the receiver as the first parameter and the selector as the second. The Objective-C runtime provides an API for looking up selectors from their name.

                                                                      I don’t know the requirements for deploying with the ObjC runtime. Still, nice!

                                                                      1. 2

                                                                        I’ve been discounting memory management because it can be foisted off onto the user.

                                                                        That’s true only if you’re bridging two languages with manual memory management, which is not the common case for interop. If you are exposing a library to a language with a GC, automatic reference counting, or ownership-based memory management then you need to handle this. Or you end up with an interop layer that everyone hates (e.g. JNI).

                                                                        Sure but there again you can foist things off onto the user. For instance, D only recently gained a proper C header frontend; until now it got along fine enough by just manually declaring extern(C) functions. I believe JNI and CFFI do the same. It’s annoying but it’s possible, which is more than can be said for many C++ bindings.

                                                                        Which works for simple cases. For some counterexamples, C has _Complex types, which typically follow different rules for argument passing and returning to structures of the same layout (though they sometimes don’t, depending on the ABI). Most languages don’t adopt this stupidity and so you need to make sure that your custom C parser can express C’s _Complex types. The same applies if you want to define bitfields in C structures in another language, or if the C structure that you’re exposing uses packed pragmas or attributes, uses _Alignas, and so on. There’s a phenomenal amount of complexity that you can punt on if you want to handle only trivial cases, but then you’re using a very restricted subset of C.

                                                                        JNI doesn’t allow calling arbitrary C functions; it requires that you write C functions that implement native methods on a Java object. This scopes the problem such that the JVM needs to be able to handle calling only C functions that use Java types (8 to 64-bit signed integers or pointers) as arguments and return values. These can then call back into the JVM to access fields, call methods, allocate objects, and so on. If you want to return a C structure into Java then you must create a buffer to store it and an object that owns the buffer and exposes native methods for accessing the fields. It’s pretty easy to use JNI to expose Java classes into other languages that don’t run in the JVM, it’s much harder to use it to expose C libraries into Java (and that’s why everyone who uses it hates it).

                                                                        I meant C as a secondary API, ie. C++ as primary then C as auxiliary, as opposed to Objective-C as auxiliary.

                                                                        If you have a stable C++ API, then bridging C++ provides you more semantic information for your compat layer than a C wrapper around the stable C++ API would. Take a look at Sol3 for an example: it can expose C++ objects directly into Lua, with correct memory management, without any C wrappers. C++ libraries often conflate a C API with an ABI-stable API but this is not necessary.

                                                                        I don’t know the requirements for deploying with the ObjC runtime. Still, nice!

                                                                        The requirements for the runtime are pretty small but for it to be useful you want a decent implementation of at least the Foundation framework, which provides types like arrays, dictionaries, and strings. That’s a bit harder.

                                                                        1. 2

                                                                          I don’t know. I feel like you massively overvalue the importance of memory management and undervalue the importance of binding generation and calling convention compatibility. For instance, as far as I can tell sol3 requires manual binding of function pointers to create method calls that can be called from Lua. From where I’m standing, I don’t actually save anything effort-wise over a C binding here!

                                                                          Fair enough, I didn’t know that about JNI. But that’s actually a good example of the notion that a binding language needs to have a good semantic match with its target. C has an adequate to poor semantic match on memory management and any sort of higher-kinded functions, but it’s decent on data structure expressiveness and very terse, and it’s very easy to get basic support working quick. C++ has mangling, a not just platform-dependent but compiler-dependent ABI with lots of details, headers that often use advanced C++ features (I’ve literally never seen a C API that uses _Complex - or bitfields) and still probably requires memory management glue.

                                                                          Remember that the context here was Qt vs GTK! Getting GTK bound to any vaguely C-like language (let’s say any language with a libc binding) to the point where you can make calls is very easy - no matter what your memory management is. At most it makes it a bit awkward. Getting Qt bound is an epic odyssey.

                                                                          1. 4

                                                                            I feel like you massively overvalue the importance of memory management and undervalue the importance of binding generation and calling convention compatibility

                                                                            I’m coming from the perspective of having written interop layers for a few languages at this point. Calling conventions are by far the easiest thing to do. In increasing levels of difficulty, the problems are:

                                                                            • Exposing functions.
                                                                            • Exposing plain data types.
                                                                            • Bridging string and array / dictionary types.
                                                                            • Correctly managing memory between two languages.
                                                                            • Exposing general-purpose rich types (things with methods that you can call).
                                                                            • Exposing rich types in both directions.

                                                                            C only seems easy because C<->C interop requires a huge amount of boilerplate and so C programmers have a very low bar for what ‘transparent interoperability’ means.

                                                                            For instance, as far as I can tell sol3 requires manual binding of function pointers to create method calls that can be called from Lua. From where I’m standing, I don’t actually save anything effort-wise over a C binding here!

                                                                            It does, because it’s an EDSL in C++, but that code could be mechanically generated (and if reflection makes it into C++23 then it can be generated from within C++). If you pass a C++ shared_ptr<T> to Sol3, then it will correctly deallocate the underlying object once neither Lua nor C++ reference it any longer. This is incredibly important for any non-trivial binding.

                                                                            Remember that the context here was Qt vs GTK! Getting GTK bound to any vaguely C-like language (let’s say any language with a libc binding) to the point where you can make calls is very easy - no matter what your memory management is.

                                                                            Most languages are not ‘vaguely C-like’. If you want to use GTK from Python, or C#, how do you manage memory? Someone has had to write bindings that do the right thing for you. From my vague memory, it uses GObject, which uses C macros to define objects and to manage reference counts. This means that whoever manages the binding layer has had to interop with C macros (which are far harder to get to work than C++ templates - we have templates working for the Verona C++ interop layer but we’re punting on C macros for now and will support a limited subset of them later). This typically requires hand writing code at the boundary, which is something that you really want to avoid.

                                                                            Last time I looked at Qt, they were in the process of moving from their own smart pointer types to C++11 ones but in both cases as long as your binding layers knows how to handle smart pointers (which really just means knowing how to instantiate C++ templates and call methods on them) then it’s trivial. If you’re a tool like SWIG, then you just spit out C++ code and make the C++ compiler handle all of this for you. If you’re something more like the Verona interop layer then you embed a C++ parser / AST generator / codegen path and make it do it for you.

                                                                            1. 1

                                                                              I’m coming from the perspective of having written interop layers for a few languages at this point.

                                                                              Yeah … same? I think it’s just that I tend to be obsessed with variations on C-like languages, which colors my perception. You sound like you’re a lot more broad in your interests.

                                                                              C only seems easy because C<->C interop requires a huge amount of boilerplate and so C programmers have a very low bar for what ‘transparent interoperability’ means.

                                                                              I don’t agree. Memory management is annoying, sure, and having to look up string ownership for every call gets old quick, but for a stateful UI like GTK you can usually even just let it leak. I mean, how many widgets does a typical app need? Grab heaptrack, identify a few sites of concern and jam frees in there, and move on with your life. It’s possible to do it shittily easily, and I value that a lot.

                                                                              If you’re a tool like SWIG, then you just spit out C++ code and make the C++ compiler handle all of this for you.

                                                                              Hey, no shade on SWIG. SWIG is great, I love it.

                                                                              From my vague memory, it uses GObject, which uses C macros to define objects and to manage reference counts. This means that whoever manages the binding layer has had to interop with C macros

                                                                              Nah, it’s really only a few macros, and they do fairly straightforward things. Last time I did GTK, I just wrote those by hand. I tend to make binders that do 90% of the work - the easy parts - and not worry about the rest, because that conserves total effort. With C that works out because functions usually take structs by pointer, so if there’s a weird struct that doesn’t generate I can just define a close-enough facsimile and cast it, and if there’s a weird function I define it. With C++ everything is much more interdependent - if you have a bug in the vtable layout, there’s nothing you can do except fix it.

                                                                              When I’ll eventually want Qt in my current language, I’ll probably turn to SWIG. It’s what I used in Jerboa. But it’s an extra step to kludge in, that I don’t particularly look forward to. If I just want a quick UI with minimal effort, GTK is the only game in town.

                                                                              edit: For instance, I just kludged this together in half an hour: https://gist.github.com/FeepingCreature/6fa2d3b47c6eb30a55846e18f7e0e84c This is the first time I’ve tried touching the GTK headers in this language. It’s exposed issues in the compiler, it’s full of hacks, and until the last second I didn’t really expect it to work. But stupid as it is, it does work. I’m not gonna do Qt for comparison, because I want to go to bed soon, but I feel it’s not gonna be half an hour. Now to be fair, I already had a C header importer around, and that’s a lot of time sunk into that that C++ doesn’t get. But also, I would not have attempted to write even a kludgy C++ header parser, because I know that I would have given up halfway through. And most importantly - that kludgy C header importer was already practically useful after a good day of work.

                                                                              edit: If there’s a spectrum of “if it’s worth doing, it’s worth doing properly” to “minimal distance of zero to cool thing”, I’m heavily on the right side. I think that might be the personality difference at play here? For me, a binding generator is purely a tool to get at a juicy library that I want to use. There’s no love of the craft lost there.

                                                              2. 1

                                                                So does Plasma support Electron/Swift/Java/Kotlin? I know Electron applications run on my desktop so I assume you mean directly as part of the desktop. If so, that is pretty cool. Please forgive my ignorance, desktop UI frameworks are way outside my usual area of expertise.

                                                              3. 2

                                                                I only minimally use KDE on the computers at my university’s CS department, but I’ve been using Cinnamon for almost four years now. I think that Plasma wins on customizability. There are just so many things that can be adjusted.

                                                                Cinnamon on the other hand feels far more polished, with fewer options for customization. I personally use Cinnamon with Arch, but when I occasionally use Mint, the full desktop with all of Mint’s applications is very cohesive and well thought out, though not without flaws.

                                                                I sometimes think that Cinnamon isn’t evangelized as frequently because it’s well enough designed that it sort of fades into the background while using it.

                                                          2. 3

                                                            I’ve used Cinnamon for years, but it inevitably breaks (or I break it). I recently looked into the alternatives again, and settled on KDE because it looked nice, it and Gnome are the two major players so things are more likely to Just Work, and it even had some functionality I wanted that Gnome didn’t. I hopped back to Cinnamon within the week, because yeah, the papercuts. Plasma looks beautiful in screenshots, and has a lot of nice-sounding features, but the moment you actually use it, you bang your face into something that shouldn’t be there. It reminded me of first trying KDE in the mid-2000s, and it was rather disappointing to feel they’ve been spinning in circles in a lot of ways. I guess that isn’t exactly uncommon for the Linux desktop though…

                                                            1. 3

                                                              I agree with your assessment of Plasma and GNOME (Shell). Plasma mostly looks fine, but every single time I use it–without fail–I find some buggy behavior almost immediately, and it’s always worse than just having misaligned labels on some UI elements, too. It’s more like I’ll check a setting checkbox and then go back and it’s unchecked, or I’ll try to put a panel on one or another edge of the screen and it’ll cause the main menu to open on the opposite edge like it looped around, or any other number of things that just don’t actually work right. Even after they caved on allowing a single-key desktop shortcut (i.e., using the Super key to open the main menu), it didn’t work right when I would plug/unplug my laptop from my desk monitors because of some weirdness around the lifecycle of the panels and the main menu button; I’d only be able to have the Super key work as a shortcut if it was plugged in or if it was not, but not both. That one was a little while ago, so maybe it’s better now.

                                                              Ironically, Plasma seems to be all about “configuration” and having 10,000 knobs to tweak, but the only way it actually works reasonably well for me is if you don’t touch anything and use it exactly how the devs are dog-fooding it.

                                                              The GNOME guys had the right idea when it came to stripping options, IMO. It’s an unpopular opinion in some corners, but I think it’s just smart to admit when you don’t have the resources to maintain a high bar of quality AND configurability. You have to pick one, and I think GNOME picked the right one.

                                                            2. 5

                                                              I have never understood why KDE isn’t the default DE for any serious Linux distribution.

                                                              Me neither, but I’m glad to hear it is the default desktop experience on the recently released Steam Deck.

                                                              1. 3

                                                                Do SUSE/OpenSUSE not count as serious Linux distributions anymore?

                                                                It’s also the default for Manjaro as shipped by Pine64. (I think Manjaro overall has several variants… the one Pine64 ships is KDE-based.)

                                                                Garuda is also a serious Linux distribution, and KDE is their flagship.

                                                                1. 1

                                                                  I tried to use Plasma multiple times on Arch Linux but every time I tried it turned out to be too unstable. The most annoying bug I remember was that KRunner often crashed after entering some letters, taking down the whole desktop session with it. In the end I stuck with Gnome because it was stable and looked consistent. I do like the concept of Plasma but I will avoid it on any machine I do serious work with.

                                                                1. 10

                                                                  For server-to-client comms, SSE really can’t be beaten. I’ve used them in past projects and really appreciated their simple and straightforward nature.
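
                                                                  For anyone who hasn’t tried them, a minimal sketch of an SSE endpoint (assuming Flask; the wire format is just a text/event-stream response made of data: lines):

                                                                  import time
                                                                  from flask import Flask, Response  # assumes Flask; any WSGI framework works similarly
                                                                  app = Flask(__name__)
                                                                  @app.route("/events")
                                                                  def events():
                                                                      def stream():
                                                                          while True:
                                                                              yield f"data: {time.time()}\n\n"  # one "data: ...\n\n" block per event
                                                                              time.sleep(1)
                                                                      return Response(stream(), mimetype="text/event-stream")

                                                                  The browser side is just new EventSource("/events") plus an onmessage handler.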

                                                                  1. 1

                                                                    Did you take any measures not to get bitten by the 6 SSE connections per browser per domain limit? Is there a simple trick that would allow me not to worry about this at all, because I don’t like the thought of not being able to properly support more than 6 tabs pointing at my application.

                                                                    1. 7

                                                                      Looking into it, that seems to be an HTTP/1 limit. Apparently on HTTP/2, it’s 100 by default. https://developer.mozilla.org/en-US/docs/Web/API/EventSource

                                                                      1. 1

                                                                        Nice! Thanks for mentioning this, I hadn’t noticed it. Then I see no downsides to using SSE for the typical live update needs of a web application.

                                                                      2. 3

                                                                        Easiest thing is probably to just distribute the connections across random subdomains if you’re worried about it.

                                                                        1. 1

                                                                          Afraid not — we were not running in a browser context. Sorry that I can’t help you out🙁

                                                                      1. 16

                                                                        Second paragraph and I’m already mad.

                                                                        Rule 8: Flags represent nations, not languages – so don’t use flags.

                                                                          Some obviously controversial flags: United Kingdom, Spain, France, Portugal. These examples have more speakers outside the origin country than within, and it is a very Euro-centric, colonial viewpoint of language to use any flag whatsoever. Not to mention, many countries have more than one language, which further propagates stereotypes or belittlement of minority groups inside those countries.

                                                                        Luckily the Steady site doesn’t break this rule, just the blog entry.

                                                                        1. 12

                                                                            The reason flags are used is that if a website is in a language you do not understand, you may not otherwise know where to click to change the language, or recognise the name of your language. The word for “English” in Russian is “английский”. Are you going to know to click on that unless there is an American or British flag next to it?

                                                                          Everyone knows that people get het up about flags. Flags are used despite this for usability reasons.

                                                                          1. 22

                                                                              That’s why the dropdown for language selection should list the name of each language in that language. Deutsch, English, Español, etc.

                                                                            1. 12

                                                                                My favorite “bug” I saw lately around this was a country drop-down that was translated to German but the sort order was still in English. So Germany (“Deutschland”) was not under “D”, but under “G” right after Gabun, where it is in the English sorting. Very confusing.

                                                                              1. 6

                                                                                This can also be fun for country names. Sending something to someone in France, I had to find the UK in a French-language drop-down. At school, I learned a few variations on how the country name is translated into French, this web site introduced me to a new one.

                                                                                1. 6

                                                                                  Le Royaume Uni I suppose?

                                                                                  The UK is a hard nut in these forms. I often try a number of options, but I can’t complain when even the Olympic team uses the wrong name (“Team GB” - the UK is not just Great Britain).

                                                                                  1. 1

                                                                                    A bit tangential, but UK government forms really threw me for a loop the first time I used one, as it’s the only place I’ve seen the adjective form of nationality used for the Citizenship/Nationality field. When listing my citizenship for visa/etc. purposes, on most countries’ forms it’s just a drop-down of country names. So usually you can find the USA under U somewhere (United States, USA, U.S.A., etc.). But for gov.uk it was under A for American instead (UK was likewise under B, for British). I’m not too knowledgeable about the details, but I assume this has something to do with the complexities of British nationality.

                                                                                2. 4

Sure, you just need to find the "язык" dropdown. Should be easy as the currently selected value will be русский which is obviously wrong.

                                                                                  1. 1

Yes! Nationalities get closer to languages than most other pictograms would. I wouldn't know to pick a picture of Spain's borders to change the language, nor would I know to click on an alphabet (which doesn't work for languages without one). So flags help. Then you spell out what the language actually is in the dropdown so you can select it…

Other rejected pictograms: official bird, view from the capital, airport code, biggest company based there, slowly scrolling language names.

                                                                                3. 8

As a sibling comment noted, obviously the names should be in the native spelling (or maybe show both). Spanish is the US's second most spoken language (the US has no official language); should those Spanish speakers not count? They speak Spanish daily, they live under the US flag, and clicking it gets them English. Should they look for a flag of Spain? Or Mexico? Or their birth country (which is likely missing)? "español" + es is very clear (even if not every dialect is translated yet) and doesn't carry the same political baggage as flags and countries do. When people migrate – and they do, a lot, in the 21st century – their languages come with them, because languages belong to the people and flags belong to the nation.

But do you really think people know their flags? I don't think that assumption holds. Which of these is Poland: 🇲🇨 🇵🇱 🇮🇩? Romania 🇦🇩 🇲🇩 🇷🇴? Ireland 🇨🇮 🇮🇪? Bolivia 🇧🇴 🇬🇭? Mali 🇸🇳 🇲🇱?


                                                                                  But imagine if a user could send their preferred language through the user-agent and the server or browser could choose a ‘good’ default for them … Accept-Language sounds like a good name for this, maybe even navigator.languages. That would be better than what Google does: ignoring my request and mislabeling me based on IP instead.
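Both of those do exist, of course; for illustration, a rough sketch of how a site could honour them instead of guessing from IP (the supported list and the fallback are made-up assumptions for the example):

    // Pick a UI language from the user's stated preferences.
    const supported = ["en", "de", "es", "ru"]; // assumed for the example

    // In the browser, navigator.languages is an ordered list like ["de-AT", "de", "en"].
    function pickLanguage(preferred: readonly string[]): string {
      for (const pref of preferred) {
        const base = pref.toLowerCase().split("-")[0];
        if (supported.includes(base)) return base;
      }
      return "en"; // assumed fallback
    }

    // On the server, Accept-Language is just a header string such as
    // "de-AT,de;q=0.9,en;q=0.5" -- split it and respect the q-weights.
    function pickFromAcceptLanguage(header: string): string {
      const prefs = header
        .split(",")
        .map((part) => {
          const [tag, q] = part.trim().split(";q=");
          return { tag, q: q ? parseFloat(q) : 1 };
        })
        .sort((a, b) => b.q - a.q)
        .map((p) => p.tag);
      return pickLanguage(prefs);
    }

    // e.g. pickLanguage(navigator.languages) in the client, or
    // pickFromAcceptLanguage(request.headers["accept-language"] ?? "") on a
    // hypothetical server request object.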

                                                                                  1. 3

If you did user research on American Spanish speakers, I wonder how many would be confused by the American flag being used to denote English. Have you ever tested this?

I think using Accept-Language by default would be a big improvement, though it's not a panacea. To some extent it just punts the issue to the browser. Changing your language in most browsers requires you to download a language pack, which you won't have permission to do in internet cafes. Maybe that is no longer a problem now that people have smartphones?

                                                                                    1. 4

Changing your language in most browsers requires you to download a language pack

Changing the Accept-Language header does not require any downloads. It is just a string that gets sent; the browser UI stays as it is.

                                                                                      1. 1

                                                                                        your browser will use the language defined by the OS 99% of the time

                                                                                        1. 4

Chrome and Firefox have easy-to-use settings to change the language header you send to websites. There is nothing to install, and you do not need admin rights. It has worked like this for at least the last two decades. That is what I am referring to.

If websites ignore the header, that is not a failing of the browser, but of our industry.

                                                                                  2. 5

                                                                                    I live in India. We have at least 13 languages that have 10+ million speakers, and hundreds of minor languages in active use by smaller communities. Indian currency notes have text in 15 languages. From what I understand, there are several other countries with this kind of linguistic diversity (Nigeria, Pakistan, Indonesia, to name a few).

                                                                                    Using flags to represent languages is a Western European notion. I personally find it both disrespectful and confusing.

                                                                                    1. 6

                                                                                      Using flags to represent languages is a Western European notion. I personally find it both disrespectful and confusing.

                                                                                      It’s worse than that. It’s not just that it’s a Western European notion, the equivalence of language and country is one that has been specifically pushed by majority groups to marginalise minorities. Ask folks whose native language is Gaelic, Welsh, Breton, or Catalan, for example what they think of the equivalence and you’ll get a very different view.

                                                                                  3. 6

                                                                                    I think that is because they developed it for the European market, as their title suggests, and to illustrate their text with emoji/icons.

                                                                                    1. 5

                                                                                      for the record: the use of flags to signify languages has since been corrected in the article

                                                                                      1. 2

                                                                                        You love to see it :)

                                                                                        1. 8

                                                                                          nah actually i hated to see it 🙃 but instead of whining here i asked the author to reconsider… and it got fixed

                                                                                      2. 3

                                                                                        The screenshot uses a flag for German/“Deutsch” which I’ve never seen before, and German is my first language :)

                                                                                        1. 4

                                                                                          To quote the article:

                                                                                          First and foremost, and this is why this example has been used in this particular post, Revolve has bizarrely ended up with the flag of the United Arab Emirates for German

                                                                                        2. 2

What's controversial about the Union Jack representing English, a language born of and primary to that sovereign country?

                                                                                          1. 20

                                                                                            The Union Flag is the flag of several distinct political entities that have different sets of official languages:

                                                                                            • England does not have an official language, though practically English is a de-facto standard.
                                                                                            • Wales has English and Welsh as official languages. All official communications are required to be bilingual and some (such as tax things from HMRC) are bilingual for the whole of the UK as a result.
                                                                                            • Scotland has recognised Scottish Gaelic as an official language since 2005 and has had English as an official language since before then, though this recognition does not require government communication to be delivered in Gaelic and so has little effect. Scots (derived from Northumbrian Old English) is also supported by the Scottish government.
                                                                                            • The story of Irish Gaelic is very complicated because the English made an effort to marginalise it for a long time (the history of Ireland is largely omitted in English schools, on the basis that it’s just too embarrassing for the English). It now has similar status in Northern Ireland to Gaelic in Scotland.

                                                                                            So the flag points to at least three distinct language families and several overlapping ones. Only Wales (which is covered by the flag, but whose flag is not represented, in spite of being the part of the UK with the best flag) has a notion of an official language that carries any significant legal weight and it places English and Welsh on the same level.

You could probably use the St George's Cross to represent en_GB, although both Cornish (Celtic-family) and Scots (mostly the same ancestry as modern English, i.e. a creole of every language spoken by folks who invaded England over a period of a thousand years or so) originated in the area represented by that flag. Either way, you're marginalising speakers of minority languages.

                                                                                            1. 2

I didn't say anything about official languages or political entities, which is almost exactly my point. British English is spoken throughout the United Kingdom, the sovereign country in which it developed into the standard of English that was then spread throughout the world. The flag points to multiple regions with different languages, but none of them is as immediately relevant as English – you don't see the Union Jack and think of Cornish. If the language were Scots, use the Scottish flag. If the language were Gaelic, use the Irish flag (or Ulster Banner, lol). To feign shock and horror at the Union Flag representing the history and origin of the English language is inane.

                                                                                              1. 6

                                                                                                If I clicked the Scottish flag, I could be wanting either Scots or Gaelic, as mentioned by david_chisnall. Likewise, if I scanned for the word Gaelic, I’d personally be expecting Gàidhlig, not Irish. When it comes to English, there’s like half a dozen different flags that may have been chosen that I have to scan for (I have seen UK, USA, Canada, England, and Australia, frequently, and probably others less often), and that’s ignoring any personal feelings I have towards any of those. Country flags and $current_language names for other languages just aren’t the best way to display these things for translation pickers, for multiple reasons.

                                                                                            2. 6

                                                                                              Two issues:

1. The UK has only around the sixth-largest number of English speakers. The language may have come from England, but without a standards body it's anyone's language; the other variants of English are just as valid a part of the English language. Picking any one nation is the wrong call.
2. Getting into the historical weeds: England is the kingdom that spoke English, so if you want to go by history, 🏴󠁧󠁢󠁥󠁮󠁧󠁿 is the flag you're looking for, which isn't nearly as recognizable. Is this the Georgian flag? 🇬🇪

                                                                                              How do we avoid this issue? Just say “English” or en.

                                                                                              1. 4

It doesn't reflect all the other kinds of English spoken by the vast majority of the world. We even call it "British English" to distinguish it from other flavours of English like American, where there are a number of spelling and pronunciation differences (these distinctions even get taught in school in non-English-speaking countries).

                                                                                                1. 1

                                                                                                  hunspell lets you choose ‘-ise’ vs ‘-ize’ for British English, en-GB.

                                                                                                  1. 1

There are also words that differ between American and British English.

                                                                                                    1. 3

That part is obvious, but I think people forget how much diversity there is inside borders.

                                                                                                      1. 1

                                                                                                        Oh, absolutely. British English is pretty well-known for that since there is a wide variety of English spoken between Scotland and South England.

Similarly, Danish has a different number of grammatical genders across the islands. It is all the same Denmark with the same flag.

                                                                                                  2. 1

                                                                                                    And for American English, a U.S. flag is oft used. Perhaps one should’ve been used in the article, but having not used Steady, I wouldn’t know.

                                                                                                    1. 10

                                                                                                      What language would you expect behind a Belgian flag? French or Flemish? Similar goes for Swiss flag. Or Indian flag.

                                                                                                      1. 3

                                                                                                        What language would you expect behind a Belgian flag? French or Flemish?

                                                                                                        German of course! https://en.wikipedia.org/wiki/German_language#German_Sprachraum ;-)

                                                                                                        BTW, Flemish is a dialect, the official language is Dutch.

                                                                                                        1. 1

                                                                                                          The language most spoken in Belgium, obviously.

                                                                                                          1. 2

It is a 55% to 39% split, so which part do you want to alienate by implying their language isn't Belgian?

                                                                                                            1. 2

That's not the implication at all; it's not a comment on the validity of the non-majority language.

                                                                                                1. 2

                                                                                                  Similar reasoning to why Microsoft removed easter eggs from Windows around XP(?): https://docs.microsoft.com/en-gb/archive/blogs/larryosterman/why-no-easter-eggs

                                                                                                  It disappointed younger me, but after getting frustrated by other “fun” features in other tools over the years while trying to debug/fix things or do Serious Work™, I definitely agree these things should stay out of foundational tools. I do still miss the whimsy of messing around with a computer as a child and finding such things though.

                                                                                                  1. 2

                                                                                                    I mean, I think there’s still a place for that whimsy, myself. But I’m also a hobbyist in addition to a professional.

                                                                                                    Situated scripts are a nice place to strike that balance.

                                                                                                    1. 1

                                                                                                      And yet, edge://surf still exists.

                                                                                                    1. 23

I love the attitude: you probably shouldn't take it apart, but it's your hardware, so here are instructions on how to do it correctly.

                                                                                                      1. 18

                                                                                                        That’s the sort of attitude that IMO has made Steam basically the least-evil software distribution store. Not a high bar, unfortunately, but it’s something.

                                                                                                        1. 9

                                                                                                          They’re doing some pretty good work with Proton too, they claim the whole Steam library will be playable on the Steam Deck when it releases. Maybe the day I can switch my gaming PC to Linux isn’t too far off.

                                                                                                          1. 7

                                                                                                            Yep, and it coincidentally started happening just around the time that Microsoft was saying that the MS Store would become the only way to install programs on Windows 10. Somehow, Microsoft eventually decided that was a bad move once Valve started putting serious work into Linux compat and helping game developers port their games.

                                                                                                            Though it also means that about half my own game library works pretty darn well on Linux, so, can’t complain too much.

                                                                                                            1. 4

                                                                                                              Obviously it’s in their own self-interest to do it, and they’ve been pushing it so that they don’t have to pay Microsoft to preinstall Windows on their consoles, but it’s still a good thing overall.

                                                                                                              1. 2

Yeah, I believe this is part of the reason Valve has embraced Linux for so long: it's basically a bit of insurance against the dominance of Windows. I imagine they were well aware of their extreme dependence on MS playing nice (or whatever). I feel like I've read more about this very subject; I'll see if I can dig up any links…

                                                                                                              2. 6

Apparently they are getting anti-cheat software to work on Linux too (EAC, for example), which I thought I would never see happen in my lifetime.

                                                                                                                1. 2

I'm really curious about this. The closest any of these kernel-mode anti-cheats has come to Linux before is EAC, which briefly had an extremely basic native version for the game Rust and was also working on a version that ran under Wine. Those were cancelled the moment Epic Games bought them, so I'm unsure whether they've managed to build limited driver support into Proton, or whether they've made a deal with Epic to get that Wine version going again.

                                                                                                              3. 5

                                                                                                                GoG is [almost] DRM-free, so I try to buy most games there. I wonder how to balance all of the evils against one another to choose the “least-evil”.

                                                                                                              4. 8

All things considered, it's quite amazing that it takes just 8 screws to open the unit and 3 more to replace a thumbstick. Replacing the internal SSD takes 4 more screws, but they strongly discourage people from swapping it, claiming the one that comes installed is selected for (1) power consumption and (2) minimal interference with the wifi module (then again, they also upcharge for more storage, so maybe they just don't want people to buy the cheap version and swap the SSD on the side).

It seems that they put some thought into making the Steam Deck as serviceable as possible given its form factor.

                                                                                                              1. 104

                                                                                                                I’m not a big fan of pure black backgrounds, it feels a bit too « high contrast mode » instead of « dark mode ». I think a very dark gray would feel better to the eye. n=1 though, that’s just a personal feeling.

                                                                                                                Thanks for the theme, it’s still great!

                                                                                                                1. 29

                                                                                                                  Agreed, background-color: #222 is better than #000.

                                                                                                                  1. 15

                                                                                                                    I’ll just put my +1 here. The pure black background with white text isn’t much better than the opposite to me (bright room, regular old monitor). I’ve been using a userstyle called “Neo Dark Lobsters” that overall ain’t perfect, but is background: #222, and I’ll probably continue to use it.

                                                                                                                    On my OLED phone, pure black probably looks great, but that’s the last place I’d use lobste.rs, personally.

                                                                                                                    1. 18

                                                                                                                      Well, while we’re bikeshedding: I do like true black (especially because I have machines with OLED displays, but it’s also a nice non-decision, the best kind of design decision), but the white foreground here is a bit too intense for my taste. I’m no designer, but I think it’s pretty standard to use significantly lower contrast foregrounds for light on dark to reduce the intensity. It’s a bit too eye-burney otherwise.

                                                                                                                      1. 7

                                                                                                                        You have put your finger on something I’ve seen a few times in this thread: The contrast between the black background and the lightest body text is too high. Some users’ wishes to lighten the background are about that, and others’ are about making the site look like other dark mode windows which do not use pure black, and therefore look at home on the same screen at the same time. (Both are valid.)

                                                                                                                        1. 10

For me pure white on pure black is an accessibility nightmare: that high contrast triggers my dyslexia, the text starts to jump around, and that starts inducing a migraine.

As I default to dark themes system-wide and couldn't find a way to override the detected theme, this site is basically unusable for me right now. Usually in these cases I just close the tab and never come back; for this site I decided to type this comment before doing that. Maybe a style change happens, a manual override gets implemented, or maybe I care enough to set up a user stylesheet… but otherwise my visits will stop.

                                                                                                                          1. 1

No need to be so radical; you still have several options. Not sure what browser you're using, but Stylus is available for Chrome/FF:

                                                                                                                            https://addons.mozilla.org/en-US/firefox/addon/styl-us/

It allows you to override the stylesheet for any website with just a few clicks (and a few CSS declarations ;))

                                                                                                                            1. 9

                                                                                                                              I don’t mind the comment. There’s a difference between being radical because of a preference and having an earnest need. Access shouldn’t require certain people to go out of their way on a per-website basis.

                                                                                                                              1. 6

                                                                                                                                It’s not radical, it’s an accessibility problem.

                                                                                                                        2. 8

                                                                                                                          That’s great, thank you.

                                                                                                                          I wonder if I am an outlier in using the site on my phone at night frequently. Alternatively, maybe we could keep the black background only for the mobile style, where it’s more common to have an OLED screen and no other light sources in your environment.

                                                                                                                          1. 2

                                                                                                                            I don’t use my phone much, especially not for reading long-form content, so I wouldn’t be surprised if I was the outlier. That sounds like a reasonable solution, but it’s not going to affect me (since I can keep using a userstyle), so I won’t push either way. I will +1 the lower-contrast comments that others have posted, if it remains #000 though - the blue links are intense.

                                                                                                                            1. 1

                                                                                                                              The blue link color brightness is a point that not many have made. I think the reason I didn’t expect it is that I usually use Night Shift on my devices, which makes blue light less harsh at night. Do you think we should aim to solve this problem regardless of whether users apply nighttime color adjustment? Another way to ask this question: What do you think about dark mode blue links in the daytime?

                                                                                                                              1. 2

                                                                                                                                Sorry if I’m misunderstanding, but to clarify, my above comment is in a bright room; I try to avoid looking at screens in dim light/darkness. The blue links just look kind of dark, and intensely blue. Just a wee reduction in saturation or something makes it easier to read.

Thanks for your work on this btw. I looked into contributing something a while back, but was put off after it looked like the previous attempt stalled out from disagreement. I'd take this over the bright white any day (and it turns out this really is nice on my phone, dark blue links notwithstanding). The CSS variables also make it relatively easy for anyone here to make their own tweaks with a userstyle.

                                                                                                                                I feel like I’ve taken up enough space complaining here, so I’ll leave a couple nitpicks then take my leave: the author name colour is a little dark (similar to links, it’s dark blue on black), and the byline could do with a brightness bump to make it more readable, especially when next to bright white comment text.

                                                                                                                                1. 1

                                                                                                                                  I appreciate the clarification and other details :)

                                                                                                                            2. 1

                                                                                                                              My laptop is OLED and I’d still appreciate #000 there

                                                                                                                              1. 1

                                                                                                                                +1 to separate mobile style.

                                                                                                                            3. 4

                                                                                                                              I strongly agree.

                                                                                                                              I can’t put my finger on why, but I find very dark gray easier.

                                                                                                                              1. 1

                                                                                                                                #222 is way better! thank you

                                                                                                                              2. 14

                                                                                                                                I strongly disagree, and this black background looks and feels great to me! No one can ever seem to agree on the exact shade or hue of grey in their dark themes, so if you have the general UI setting enabled, you end up with a mishmash of neutral, cooler, hotter, and brighter greys that don’t look cohesive at all. But black is always black!

                                                                                                                                For lower contrast, I have my text color set to #ccc in the themes I have written.

                                                                                                                                1. 6

                                                                                                                                  Another user pointed out that pure black is pretty rare in practice, which makes this site stand out in an environment with other dark mode apps:

                                                                                                                                  Here’s a desktop screenshot with lobste.rs visible - notice that it’s the only black background on the screen.

                                                                                                                                  Does that affect your opinion like it did mine? I do see value in pure black, but suppose we treated the too-high-contrast complaint as a separate issue: Darkening the text could make the browser window seem too dim among the other apps.

                                                                                                                                  1. 3

                                                                                                                                    I prefer the black even in that scenario. The contrast makes it easier to read imo.

                                                                                                                                    1. 2

Not at all. If it gets swapped out for grey I will simply go back to my custom CSS, which I have used to black out most of the sites I visit, so no hard feelings.

                                                                                                                                  2. 8

                                                                                                                                    Feedback is most welcome! Would you please include the type of screen you’re using (OLED phone, TFT laptop…) and the lighting environment you’re in (dark room, daytime indoors with a window, etc.)? And do you feel differently in different contexts?

                                                                                                                                    I’ve got some comments about how I selected the colors in the PR, if that helps anyone think through what they would prefer.

                                                                                                                                    1. 4

Sure! I'm on my iPhone 12, so an OLED phone. I tried it with dimmed lights and in the dark, but in both cases I think I'd prefer a lighter background color.

                                                                                                                                    2. 7

I disagree. Black is black. These off-gray variants just look dirty and wrong to me.

                                                                                                                                      I love this theme.

                                                                                                                                    1. 3

                                                                                                                                      Thanks for this, looks great on mobile!

A bit of a tangential question: when not using any particular desktop environment on Linux, is there a way to make Firefox report dark or light mode? I love that phones switch automatically depending on the time of day and would love this on the desktop, but I wasn't able to find any pointers on how to do it.

                                                                                                                                      1. 7

                                                                                                                                        If you set ui.systemUsesDarkTheme to 1 in about:config, that should do it. Not sure about automating it though.
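For context, that preference is what pages see through the prefers-color-scheme media query; a purely illustrative sketch of how a page could pick it up (the data attribute is an assumption, not how this site's theme actually works):

    // Read the dark/light preference the browser reports (on Firefox,
    // ui.systemUsesDarkTheme forces what gets reported here).
    const query = window.matchMedia("(prefers-color-scheme: dark)");

    function applyTheme(dark: boolean): void {
      // Assumed hook: a data attribute the site's CSS could key its colours off.
      document.documentElement.dataset.theme = dark ? "dark" : "light";
    }

    applyTheme(query.matches);

    // If the preference flips (e.g. an OS that switches at sunset),
    // follow along without a reload.
    query.addEventListener("change", (e) => applyTheme(e.matches));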

                                                                                                                                        1. 2

                                                                                                                                          Thanks! I’ll leave the automation for a rainy day but this is already great, thanks a lot.

                                                                                                                                        2. 1

                                                                                                                                          I’m pretty sure changing the Firefox theme to a dark one changes the color preference too.

                                                                                                                                        1. 1

                                                                                                                                          For pure information websites, stick to textual diagrams and art: http://len.falken.ink/philosophy/is-privacy-in-all-our-interests.txt

                                                                                                                                          It gets the point across.

                                                                                                                                          Otherwise I agree: compress and dither the hell out of images appropriately.

                                                                                                                                          1. 13

                                                                                                                                            The image is unreadable on my phone

                                                                                                                                            1. 6

                                                                                                                                              Ironically, this is what it renders like in my browser (Firefox 92 on macOS): https://x.icyphox.sh/DHcX6.png

                                                                                                                                              1. 1

Yep, browsers suck at plain text. It's a pretty sad state of affairs.

                                                                                                                                                1. 10

                                                                                                                                                  So bleeding-edge Unicode good, 1990s graphics codecs bad?

                                                                                                                                                  Also, that non-ASCII art is really going to mess with screen readers. It’s non-semantic as heck.

                                                                                                                                                  1. 3

                                                                                                                                                    Now I wonder why there’s no semantic element for ASCII art in HTML5. RFCs are full of ASCII diagrams, for example.

                                                                                                                                                  2. 9

                                                                                                                                                    Or, an alternative reading: that “plain text” is composed of graphic characters that were only added to unicode last year, and is going to look just as broken in any other application on the same system without a suitable font, so perhaps graphics should be transferred/presented using an actual graphics format.

                                                                                                                                                    1. 1

                                                                                                                                                      My UTF-8 compatible terminal gives pretty much the same output. Find an example that doesn’t use ancient/outdated character sets.

                                                                                                                                                1. 17

                                                                                                                                                  Genuinely I don’t understand the point of this article.

I would pick even GNOME or KDE over Windows's awful GUI (really any of the recent ones, but certainly Windows 10), even though I use i3. Using Windows is just… annoying… frustrating… painful. I have a top-of-the-line laptop from Dell with an Nvidia GPU, 32 GiB of RAM, and a top-of-the-line (at the time) Intel mobile-class CPU. But the machine still finds a reason to bluescreen, randomly shut down without safely powering down my VMs, break, or God knows what else, all the time. And when such a thing happens there are no options to debug it, no good documentation, and no idea of where to even start. I'm glad Windows works for some people, but it doesn't work for me. What wakeup call? What do I need to wake up to? I use Linux, among other things; it's not perfect, but for me it's the best option.

                                                                                                                                                  1. 10

                                                                                                                                                    (NB: I’m the author of the article, although not the one who submitted it)

                                                                                                                                                    Genuinely I don’t understand the point of this article.

                                                                                                                                                    The fact that it’s tagged “rant” should sort of give it away :P. (I.e. it’s entirely pointless!)

                                                                                                                                                    There is a bit of context to it that is probably missing, besides the part that @crazyloglad pointed out here. There is a remarkable degree of turnover among Linux users – nowadays I maybe know 6-7 people who use Linux or a BSD full time, but I know dozens who don’t use it anymore.

And I think one of the reasons for that is the constant software churn in the desktop space. Lots of things, including various GTK/Gnome or KDE components, ritually get torn down, burnt, and rebuilt every 6-8 years or so, and at some point you just get perpetual beta fatigue; I'm not sure what else to call it. Much of it, in the last decade, has been in the name of "better" or "more modern" UX, and yet we're not in a much better position than ten years ago in terms of userbase. Meanwhile, Microsoft swoops in and, on their second attempt, comes up with a pretty convincing Linux desktop, with a small crew and very little original thought around it, just by focusing on things that actually make a difference.

                                                                                                                                                    1. 15

I suspect that Microsoft is accidentally the cause of a lot of the problems with the Linux desktop. Mac OS, even back in the days when it didn't have protected memory and barely qualified as an operating system, had a clear and coherent set of human interface guidelines. Nothing on the system was particularly flashy[1], so it was hard to really understand the value of this consistency unless you used it for a few months: small things like the fact that you bring up preferences in every application in exactly the same way (same menu location, same keyboard shortcut), that text-field navigation with the mouse (e.g. selecting whole words) and shortcut keys works exactly the same everywhere, and that button order is consistent in every dialog box. On Windows, a lot of apps brought their own widget set, in part because '90s Microsoft didn't want to give away the competitive edge of Office and so didn't provide things in the system widget set that would have made writing an Office competitor too easy.

                                                                                                                                                      In contrast, the UI situation on Windows has always been a mess. Most dialog boxes put the buttons the wrong way around[2], but even that isn’t consistent and some put them the right way around. The ones that do get it right just put ‘okay’ and ‘cancel’ on the buttons instead of verbs (for example, on a Mac if you close a window without saving the buttons are ‘delete’, ‘cancel’, ‘save’).

                                                                                                                                                      Macs are expensive. Most of the people working on *NIX desktop environments come from Windows. If they’ve used a Mac, it’s only for a short period, not long enough to learn the value of a consistent UI[3]. People always copy the systems that they’re familiar with and when you’re trying to copy a system that’s a bit of a mess, it’s really hard to come up with something better. The systems that have tried to copy the Mac UI have typically managed the superficial bits (Aqua themes) and not got any of the parts that actually make the Mac productive to use.

                                                                                                                                                      [1] When OS X came out, Apple discovered that showing people the Genie animations for minimising in shops increased sales by a measurable amount. Flashiness can get the first sale, but it isn’t the thing that keeps people on the platform. Spinning cubes get old after a week of use.

[2] Until the '90s, it was believed that this should be a locale-dependent thing. In left-to-right reading order locales, the button implying "go back" should be on the left and the one implying "go forwards" should be on the right; in right-to-left reading order locales, it should be the converse. More recent research has shown that the causation was the wrong way around: left-to-right writing schemes are dominant because humans think of left-to-right as forwards motion, people don't believe left-to-right is forwards because that's the order they're taught to read. Getting this wrong is really glaring now that web browsers are dominant applications, because they all have a pair of arrows where <- means 'go back' and -> means 'go forwards', and yet will still pop up dialogs with the buttons ordered as [proceed] [go back], as if a human might find that intuitive.

                                                                                                                                                      [3] Apple has also been gradually making their UIs less consistent over the last 10-15 years as the HCI folks (people with a background in cognitive and behavioural psychology) retired and were replaced with UX folks (people who followed fads in what looks shiny and had no science to justify their decisions).

                                                                                                                                                      1. 13

IMHO the fact that the Windows UI is so successful, despite how messy it is, points to something that a lot of us don't really want to admit: consistency just isn't that important. It's not pointless, as the original Macintosh very convincingly demonstrated, especially with users who aren't into computers as a hobby. But it's not the holy grail, either.

                                                                                                                                                        Lots of people sneer at CAD apps (or medical apps, I have some experience with that), for example, because their UIs are old and clunky, and they’re happy to ascribe it to the fact that the megacorps behind them just don’t know how to design user interfaces for human users.

But if they were, in fact, to do a significant facelift – flat, large buttons, hamburger menus and all – their existing users, who rely on these apps for 8 hours a day to make those mind-bogglingly complex PCBs and ICs, and who (individually or via their employers) pay for those eye-watering licenses, would hate them and would demand their money back and a downgrade. A facelift that modernized the interface and made it more "intuitive", "cleaner" and "more discoverable" would be – justifiably! – treated as a (hopefully, but not necessarily) temporary productivity killer that's entirely uncalled for: they already know how to use it, so there's no point in making it more intuitive or more discoverable. Plus, these are CAD apps, not TikTok clones. The stakes are higher, and you're not going to rely on gut instinct and interface discoverability; if you're in doubt, you're going to read the manual.

If you make applications designed to offer a quick distraction, or to hook people and show them ads or whatever, it is important to get these things right, because it takes just two seconds of frustration for them to close that stupid app and move on – after all, it's not like they get anything out of it. Professional users obviously don't want bad interfaces either, but functionality is far more important to get right. If your task for the day is to get characteristic impedance figures for the bus lines on your design, and you have to choose between the ugly app that can do it automatically and the beautiful, distraction-free, modern-looking app that doesn't, you're going to go with the ugly one, because you don't get paid for staring at a beautiful app. And once you've learned how to do it, if the interface gets changed and you have to spend another hour figuring out how to do it, you're going to hate it, because that's one hour spent learning how to do something you already knew how to do, and which is not substantially different than before – in other words, it's just wasted time.

                                                                                                                                                        Lots of FOSS applications get this wrong (and I blame ESR and his stupid Aunt Tilly essay for that): they ascribe the success of some competitors to the beautiful UIs, rather than functionality. Then beautiful UIs do get done, sometimes after a long time of hard work and often at the price of tearing down old functionality and ending up with a less capable version, and still nobody wants these things. They’re still a footnote of the computer industry.

                                                                                                                                                        I’ve also slowly become convinced of something else. Elegant though they may be, grand, over-arching theories of human-computer interactions are just not very useful. The devil is in the details, and accounting for the quirky details of quirky real-life processes often just results in quirky interfaces. Thing is, if you don’t understand the real life process (IC design, neurosurgery procedures, operation scheduling, whatever), you look at the GUIs and you think they’re overcomplicated and intimidating, and you want to make them simpler. If you do understand the process, they actually make a lot of sense, and the simpler interfaces are actually hard to use, because they make you work harder to get all the details right.

That’s why academic papers on HCI are such incredible snoozefests to read compared to designer blogs, and so often leave you with questions and doubts. They make reserved, modest claims about limited scenarios, instead of grand, categorical statements about everyone and everything. But they do survive contact with the real world, and since they’re falsifiable, incorrect theories (like localised directionality) get abandoned. Whereas the grand esoteric theories of UX design can quickly weasel their way around counter-examples by claiming all sorts of exceptions or, if all else fails, by simply decreeing that users don’t know what they want, and that if a design isn’t as efficient as it’s supposed to be, they’re just holding it wrong. But because grand theories make for attractive explanations, they catch on more easily.

(Edit: for shits and giggles, a few years ago, I did a quick test. Fitts’ Law gets thrown around a lot as a reason for making widgets bigger, because they’re easier to hit. Never mind that that’s not really what Fitts measured 50 years ago – but if you bother to run the numbers, it turns out that a lot of these “easier to hit” UIs actually have worse difficulty figures, because while the targets get bigger, the extra padding from so many targets adds up and travel distances increase enough that the difficulty index is, at best, only marginally improved. I don’t remember exactly what I ran the numbers on; I think it was some dialogs in the new GTK3 release of Evolution and some KDE apps when the larger Oxygen theme landed – in some cases they were worse by 15%)
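For reference, a rough sketch of that kind of back-of-the-envelope check, using the Shannon formulation of Fitts’ index of difficulty, ID = log2(D/W + 1). The pixel values below are invented for illustration, not the actual Evolution/KDE measurements:

    import math

    def index_of_difficulty(distance, width):
        """Shannon formulation of Fitts' index of difficulty, in bits."""
        return math.log2(distance / width + 1)

    # Hypothetical "compact" dialog: small button, short pointer travel.
    compact = index_of_difficulty(distance=220, width=24)

    # Hypothetical "modernised" dialog: the button is twice as wide, but the
    # extra padding around every widget pushes it further away too.
    spacious = index_of_difficulty(distance=520, width=48)

    print(f"compact:  {compact:.2f} bits")   # ~3.35
    print(f"spacious: {spacious:.2f} bits")  # ~3.56 -- bigger target, harder to hit

Whether the bigger target actually wins depends entirely on whether W grows faster than D, which it often doesn’t once all the padding adds up.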

                                                                                                                                                        Apple has also been gradually making their UIs less consistent over the last 10-15 years as the HCI folks (people with a background in cognitive and behavioural psychology) retired and were replaced with UX folks (people who followed fads in what looks shiny and had no science to justify their decisions).

                                                                                                                                                        This isn’t limited to Apple, though, it’s been a general regression everywhere, including FOSS. I’m pretty sure you can use Planet Gnome to test hypertension meds at this point, some of the UX posts there are beyond enraging.

                                                                                                                                                        1. 1

AutoCAD did get a significant facelift, cloning the Office 2007 “ribbon” interface (itself a significant facelift).

                                                                                                                                                          1. 1

                                                                                                                                                            AutoCAD is in a somewhat “privileged” position, in that it has an excellent command interface that most old-time users are using (I haven’t used AutoCAD in years, but back when I did, I barely knew what was in the menus). But even in their case, the update took a while to trickle down, it was not very well received, and they shipped the “classic workspace” option for years along with the ribbon interface (I’m not sure if they still do but I wouldn’t be surprised if they did).

                                                                                                                                                        2. 4

                                                                                                                                                          More recent research has shown that the causation was the wrong way around: left-to-right writing schemes are dominant because humans think left-to-right is forwards motion, people don’t believe left-to-right is forwards because that’s the order that they’re taught to read.

                                                                                                                                                          Do you have a good source for this? Arabic and Hebrew are prominent (and old!) right-to-left languages; it would seem more likely (to me) that a toss of the coin decided which direction a civilization wrote rather than “left-to-right is more natural and a huge chunk of civilization got it backwards.”

                                                                                                                                                        3. 2

                                                                                                                                                          There is a remarkable degree of turnover among Linux users – nowadays I maybe know 6-7 people who use Linux or a BSD full time, but I know dozens who don’t use it anymore.

I think that chasing the shiny object is to blame for a lot of that. Sometimes the shiny object really is better (systemd, for all its multitude of flaws, failures, misfeatures and malfeasances, really is an improvement on the state of things before), sometimes it might be (Wayland might be worth it, in another decade, maybe), and sometimes it was not, is not and never shall be (here I think of the removal of screensavers from GNOME, of secure password sync from Firefox[0] and of extensions from mobile Firefox).

                                                                                                                                                          I don’t think it is coincidence that so many folks are using i3, dwm and StumpWM now — they really are better than the desktop environments.

                                                                                                                                                          But, for what it’s worth, I don’t think I know anyone who used to use Linux or a BSD, and I have been using Linux solely for almost 22 years now.

[0] Yes, Firefox still offers password sync, but it is now possible for Mozilla to steal your decryption key by delivering malicious JavaScript on a Firefox Account login. The old protocol really was secure.

                                                                                                                                                          1. 3

                                                                                                                                                            I don’t think it is coincidence that so many folks are using i3, dwm and StumpWM now — they really are better than the desktop environments.

They are, but it’s also really disappointing. The fact that tiling a bunch of VT-220s on a monitor is substantially better than, or at least a sufficiently good alternative to, GUIs developed 40 years after the Xerox Star for so many people really says a lot about the quality of said GUIs.

                                                                                                                                                            But, for what it’s worth, I don’t think I know anyone who used to use Linux or a BSD, and I have been using Linux solely for almost 22 years now.

                                                                                                                                                            This obviously varies a lot, I don’t wanna claim that what I know is anything more than anecdata. But e.g. everyone in what used to be my local LUG has a Mac now. Some of them use Windows with Cygwin or WSL, mostly because they still use some old tools they wrote or their fingers are very much used to things like bc. I still run Linux and OpenBSD on most of my machines, just not the one I generally work on, that’s a Mac, and I don’t like it, I just dislike it the least.

                                                                                                                                                          2. 1

                                                                                                                                                            That churn is extremely superficial, though. I can work comfortably on anything from twm to latest ubuntu.

                                                                                                                                                          3. 9

                                                                                                                                                            I do have a linux machine for my work stuff running KDE. And I love the amount of stuff I can customize, hotkeys that can be changed out of the box, updates I can control etc.

                                                                                                                                                            But if you get windows to run in a stable manner (look out for updates, disable fast start/stop, disable some annoying services, get a professional version so it allows you to do that, get some additions for a tabbed explorer, remove all them ugly tiles in the start menu, disable anything that has “cortana” in its name and forget windows search), then you will have a better experience on windows. You’ll not have to deal with broken GPU drivers, you’ll not have to deal with broken multi-display multi-DPI stuff, which includes no option to scale differently, display switching crashing your desktop, laptops going back to sleep because you were too fast in closing its lid on bootup when you connected an external display. You’ll not have to deal with your pricey GPU not getting used for video encoding and decoding. Browsers not using hardware acceleration and rendering 90% on the CPU. Games being broken or not using the GPU fully. Sleep mode sometimes not waking up some PCIE device, leading to a complete hangup of the laptop. So the moment you actually want to use your hardware fully, maybe even game on that and do anything that is more than a 1 display system with a CPU, you’ll be pleased to use windows. And let’s not talk about driver problems because of some random changes in linux that breaks running a printer+scanner via USB. That is the sad truth.

                                                                                                                                                            Maybe Wayland will change at least the display problems, but that doesn’t fix anything regarding broken GPU support. And no matter whose fault it is, I don’t buy a PC for 1200€, just so I can watch my PC trying to render my desktop in 4k on the CPU, tearing in videos and random flickering when doing stuff with blender. I’m not up to tinkering with that, I want to tinker with software I built, not with some bizarre GPU driver and 9k Stackoverflow/Askubuntu/Serverfault entries of people who all can’t do anything, because proprietary GPU problems are simply a blackbox. I haven’t had any bluescreen in the last 5 years except one, and that was my fault for overloading the VRAM in windows.

                                                                                                                                                            And at that point WSL2 might actually be a threat, because it might allow me to just ditch linux on my box entirely and get the good stuff in WSL2 but remove the driver pain (while the reverse isn’t possible). Why bother with dual boot or two machines if you can use everything with a WSL2 setup. It might even fix the hardware acceleration problem in linux, because windows can just hand over a virtualized GPU that uses the real one underneath using the official drivers. I won’t have to tell people to try linux on the desktop, they can just use WSL2 for the stuff that requires it and leave the whole linux desktop on the side, along with all the knowledge of installing it or actually trying out a full linux desktop. (I haven’t used it at this point) What this will do is remove momentum and eventually interest from people to get a good linux desktop up and running, maybe even cripple the linux kernel in terms of hardware support. Because why bother with all those devices if you’re reduced to running on servers and in a virtualized environment of windows, where all you need are the generic drivers.

I can definitely see that coming. I used linux primarily pre-corona, and now that I’m home most of the time I dread starting my linux box.

                                                                                                                                                            1. 1

                                                                                                                                                              look out for updates

                                                                                                                                                              What do you mean by this? Are you saying I should manually review and read about every update?

                                                                                                                                                              disable fast start/stop

                                                                                                                                                              Done

                                                                                                                                                              disable some annoying services

                                                                                                                                                              I’m curious which ones but I think I disabled most of them.

                                                                                                                                                              get a professional version so it allows you to do that

                                                                                                                                                              Windows 10 enterprise.

                                                                                                                                                              get some additions for a tabbed explorer

                                                                                                                                                              Can you recommend some?

                                                                                                                                                              remove all them ugly tiles in the start menu, disable anything that has “cortana” in its name and forget windows search)

                                                                                                                                                              Done and done and done

                                                                                                                                                              broken GPU drivers

I haven’t had to deal with this yet, but I’ve had multiple instances where USB, bluetooth, or the dock stopped working after a windows update, even though they worked fine before it, and I had to manually update the drivers to get them working again.

                                                                                                                                                              multi-display multi-DPI

                                                                                                                                                              I don’t think there currently exists any non-broken multi-DPI solution on windows or any other platform and so I avoid having this problem in the first place. The windows solution to this problem is just as bad as the wayland one. You can’t solve this problem if you rasterize before knowing where the pixels will end up being. You would need a new model for how you describe visuals on a screen which would be vector graphics oriented.

                                                                                                                                                              display switching crashing your desktop, laptops going back to sleep because you were too fast in closing its lid on bootup when you connected an external display

                                                                                                                                                              I have had the first one happen a few times on windows, the second issue is something I don’t run into since I don’t currently run my laptop with the lid closed while using external displays, but it’s something I’ve planned to move to. I’ve been procrastinating moving to this setup because of the number of times I’ve seen it break for coworkers (running the same hardware and software configuration). I’ve never had a display switch crash anything on linux, although I’ve had games cause X to crash but at least I had a debug log to work from at that point and could at least see if I can do something about it.

                                                                                                                                                              Games being broken or not using the GPU fully.

                                                                                                                                                              Gaming on linux, if you don’t mind doing an odd bit of tinkering, has certainly been a lot less stressful than gaming on windows, which works fine until something breaks and then there’s absolutely zero information which is available to fix it. It’s not ideal but I play VR games on linux, I take advantage of my hardware, it’s a very viable platform especially when I don’t want to deal with the constant shitty mess of windows. I’ve never heard of a game not using the GPU fully (when it works).

                                                                                                                                                              So the moment you actually want to use your hardware fully, maybe even game on that and do anything that is more than a 1 display system with a CPU, you’ll be pleased to use windows.

                                                                                                                                                              I use windows and linux on a daily basis. I’m pleased to use linux, I sometimes want to change jobs because of having to use windows.

                                                                                                                                                              And let’s not talk about driver problems because of some random changes in linux that breaks running a printer+scanner via USB.

Or when you update windows and your printer+scanner no longer works. My printing experience on linux has generally been more pleasant than on windows because printers don’t suddenly become bricks just because microsoft decides to force you to update to a new version of windows overnight.

                                                                                                                                                              Printers still suck (and so do scanners) but I’ve mitigated most problems by sticking to supported models (of which there are plenty of good online databases).

                                                                                                                                                              1. 1

                                                                                                                                                                I don’t think there currently exists any non-broken multi-DPI solution on windows or any other platform and so I avoid having this problem in the first place. The windows solution to this problem is just as bad as the wayland one. You can’t solve this problem if you rasterize before knowing where the pixels will end up being. You would need a new model for how you describe visuals on a screen which would be vector graphics oriented.

I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

                                                                                                                                                                Meanwhile my Linux-Box can’t even render videos on 4k due to missing hardware acceleration (don’t forget the tearing). And obviously it’s not capable of different scaling between the HighDPI and the 1080p display. Thus it’s a blurry 2k res on a 4k display. And after logging into the 2k screen, my whole plasmashell is crashed, which is why I’ve got a bash command hotkey to restart it.

I haven’t had to deal with this yet, but I’ve had multiple instances where USB, bluetooth, or the dock stopped working after a windows update, even though they worked fine before it, and I had to manually update the drivers to get them working again.

                                                                                                                                                                I never had any broken devices after an update or a malfunctioning system. Only one BSOD directly after an upgrade, which fixed itself with a restart.

                                                                                                                                                                I’ve never heard of a game not using the GPU fully

Nouveau is notorious for not being able to control the clock speed, so the driver can’t use the card’s full capacity. Fixing a bad GPU driver on linux had me reinstalling the whole OS multiple times.

                                                                                                                                                                1. 2

I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

Same experience here. I tried using a Linux + Windows laptop for 7 months or so. Windows mixed DPI support is generally good, including fractional scaling (which is what you really want on a 14” 1080p screen). The exceptions are some older applications, which have blurry fonts. Mixed DPI on macOS is nearly flawless.

                                                                                                                                                                  On Linux + GNOME it is ok if you use Wayland and all your screens use integer scaling. It all breaks down once you use fractional scaling. X11 applications are blurry (even on integer-scaled screens) because they are scaled up. Plus rendering becomes much slower with fractional scaling.
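To make the “scaled up” part concrete, here is a toy sketch (invented function names, not any real compositor’s code) of the usual render-at-the-next-integer-scale-then-resample approach, which is where both the blur and the extra rendering work come from:

    import math

    def client_buffer(logical_w, logical_h, monitor_scale):
        """Clients that only understand integer scales render at the next
        integer at or above the monitor's scale factor."""
        s = math.ceil(monitor_scale)
        return logical_w * s, logical_h * s

    def on_screen(logical_w, logical_h, monitor_scale):
        """What the compositor actually has to put on the panel."""
        return round(logical_w * monitor_scale), round(logical_h * monitor_scale)

    # A 640x480 (logical) window on a 1.25x monitor:
    print(client_buffer(640, 480, 1.25))  # (1280, 960) -- rendered at 2x
    print(on_screen(640, 480, 1.25))      # (800, 600)  -- shown after downscaling

    # 1280x960 -> 800x600 is a non-integer resample, so glyphs hinted for the
    # 2x grid land between physical pixels (blur), and every frame pays for the
    # oversized buffer plus the scaling pass (slower rendering).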

                                                                                                                                                                  Meanwhile my Linux-Box can’t even render videos on 4k due to missing hardware acceleration (don’t forget the tearing).

I did get it to work, both on AMD and NVIDIA (proprietary drivers). But it pretty much only works on applications that have good support for VA-API (e.g. mpv) or NVDEC and to some extent with Firefox (you have to enable experimental options, force h.264 on e.g. youtube, and it crashes more often). With a lot of applications, like Zoom, Skype, or Chrome, rendering happens on the CPU and it blows away your battery life and you have constantly spinning fans.

                                                                                                                                                                  1. 1

Yeah the battery stuff is really annoying. I really hope wayland will finally take over everything and we’ll have at least some good scaling. Playback on VLC works, but I actually don’t want to have to download everything to play it smoothly, so firefox would have to support that first. (And for movie streaming you can’t download stuff.)

                                                                                                                                                                  2. 1

I have no problems moving windows between HighDPI and normal 1080p displays on windows. Windows 11 will fix a lot of the multi-screen issues of moving windows to the wrong display.

If you completely move a window between two displays, the problem is easy-ish to solve with some hacks, and it’s easier still if one dpi is a multiple of the other. And issues especially occur when windows straddle the screen boundary. Try running a game across two displays on a multi-dpi setup: you will either end up with half the game getting downscaled from 4k (which is a waste of resources, and your gpu probably can’t handle that at 60fps) or you end up with a blurry mess on the other screen. When I did use multi-dpi on windows as recently as windows 10, there were still plenty of windows core components which would not render correctly when you did this. You would get either blurriness or text rasterization that looked off.

                                                                                                                                                                    But like I said, this problem is easily solved by not having a multi-dpi setup. No modern software fully supports this properly, and no solution is fully seamless, just because YOU can’t personally spot all the problems doesn’t mean that they don’t exist. Some people’s standards for “working” are different or involve different workloads.

                                                                                                                                                                    Meanwhile my Linux-Box can’t even render videos on 4k due to missing hardware acceleration (don’t forget the tearing).

                                                                                                                                                                    Sounds like issues with your configuration. I run 4k videos at 60Hz with HDR from a single board computer running linux, it would run at 10fps if it had to rely solely on the CPU. It’s a solved problem. If you’re complaining because it doesn’t work in your web browser, I can sympathise there, but that’s not because there’s no support for it, it’s just that by default it’s disabled (at least on firefox) for some reason. You can enable it by following a short guide in 5 minutes and never have to worry about it again. A small price to pay for an operating system that actually does what you ask it to.

                                                                                                                                                                    And obviously it’s not capable of different scaling between the HighDPI and the 1080p display. Thus it’s a blurry 2k res on a 4k display.

                                                                                                                                                                    Wayland does support this (I think), but like I said, there is no real solution to this which wouldn’t involve completely redesigning everything including core graphics libraries and everyone’s mental model of how screens work.

                                                                                                                                                                    Really, getting hung up on multi-dpi support seems a little bit weird. Just buy a second 4k display if you care so much.

                                                                                                                                                                    And after logging into the 2k screen, my whole plasmashell is crashed, which is why I’ve got a bash command hotkey to restart it.

                                                                                                                                                                    Then don’t use plasma.

                                                                                                                                                                    At least on linux you get the choice not to use plasma. When windows explorer has its regular weekly breakage the only option I have is rebooting windows. I can’t even replace it.

                                                                                                                                                                    Heck, if you are still hung up on wanting to use KDE then fix the bug. At least with linux you have the facilities to do this. When bugs like this appear on windows (especially when they only affect a tiny fraction of users) there’s no guarantee when or if it will be fixed. I don’t keep track but I’ve regularly encountered dozens of different bugs in windows over the course of using it for the past 15 years.

                                                                                                                                                                    I never had any broken devices after an update or a malfunctioning system. Only one BSOD directly after an upgrade, which fixed itself with a restart.

                                                                                                                                                                    Good for you. My point is that your experience is not universal and that there are people for whom linux breaks a lot less than windows. You insisting this isn’t the case won’t make it so.

Nouveau is notorious for not being able to control the clock speed, so the driver can’t use the card’s full capacity.

                                                                                                                                                                    Which matters why?

                                                                                                                                                                    If someone wrote a third party open source nvidia driver for windows would you claim that windows can’t take full advantage of hardware? What kind of argument is this?

Nouveau is one option, and it’s not supported by nvidia; no wonder it doesn’t work as well when it’s based on reverse engineering efforts. However, this would only be a valid criticism if nvidia didn’t ship its own proprietary gpu drivers for linux, which work just fine. If you want a better experience with open source drivers, then pick hardware with proper linux support, like intel or amd gpus. I’ve run both, and although I now refuse to buy nvidia on the principle that they just refuse to try to cooperate with anyone, it actually worked fine for over 5 years of linux gaming.

                                                                                                                                                                    1. 5

                                                                                                                                                                      I agree with a lot of your post, so I’m not going to repeat that (other than adding a strong +1 to avoiding nvidia on that principle), but I want to call out this:

                                                                                                                                                                      Really, getting hung up on multi-dpi support seems a little bit weird. Just buy a second 4k display if you care so much.

                                                                                                                                                                      It may not be a concern to you, but that doesn’t mean it doesn’t affect others. There are many cases where you’d have displays with different densities, and two different-density monitors is just one. Two examples that I personally have:

1. My work macbook has a very high DPI display, but if I want more screen space while working from home, I have to plug in one of my personal 24” 1080p monitors. The way Apple do the scaling isn’t the best, but different scaling per display is otherwise seamless. Trying to do that with my Linux laptop is a mess.
                                                                                                                                                                      2. I have a pen display that is a higher density than my regular monitors. It’s mostly fine since you use it up-close, but being able to bump it up to 125% or so would be perfect. That’s just not a thing I can do nicely on my Linux desktop. I’m planning to upgrade it at some point soon to one that’s even higher density, where I’m guessing 200% scaling would work nicely, but I may end up stuck having to boot into Windows to use it at all.

                                                                                                                                                                      There are likely many other scenarios where it’s not “simply” a case of upgrading a single monitor, but also, the “Just buy [potentially very expensive thing]” argument is incredibly weak and dismissive in its own right.

                                                                                                                                                                      1. 1

My work macbook has a very high DPI display, but if I want more screen space while working from home, I have to plug in one of my personal 24” 1080p monitors. The way Apple do the scaling isn’t the best, but different scaling per display is otherwise seamless. Trying to do that with my Linux laptop is a mess.

                                                                                                                                                                        I get that, but my point is that you can just get a second 1080p monitor and close your laptop. Or buy two high DPI monitors.

Really, the problem I have with this kind of criticism is that, although it’s valid, I would rather have some DPI problems and a slightly ugly UI from displaying 1080p on a 4k display than have all the annoying problems I have with windows, especially when I have actual work to do. It’s incredibly stressful to have the hardware and software I am required by my company to use cause hours of downtime or lost work per week. With linux there is a lot less stress; I just have to be cognizant of making the right hardware buying decisions.

                                                                                                                                                                        I have a pen display that is a higher density than my regular monitors. It’s mostly fine since you use it up-close, but being able to bump it up to 125% or so would be perfect. That’s just not a thing I can do nicely on my Linux desktop. I’m planning to upgrade it at some point soon to one that’s even higher density, where I’m guessing 200% scaling would work nicely, but I may end up stuck having to boot into Windows to use it at all.

                                                                                                                                                                        I think you should try wayland. It can do scaling and I think I have even seen it work (about as well as multi-dpi solutions can work given the state of things).

                                                                                                                                                                        If you are absolutely stuck on X there are a couple of workarounds, one is launching your drawing application at a higher DPI. It won’t change if you move it to a different screen but it is not actually that big of a hack and will probably solve your particular problem. I even found a reddit post for it: https://old.reddit.com/r/archlinux/comments/5x2syg/multiple_monitors_with_different_dpis/

                                                                                                                                                                        The other hack is to run 2 X servers but that’s really unpleasant to work with. But since you are using a specific application on that display this may work too.
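For the first workaround, a rough sketch of what launching a single application with its own scale factor can look like. QT_SCALE_FACTOR, GDK_SCALE and GDK_DPI_SCALE are real toolkit environment variables, but which one applies depends on the app’s toolkit, and “krita” below is just a placeholder for whatever your pen-display application is:

    import os
    import subprocess

    # Give one application its own scale factor, independent of the desktop.
    env = dict(os.environ)
    env["QT_SCALE_FACTOR"] = "1.25"    # Qt apps accept fractional factors
    # env["GDK_SCALE"] = "2"           # GTK only takes integer scales here...
    # env["GDK_DPI_SCALE"] = "0.625"   # ...combined with this for an effective 1.25

    subprocess.run(["krita"], env=env)  # placeholder application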

                                                                                                                                                                        potentially very expensive thing

                                                                                                                                                                        If you’re dealing with a work mac, get your workplace to pay for it.

                                                                                                                                                                        Enterprise Windows 10 licenses cost money too, not as much as good monitors, but they’re not an order of magnitude more expensive (although I guess it depends on if you buy them from apple).

                                                                                                                                                                        1. 2

                                                                                                                                                                          I get that, but my point is that you can just get a second 1080p monitor and close your laptop. Or buy two high DPI monitors.

                                                                                                                                                                          Once again, the “just pay more money” is an incredibly dismissive and weak argument, unless you’re willing to start shelling out cash to strangers on the internet. If someone had the means and desire to do so, they obviously would have done so already.

                                                                                                                                                                          I think you should try wayland. It can do scaling

Wayland might be suitable in some cases (it’s not in mine), but it’s also nowhere near a general solution yet.

                                                                                                                                                                          If you’re dealing with a work mac, get your workplace to pay for it.

                                                                                                                                                                          I was using it as an example - forget I used the word “work” and it holds just as true. My current setup is “fine” for me, but I’m not the only person in the world with a macbook, a monitor, and a desire to plug the two together.


                                                                                                                                                                          The entire point of my comment wasn’t to ask for solutions to two very specific problems I personally have; it was to point out that you’re being dismissive of issues that you yourself don’t have, while also pointing out that someone else’s issues are not everyone’s. To use your own words, “My point is that your experience is not universal”.

                                                                                                                                                                          1. 0

                                                                                                                                                                            Once again, the “just pay more money” is an incredibly dismissive and weak argument, unless you’re willing to start shelling out cash to strangers on the internet. If someone had the means and desire to do so, they obviously would have done so already.

No, actually, let’s bring this thread back to its core.

Some strangers on the internet (not you) are telling me that windows is so great and that it will solve all my problems, or that linux has massive irredeemable problems, and then proceed to list “completely fucking insignificant” (in my opinion) UI and scaling issues compared to my burnout-inducing endless hell of windows issues. Regarding the problems they claim it solves: either they don’t exist on linux (so there is nothing to solve), or windows doesn’t solve them to my satisfaction, or they’re not things I consider problems at all (and in multiple cases I don’t think that’s just me; I think the person is just misled as to what counts as a linux problem, or just has a unique bad experience).

What’s insulting is the rest of this thread (not you): people who keep telling me how wrong I am about my consistently negative experience with windows and positive experience with linux, and how amazing windows is because you can play games with intrusive kernel-mode anti-cheat, as if not being able to run literal malware is one of the biggest problems I should, according to them, be having with linux.

                                                                                                                                                                            My needs are unconventional, they are not met in an acceptable manner by windows. I started off by saying “I’m glad windows works for some people, but it doesn’t work for me.” I wish people actually read that part before they started listing off how windows can solve all my problems. I use windows on a daily basis and I hate it.

                                                                                                                                                                            So really, what is “an incredibly dismissive and weak argument” is people insisting that the solutions that work for me are somehow not acceptable when I’m the only one who has to accept them.

I am not surprised you got turned around and started thinking that I was trying to dismiss other people’s experiences with windows and linux, because that’s what it would look like if you read this thread as me defending linux as a viable tool for everyone. It is not; I am simply defending linux as a viable tool for me.

                                                                                                                                                                      2. 3

I don’t want to use things on multiple screens at the same time; I want them to be able to move across different displays while changing their scaling accordingly. And that is already something I want when connecting one display to one laptop: you don’t want your 1080p laptop screen scaled like your 1080p external display. And I certainly like writing on higher-res displays for work.

When I did use multi-dpi on windows as recently as windows 10, there were still plenty of windows core components which would not render correctly when you did this. You would get either blurriness or text rasterization that looked off.

None of which are my daily drivers. Not browsers, explorer, taskmanager, telegram, discord, steam, VLC, VS, VSCode.

                                                                                                                                                                        Then don’t use plasma

And then what? i3? gnome? Could just use apple, they have a unix that works at least. “Just exchange the whole desktop experience and it might work again” sounds like a nice solution.

                                                                                                                                                                        When bugs like this appear on windows (especially when they only affect a tiny fraction of users) there’s no guarantee when or if it will be fixed.

And on linux you’ll have to pray somebody hears you in the white noise of people complaining and actually fixes stuff for you, and doesn’t leave it for years as a bug report in a horrible bugzilla instance. Or you just start being the expert yourself, which is possible if you’ve got nothing else to do. (And then have fun bringing that fix upstream.) It’s not that simple. It’s nice to have the possibility of recompiling stuff yourself, but that doesn’t magically fix the problem, nor give you the knowledge of how to do so.

                                                                                                                                                                        You insisting this isn’t the case won’t make it so.

And that’s where I’m not sure it’s worth discussing any further. Because you’re clearly down-sizing linux GPU problems to “just tinker with it/just use wayland even if it breaks many programs” while complaining about the same on windows. My experience may be different to yours, but the comments and votes here, plus my circle of friends (and many students at my department) are speaking for my experience. One where people complain about windows and hate its update policy, but love it for simply working with games(*), for scaling where linux falls flat on its face, and for other features. You seem to simply ignore everyone that doesn’t want to tinker around with their GPU setup. No, your firefox won’t be able to do playback on a 4k screen out of the box; it’ll do that on your CPU by default. We even had submissions here about how broken those interfaces are, so firefox and chrome disabled their GPU acceleration support on linux and only turned it back on for some cards after some time. Seems to be very stable…

                                                                                                                                                                        I like linux, but I really dread its shortcomings for everything that is consumer facing and not servers I can hack with and forget about UIs. And I know for certain how bad windows can be. I’ve set up my whole family on linux, so it can definitely work. I only have to explain to them again why blender on linux may just crash randomly.

(*) Yes, all of them, including anti-cheats, which either won’t work on linux or leave you gambling on when they’ll ban you. I know some friends running Hyper-V emulation in KVM to get them to run on rainbow.

                                                                                                                                                                        1. 1

                                                                                                                                                                          taskmanager

                                                                                                                                                                          The fact that taskmanager is one of your daily driver applications is quite funny.

                                                                                                                                                                          … VS, VSCode

                                                                                                                                                                          I certainly use more obscure applications than these, so it explains why I have more obscure problems.

And then what? i3? gnome? Could just use apple, they have a unix that works at least. “Just exchange the whole desktop experience and it might work again” sounds like a nice solution.

KDE has never been the most stable option, though it has usually been the prettiest. I’m sorry about the issues you’re having, but at least you have options, unlike on windows.

And on linux you’ll have to pray somebody hears you in the white noise of people complaining and actually fixes stuff for you, and doesn’t leave it for years as a bug report in a horrible bugzilla instance. Or you just start being the expert yourself, which is possible if you’ve got nothing else to do. (And then have fun bringing that fix upstream.) It’s not that simple. It’s nice to have the possibility of recompiling stuff yourself, but that doesn’t magically fix the problem, nor give you the knowledge of how to do so.

                                                                                                                                                                          You have to pray someone hears you regardless. The point is that on linux you can actually fix it yourself, or switch the component out for something else. On windows you don’t have either option.

                                                                                                                                                                          And then have fun bringing that fix upstream.

                                                                                                                                                                          Usually much easier than trying to get someone else to fix it. Funnily enough projects love bug fixes.

                                                                                                                                                                          It’s not that simple.

                                                                                                                                                                          I’ll gladly take not simple over impossible any day.

And that’s where I’m not sure it’s worth discussing any further, because you’re clearly downplaying linux GPU problems to “just tinker with it/just use wayland even if it breaks many programs” while complaining about the same on windows.

I genuinely have not had this mythical GPU worst-case disaster scenario you keep describing. So I’m not “downplaying” anything, I am just suggesting that maybe it’s your own fault. Really, I’ve used a very diverse set of hardware over the past few years. The point I’ve been making repeatedly is that “tinkering” to get something to work on linux is far easier than “copy-pasting random commands from blog posts which went dead 10 years ago until something works” on windows. When things break on linux it’s a night-and-day difference in debugging experience compared to windows. You do need to know a little bit about how things work, but I’ve used windows for longer than I have used linux and I know less about how it works despite my best efforts to learn.

                                                                                                                                                                          Your GPU problems seem to stem from the fact that you are using nouveau. Stop using nouveau. It won’t break anything, it will just mean you can stop complaining about everything being broken. It might even fix your plasma crashes when you connect a second monitor.
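(For anyone reading along who does want to make that switch: the usual approach is to install your distro’s proprietary NVIDIA driver package and then keep nouveau from claiming the card first. A minimal sketch of the modprobe side, assuming the proprietary driver is already installed; the file name is conventional, not mandatory:

```
# /etc/modprobe.d/blacklist-nouveau.conf
# Keep the nouveau kernel module from binding to the GPU so the
# proprietary driver can take over on the next boot.
blacklist nouveau
options nouveau modeset=0
```

After that you typically regenerate the initramfs and reboot; the exact commands differ per distro, so check your distro’s documentation.)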

My experience may be different from yours, but the comments and votes here, plus my circle of friends (and many students in my department), speak to my experience.

                                                                                                                                                                          I could also pull out a large suite of anecdotes but really that won’t make an argument, so maybe let’s not go there?

                                                                                                                                                                          But love it for simply working with games(*),

Some games not working on linux is not a linux problem, despite the absolute best efforts of linux users to make it their problem. Catastrophically anti-consumer and anti-privacy anti-cheat solutions are certainly not something you can easily make work on linux, but I’m not certain I want them to work.

scaling where linux falls flat on its face

                                                                                                                                                                          I’ll take some scaling issues and being able to actually use my computer and get it to do what I want over work lost, time lost and incredible stress.

No your firefox won’t be able to do playback on a 4K screen out of the box, it’ll do that on your CPU by default.

                                                                                                                                                                          Good to know you read the bit of my comment where I already addressed this.

                                                                                                                                                                          Seems to be very stable..

                                                                                                                                                                          Okay, at this point you’re close to just being insulting. Let me spell it out for you:

Needing to configure firefox to use hardware acceleration, not having a hacky automatic solution for multi-DPI on X, not being able to play games which employ anti-cheat solutions Orwell couldn’t have imagined, some UI inconsistencies, having to tinker sometimes: these are all insignificant problems compared to the issues I have with windows on a regular basis. You said it yourself: you use a web browser, two web-browser-based programs, three programs developed by Microsoft to work on windows (although that’s never stopped them from being broken for me), a media player which statically links mplayer libraries that weren’t developed for windows, and a chat client. Your usecase is vanilla.
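To be concrete about how small that first item is: on most setups it comes down to flipping a couple of about:config prefs. The exact pref names have shifted between Firefox releases and depend on your GPU stack, so treat these as illustrative rather than authoritative:

```
// user.js — illustrative prefs for GPU-assisted playback on linux;
// verify the names against your Firefox version before relying on them.
user_pref("media.ffmpeg.vaapi.enabled", true);  // use VA-API for video decoding
user_pref("gfx.webrender.all", true);           // force GPU compositing via WebRender
```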

My daily driver for work is running VMWare Workstation with on average about 3 VMs, plus firefox, emacs, teams, outlook, openvpn and onenote. I sometimes also have to run a GPU-accelerated password cracker. For everything else I use a linux VM running arch and i3 because it’s so much faster to actually get shit done. Honestly my usecase isn’t that much more exciting either. I have daily issues with teams, outlook, and onenote (but those are not windows issues, it’s just that microsoft can’t for the life of them write anything that works). The windows UI regularly stops working after updates (I think this is due to the strict hardening policies applied to the computer via group policy). The windows UI regularly crashes when connecting and disconnecting a thunderbolt dock. I have suspend and resume issues all the time, including the machine bluescreening coming out of suspend when multiple VMs are running. VM hardware passthrough has a tendency to be regularly broken, requiring a reboot.

To top it off, the windows firewall experience is crazy; even if it has application-level control, I still can’t understand why you would want something so confusing to configure.

                                                                                                                                                                          And I know for certain how bad windows can be.

                                                                                                                                                                          And I think you’re used to it, to the point that you don’t notice it. The fact that linux is bad in different ways doesn’t necessarily mean it’s as bad.

or you’ll gamble on when they’ll ban you

                                                                                                                                                                          Seems illegal. Maybe don’t give those companies money?

                                                                                                                                                                      3. 1

All that obviously comes with the typical Microsoft problems, like your license being bound to your account, where 2FA may even make it harder to get your account back, because apparently not primarily using your license account on windows is weird and 2FA prevents them from “unlocking” your account again.

The same goes for all the tracking, the weird “Trophies” that are now present, and stuff like that. But not having to tinker with GPU stuff (and not ending up with a system that has no desktop anymore at 3AM) is very appealing.

                                                                                                                                                                        Can you recommend some?

                                                                                                                                                                        http://qttabbar.sourceforge.net/ works ok.
Installed it in 2012 on windows 7; I haven’t reinstalled my windows since, and the program still works except for one or two quirks.

                                                                                                                                                                1. 1

                                                                                                                                                                  Common values of k [lookahead in LL(k)] are 0 or 1, but you can make just about anything work fine as long as there’s a fixed upper bound to it. Nobody seems to outright say it, but if your value for k is more than 2 you’re probably making life complicated for yourself somehow.

                                                                                                                                                                  ISTR Python (at least in the 2.2/2.4 era) intentionally restricting itself to an LL(1) grammar specifically to keep syntactic complexity down. I haven’t followed the language for years, so maybe it’s relaxed since then.
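For anyone who hasn’t written one of these by hand: with a fixed k the parser only ever peeks at the next k tokens to pick a rule, which is what keeps both the implementation and the grammar simple. A toy sketch of the k = 1 case (my own illustration, not taken from any particular parser generator):

```python
# Toy LL(1)-style parser: every decision looks at exactly one upcoming token.
# Grammar (illustrative): stmt -> "print" NAME | "let" NAME "=" NAME

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def stmt(self):
        # k = 1: a single peek() is enough to choose the production.
        if self.peek() == "print":
            self.eat("print")
            return ("print", self.eat())
        elif self.peek() == "let":
            self.eat("let")
            name = self.eat()
            self.eat("=")
            return ("let", name, self.eat())
        raise SyntaxError(f"unexpected token {self.peek()!r}")

print(Parser(["let", "x", "=", "y"]).stmt())  # ('let', 'x', 'y')
print(Parser(["print", "x"]).stmt())          # ('print', 'x')
```

The appeal is that the code mirrors the grammar one rule at a time; once you need more lookahead than that, the branching starts to sprawl.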

                                                                                                                                                                  1. 2

                                                                                                                                                                    At some point (2.7/early 3?), some features were added that weren’t really doable in LL(1) rules, and they relied on some hacks to get them to work. Since 3.9, they’ve switched to using a PEG parser.

                                                                                                                                                                    Edit: The relevant PEP mentions examples of these constructs: https://www.python.org/dev/peps/pep-0617/#some-rules-are-not-actually-ll-1
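One commonly cited case (assuming I’m remembering the PEP’s framing correctly) is distinguishing an expression statement from an assignment: the target list before the “=” can be arbitrarily long, so no fixed lookahead is enough to choose the rule up front.

```python
a = b = c = d = 0

# With any fixed number of lookahead tokens, these two statements look the same
# until the parser reaches the '=': the target list on the left can be
# arbitrarily long, so no fixed k suffices to pick the production in advance.
a, b, c, d                 # expression statement (builds and discards a tuple)
a, b, c, d = 1, 2, 3, 4    # assignment to a tuple of targets

# CPython's old LL(1) parser worked around this by parsing the left-hand side
# as an expression and reinterpreting it as targets afterwards; the PEG parser
# (Python 3.9+) can express it directly because PEG rules backtrack.
```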

                                                                                                                                                                    1. 1

                                                                                                                                                                      Thanks for the update.

                                                                                                                                                                  1. 9

Call me a curmudgeon, but I only stream music long enough to know whether I should purchase it DRM-free from a site like Bandcamp. Spotify never appealed to me. I don’t like the idea of not having offline access, or paying for subscriptions, or listening to ads, or giving bands peanuts for streams. I’ll stick to mpd.

                                                                                                                                                                    1. 13

                                                                                                                                                                      I’m kind of a curmudgeon too, but some of these are incorrect.

                                                                                                                                                                      or paying for subscriptions

                                                                                                                                                                      Agree, but $120 a year for virtually unlimited music seems like a reasonable compromise to me.

                                                                                                                                                                      I don’t like the idea of not having offline access

                                                                                                                                                                      It has offline access with that subscription.

                                                                                                                                                                      or listening to ads

                                                                                                                                                                      There are no ads with that subscription.

                                                                                                                                                                      or giving bands peanuts for streams.

                                                                                                                                                                      I can’t comment on this. I don’t know how all that works.

                                                                                                                                                                      1. 12

Off-topic, but indeed bands are given peanuts. Spotify’s goal was never the compensation of artists but stopping piracy: https://www.businessinsider.fr/us/taylor-swift-doesnt-need-streaming-royalties-former-spotify-boss-said-2021-7

                                                                                                                                                                        1. 1

However, the next stop is obviously subscriptions to individual artists or perhaps groups of artists. Substack for music.

                                                                                                                                                                          1. 3

                                                                                                                                                                            Since Bandcamp was already mentioned in this thread, just wanted to point out that they do indeed have this. As far as I’m aware, it functions kinda like Patreon, but built into the core download/streaming service.

                                                                                                                                                                        2. 4

I know some of these concepts conflict between free and paid subscribers, but I don’t see the value personally. That said, what does offline mean here? Is it DRM-free and yours to keep and share, or is it locked behind a single device, an account, or the longevity of your subscription?

                                                                                                                                                                          1. 6

                                                                                                                                                                            “Offline” in the sense that you can “cache” selected songs indefinitely for the Spotify client on your device.

                                                                                                                                                                            I don’t know how it’s actually physically stored on disk, but I imagine it’s just a binary blob that could also have some rudimentary encryption on it. At any rate it’s definitely not designed to make songs available outside Spotify.

                                                                                                                                                                            However, compared to “offline mode” in other subscription products (e.g. Netflix) I don’t think Spotify enforces any limitations on the duration or size of your offline library. Not sure what happens to your “offline” songs if your subscription expires though.

                                                                                                                                                                          2. 1

                                                                                                                                                                            I have Spotify Premium, and I was very annoyed to encounter ads recently on a podcast on Spotify. This may not be Spotify’s fault, but clearly Premium does not pay enough for this podcaster to abandon ads, and it is not accurate to say that there are no ads.

                                                                                                                                                                            If you say “no ads on music”, then that is accurate AFAIK.

                                                                                                                                                                          3. 4

                                                                                                                                                                            But if you buy an album and stream it, you’re giving the artists extra money. I only buy albums that I really love (which hasn’t really changed since pre-streaming/downloading days in high school), so when I stream music it’s either a bonus to a band or a few pennies extra to a band I normally wouldn’t listen to at all (or certainly wouldn’t buy their albums).

                                                                                                                                                                            Payouts depend on a number of factors, though. Even with my meager numbers, I’ve made about $20 on streams this past year.

                                                                                                                                                                            1. 2

                                                                                                                                                                              The client is what I’m paying for.

                                                                                                                                                                              Doing it all myself for many years was fine, recommendations are easy enough to come by … Cleaning metadata, getting album covers correct, organizing, that started to be a lot of work.

                                                                                                                                                                              At a certain point, Spotify was cheap enough (for me) that having a client with any music I wanted instant-on for any device (laptop, client, kid’s tablet), and an actual, real-life, working version of what uPNP always promised but never delivered… Worth it.

                                                                                                                                                                              Compensating artists is and always will be a separate topic (let’s talk about radio play baskets) and the equation still favors publishers pretty heavily (Bandcamp doesn’t get press for donating their profits some of the time because it’s no money.)

                                                                                                                                                                              1. 4

And I choose to contribute to MusicBrainz and use Picard because I’d rather help open source.

                                                                                                                                                                            1. 14

                                                                                                                                                                              And this, folks, is why I no longer run my own email server and likely never will.

                                                                                                                                                                              1. 3

                                                                                                                                                                                DMARC records aren’t just for self‐hosted email. You’ll want to set them up if you use a personal domain, even if your email is hosted by a cloud provider like Google or Microsoft.
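For anyone wondering what that actually looks like: the records themselves are just TXT entries. A hypothetical example for a personal domain (example.com, the selector, the include, and the report mailbox are all placeholders; adjust the policy to taste):

```
; SPF: which hosts may send mail for the domain
example.com.                 IN TXT "v=spf1 include:_spf.google.com ~all"

; DKIM: public key published under <selector>._domainkey
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"

; DMARC: what receivers should do when SPF/DKIM fail, and where to send reports
_dmarc.example.com.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```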

                                                                                                                                                                                1. 8

                                                                                                                                                                                  … because you didn’t want to add a couple DNS records?

                                                                                                                                                                                  1. 13

                                                                                                                                                                                    If you pay an email provider to do the hard stuff, then yes, it’s just a matter of adding some DNS records. But, when running your own email server, you have to set up and maintain a DKIM and a DMARC suite yourself and make sure everything works together along with the SMTP server.

                                                                                                                                                                                    1. 6

Sure, you have to install opendkim and drop a line in your config, then generate the DNS records. You don’t need any software for DMARC; if you want reports, they’re human-readable XML, but usually you just set the DNS record to enforce DKIM and you’re done.
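As a rough sketch of what “a line in your config” amounts to (domain, selector and paths are placeholders; the milter wiring assumes Postfix, and exact settings vary by distro):

```
# Generate a key pair; opendkim-genkey writes mail.private and mail.txt,
# the latter being the DNS record to publish.
#   opendkim-genkey -b 2048 -d example.com -s mail

# /etc/opendkim.conf (minimal, single-domain)
Domain      example.com
Selector    mail
KeyFile     /etc/opendkim/mail.private
Socket      inet:8891@localhost

# /etc/postfix/main.cf — hand mail to the milter for signing/verification
#   smtpd_milters     = inet:localhost:8891
#   non_smtpd_milters = inet:localhost:8891
```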

                                                                                                                                                                                      1. 2

To be fair, I only use my domains as an email slingshot where I can: nothing goes in, only out, which is good enough for 99% of my services. It’s a different story for my work, because for real email you need a lot more these days (push notifications for apple and android, which are different systems and totally insecure, carddav, maybe caldav, webview, greylists, spamlists …). I can highly recommend this video from a CCC talk, though it’s probably only in german. But it comes with this nice overview.

                                                                                                                                                                                      2. 11

Because adding a couple of DNS records is a vast oversimplification, and I just don’t want to deal with the actual complexity your comment is pretending doesn’t exist.

                                                                                                                                                                                        1. 11

I’m genuinely curious what else one could think is needed after reading this article. As I allowed in the sibling, yes, you also need to run opendkim, I guess. The article very clearly talks about three kinds of DNS records and how to set them up. They say that if you run an email service and care about DMARC reports you might want an aggregation tool, but they admit right in the article that as a self-hoster you can just read the report (and probably don’t have to if you only have one mail server anyway).

                                                                                                                                                                                          1. 5

                                                                                                                                                                                            The article also says that they “had to do a lot of hard work and research to understand this problem”, so sure, it’s “just” a couple DNS records (and some config, and…), but the work involved to get there from nothing is clearly not non-existent. For comparison, I’m sure most programmers here have had the experience of spending all day understanding a problem just to end up committing 5 lines of code.

                                                                                                                                                                                            1. 5

Yes, sure, if you don’t know you need these or what the syntax is, it will take time to learn. But now that this article exists, you could just read it and know more than enough. That’s why I’m curious why the reaction to the article giving the answer is to think the question is too hard.