Threads for Dunkhan

  1. 9

    Maybe people just pine for the days when you could read a blog with basic text and a couple of images without allowing the server owner to run code on your machine :-p

    1. 6

      Blog? You mean a periodic column in a local gazette, sent once every two months via the “snail mail”?

      1. 3

        Yo kid, JavaScript dates from 1995. Blogs weren’t a thing until 1999 or 2000.

        Also, not sure why “running code on my machine” is a bad thing, compared to “rendering text on my machine” or “displaying images on my machine.”

        1. 5

          I think it’s more the nostalgia for when you felt like you could really own your machine on your own terms vs. the “the customer is the product” feeling you get nowadays. That’s adjacent to “server runs code on my machine” but a much bigger problem.

          1. 2

            Again, why is “loading a web page that runs some code to produce its experience” bad when “downloading a program that runs some code to produce its experience” is good? If anything, the web page is more in line with my-machine-my-terms because it’s much more strictly sandboxed.

            Me, I’m a lot more concerned with websites that do their interactive stuff using other people’s computers, because that so easily becomes surveillance.

            1. 3

              For me it’s because when I download a program to run some code it’s coming from apt-get, which means it’s been audited and would not even be available for download if it included user-hostile features like advertising and spyware.

              For other people, I have no idea.

        2. 2

          Blog? Server? Images? LOLWUT

        1. 8

          Interesting project, I like the idea and the philosophy behind it.

          I could not get past the sense of cognitive jarring I had in the article due to the use of the term ‘solarpunk’, though. Every time I see this I get so annoyed by the buzzword I can’t focus on the actual content. When did everyone decide that the word ‘punk’ means ‘artistic style’? Did we vote on it? I feel like I never got a chance to point out that that is not what it means.

          Every time someone explains the term to me (including the one linked in the article) they compare it to cyberpunk and say ‘hey, dystopias are bad, let’s do solarpunk’. But the word punk means dystopian. Punk is a movement against authoritarianism and hypercapitalism and was always about members of the downtrodden rejecting the system and refusing to participate. Cyberpunk captured this perfectly and applied it to a mechanised and computerised future. Given that the world has become more authoritarian and more hypercapitalist since the height of cyberpunk literature, I feel the idea was spot on and very perceptive. Solarpunk can be either utopian or punk, but not both.

          Although I am strongly in favour of the general ideals behind the movement, I can’t help feeling that there is something naive and out of touch about the people pushing it, due to this choice of terminology. This is only strengthened by the amount of doublethink currently floating around in the entire green energy movement and a general inability to face the economic and political realities of our time.

          Sorry for the rant but I can’t get over this.

          1. 23

            As someone nearly old enough to have been a punk, and who was certainly in at the tail-end of the New Romantics, sorry, but your understanding of “punk” is completely off-base.

            Punk means a rejection of authority, as you correctly say, but there’s no requirement that it be dystopian. It means accessibility, it means do-it-yourself-ability. Not waiting for the man to do it for you. Solarpunk, hopepunk and cyberpunk are all on a spectrum of things which are punk.

            1. 3

              Also compare Cypherpunk

            2. 8

              “Cyberpunk” was called that because it was an SF trend that broke with the late 70s/early 80s “polished SF”, which was tired retreads of space opera, galactic empires, FTL fantasies etc, just like punk broke with the increasingly baroque “concept” music a few years earlier. Neuromancer was a literal shock, it depicted a future that felt just around the corner, and it focused on the hardscrabble losers who were trying to make it in that future. And it was written by an unknown newcomer.[1] As someone who read it around the time it was published, it’s hard to overstate the impact it had on me and others.

              {Solar,cypher,steam,diesel}punk is, in my mind, just a marketing shorthand for a specific kind of work or attitude. I don’t really mind it, words evolve.

              [1] yes I know Gibson had been published before. But he was no Heinlein or Asimov.

              1. 11

                The word punk means dystopian

                Does it? Wikipedia says that punk subculture is “characterized by anti-establishment views, the promotion of individual freedom, DIY ethics”. All of these things are compatible with a kind of offline-first, low-tech, less-connected vision of computing. In a world where large corporations are pushing us toward hyperconnectivity that some people find to be psychologically dystopian, can’t pushback against that be all of anti-establishment, DIY, and utopian?

                1. 4

                  Punk is a movement against authoritarianism and hypercapitalism and was always about members of the downtrodden rejecting the system and refusing to participate.

                  That’s one interpretation, the one that brought us Crass and Minor Threat and such. But “punk” started out as a purely musical/artistic style that celebrated scuzziness and rough edges, viz. the MC5, the Stooges, and of course the Ramones. And the whole UK wing started out in Malcolm McLaren’s head as an extension of Situationism and Dada that very much engaged with the system while simultaneously trolling the fuck out of it. There have also always been fringes of punk that were authoritarian in themselves (parts of Oi! and all the Nazi punks the DKs were telling to fuck off.)

                  On the whole, “punk” refers to a street-level DIY approach that doesn’t so much refuse to participate in the System as forms parallel, smaller and more amenable systems of its own — just look at the number of record labels, zines and clubs that have always been a part of it.

                1. 2

                  I’m sad to see Firefox doing so poorly on this list. Maybe I should give Brave a spin…

                  1. 7

                    This is for basic Firefox right after clicking the link and installing. It is a relevant metric because that is how most people use browsers, but if you are privacy conscious and have the technical knowledge of the average lobste.rs user, then these stats are simply inaccurate. I have uBlock Origin (which is not available for Brave, though they claim to already block the same things), Privacy Badger, and NoScript on my browser, and I imagine there are a lot more green ticks on my browser than on a basic vanilla Firefox installation.

                    A lot of these browsers are probably configurable in this way and many of them probably support the same plugins. For people like us who bother to configure, a better metric would be a similar table showing the situation when all possible privacy settings and all the major plugins are set up correctly. Such a table would show which browsers have flaws that cannot be completely secured even with a careful setup.

                    1. 2

                      Librewolf seems to score the highest out of everything, but isn’t it just a fully-tricked-out Firefox with Mozilla telemetry yanked out? Not at all to diminish the work the Librewolf team are doing - they deserve credit for their accomplishment building and packaging a hardened Firefox.

                      1. 2

                        yeah, I’m using Firefox with uMatrix, so I guess it’ll be a lot more private than the list here suggests. It’s difficult for me to understand what all these terms refer to (and I don’t much care to really dig in), but it leaves me with a vague feeling of “maybe vanilla Firefox doesn’t do enough to stop these targeting methods, some of which uMatrix can’t even hook into”.

                        1. 3

                          I was wondering the same thing, but I’m also wondering how much this test is influenced by features rather than exploits. Is each green check an equal X% increase in privacy? Or does the whole class of ref IDs that Brave blocks only affect people who use those services and aren’t protected by another measure? Also, when one browser has all checks and nothing else does, it’s hard to know if it’s an emerging threat or just a new feature being advertised by that single vendor.

                    1. 12

                      The whole Google office app suite is just really bad, and it’s amazing because I’m pretty sure it’s gotten worse over time?

                      1. 13

                        I thought there must be something better until I used Microsoft Office 365 and was much more annoyed.

                        I quite like docs and sheets. I guess it depends on your use case. I just need something simple with not a lot of features.

                        1. 7

                          I use LibreOffice and I can’t say I have any complaints. Mind you, I am a programmer mostly and I spend less than 1% of my time at work working with office-type documents. I have been subjected to the entire Google suite by my current company, though, and I can definitely say I am not a fan.

                          I am not a fan of cloud services in general. I have a big fat self built pc tower, I have no need for thin clients. I see the value of collaborative tools but I really wish everything was just built on top of git.

                          1. 8

                            I value the real time collaboration.

                            Otherwise, I am a big fan of putting things into version control.

                            And honestly, the collaboration is 95% about comments which would be handled somewhat well with gitlab, github, another review tool…

                          2. 6

                            I find it depends on the tool. For presenter tools, I haven’t found anything better than PowerPoint. Early versions of Keynote had the advantage that they implemented only a subset of the features of PowerPoint (which included the minimum set required to make good presentations) but they gradually copied PowerPoint misfeatures (such as shrinking text if you type more, because slides with 1,000 words are obviously better than slides with 100 words). Google’s thing is awful, so is the Libre/OpenOffice one. PowerPoint’s Design Ideas, morph transitions, and SmartArt largely make up for its other shortcomings (such as awful drawing tools, inability to do syntax highlighting for code, and so on).

                            I like LaTeX beamer for technical presentations because the combination of the listings package and TikZ lets you make some very clear diagrams including code listing (e.g. control-flow graphs with code in each node) fairly easily and it’s also easy to have a single document that generates the slides and the handouts.

                            The only better thing I’ve found is Sozi, which will never gain widespread use because it’s too different. Sozi is inspired by Prezi. It takes an SVG file as input and creates a presentation by panning, zooming, and (the feature that makes it much better than Prezi) making layers appear and disappear.

                            When it comes to spreadsheets, they’re all pretty bad. Lotus had two spreadsheet products: the bad one, 123, and the good one, Improv. Everyone copied the bad one. Quantrix Modeller is the only surviving Improv clone and it is orders of magnitude better than any of the others. Apple’s Numbers is probably the best for tiny toy spreadsheets, but none of them are appropriate for real work. Jupyter notebooks and Pandas are often a better tool than a spreadsheet for a lot of things people use a spreadsheet for. I’d love to see a good open-source Improv clone but the only one I know of is an unmaintained GNUstep app that is very unfinished.

                            Word processors are uniformly bad, but I’m hugely biased against WYSIWYG. LyX is the only WYMIWYG editor that I’ve tried and it was slower for me than typing semantic markup directly into a text editor. I’d really like to see a good visual editor for semantic markup. I’d also like to see something with a decent typesetting engine (in 2022, Word still uses a greedy algorithm for line breaking), such as SILE.

                            M365 does the collaborative editing pretty well. I can edit in the desktop app at the same time someone else edits on the web and we can see each other’s edits live. I can also turn on track changes and be able to review all of their changes before merging them. I wish it made versioning more explicit though.

                            1. 4

                              Docs and Sheets are pretty good for basic stuff. The only downside is I’ve never had them work offline correctly, even on a Chromebook.

                              1. 1

                                Strange.

                                Docs has definitely worked for me offline a couple of times on train rides, but I haven’t used that recently.

                              2. 2

                                I thought there must be something better until I used Microsoft Office 365 and was much more annoyed.

                                Why would trying those two options give you the impression that there isn’t anything better? LOL just kidding. Try Office 2003.

                                1. 3

                                  We probably have different use cases.

                                  I mostly have docs with a simple format that I want to collaborate on. E.g. comments.

                                  I believe that Office 2003 is very capable for traditional office tasks.

                                  1. 2

                                    I used Office in 2003 as a student. It might be fine for “office” tasks, but it was totally inadequate for students. Basic reference management just wasn’t there.

                                    1. 1

                                      Automated reference management seems like more trouble than it’s worth for the sorts of papers I wrote as a student! Until grad school at least, where everything is LaTeX.

                              3. 2

                                I’m needing to use Gsuite at my new gig and I think on the whole it’s better than the competition.

                                But why oh why does everything default to ‘/edit’, and is there an experienced Gsuite user who can point me to a setting or even a Firefox plugin or something to fix that? I suppose I could use Tridactyl to rewrite the URLs. I very very often just want to read design docs, not accidentally mash some keys into them.

                                1. 2

                                  IDK, I think Google Docs is pretty good. Especially with the pageless mode. I just stick to the default styles and the only formatting I do is setting headings.

                                  It’s a bit slow but rock solid and the collaboration is top notch.

                                  1. 3

                                    Especially with the pageless mode

                                    It blows my mind that they just introduced this now. Been using this software for over a decade (not by choice) and have never once used it to produce anything that ended up on paper, but for over a decade, every document I worked on had page breaks in it that you couldn’t turn off.

                                    Absolutely boggling.

                                    1. 1

                                      I do agree. I think this is a huge step and hopefully they really take advantage of it. I tried Dropbox Paper, which ironically is not so focused on actual paper, but it was far too buggy and the collaboration was weak. I guess this will be enough to keep using Google Docs for my D&D notes and stuff.

                                1. 4

                                  This was my first ever computer. I have tried for years, mostly in vain, to get working copies of the freeware games I used to play on it. In some cases I even considered attempting to implement them myself.

                                  The Apple games scene back then was all hobbyists and there were some really unique and interesting ones. None of this modern blood-and-explosions male power fantasy nonsense. I managed to get Glider and Scarab of Ra on a modern machine, but I still really want Dark Castle and Scepters.

                                  A sign of the times some of you may remember if you are old enough: freeware used to have a message with the programmer’s real home address and a note saying they would love it if you sent them a check in the mail.

                                  1. 6

                                    The problem? Many of the programming ligatures shown above are easily confused with existing Unicode symbols

                                    But existing Unicode symbols are already confusable with each other. The article also doesn’t explain why this is a problem.

                                    1. 9

                                      It seems to me that you are saying ‘there is already some ambiguity, this is a problem, why not add more ambiguity’. Seems like a question that answers itself.

                                      I agree with most of the comments here that no one should be telling other people how to render their code to their own eyes, and since no ligature settings are encoded in the source, it is a completely individual choice with no consequences for others.

                                      I also agree with the article that it is a terrible idea, and you will never persuade me to allow ligatures anywhere near my IDE. I am also one of those strange people who hate WYSIWYG editors of all kinds and avoid using them whenever possible, so maybe I am just idiosyncratic. But I find, when communicating with a computer, that precision is paramount and any ambiguity is dangerous.

                                      Having said that, I am interested in the idea mentioned above: “when it sees things like || or &&, replaces them with a bolder ligature version to make them stand out in code”. This is a use for ligatures that deserves consideration. Don’t change my symbols, but highlighting certain sequences could be very handy.

                                    1. 13

                                      I have never understood why KDE isn’t the default desktop environment for any serious Linux distribution. It feels so much more professional than anything else.

                                      Every time I see it, it makes me want to run Linux on the desktop again.

                                      1. 11

                                        I suspect because:

                                        1. IIRC Gnome has a lot more funding/momentum
                                        2. Plasma suffers from a lot of papercuts

                                        Regarding the second reason: Plasma overall looks pretty nice, at least at first glance. Once you start using it, you’ll notice a lot of UI inconsistencies (misaligned UI elements, having to go through 15 layers of settings, unclear icons, applications using radically different styles, etc) and rather lackluster KDE first-party applications. Gnome takes a radically different approach, and having used both (and using Gnome currently), I prefer Gnome precisely because of its consistency.

                                        1. 14

                                          There’s also a lot of politics involved. Most of the Linux desktop ecosystem is still driven by Red Hat and they employ a lot of FSF evangelists. GNOME had GNU in its name and was originally created because of the FSF’s objections to Qt (prior to its license change), and that led to Red Hat preferring it.

                                          1. 6

                                            Plus GNOME and all its core components are truly community FLOSS projects, whereas Qt is a corporate, for-profit project which the Qt company happens to also provide as open source (but where you’re seriously railroaded into buying their ridiculously expensive licenses if you try to do anything serious with it or need stable releases).

                                            1. 7

                                              No one ever talks about Cinnamon on Mint, but I really like it. It looks exactly like all the screenshots in the article. Some of the customisation is maybe a little less convenient, but I have always managed to get things looking exactly how I want them to, and I am hardly a Linux power user (recent Windows refugee). Given that it seems the majority of arguments for Plasma are that it is more user friendly and easier to customise, I would be interested to hear people’s opinions on Cinnamon vs Plasma. I had mobile Plasma on my PinePhone for a day or two but it was too glitchy and I ended up switching to Mobian. This is not a criticism of Plasma, rather an admission that I have not really used it and have no first-hand knowledge.

                                              1. 7

                                                I have not used either in anger but there’s also a C/C++ split with GTK vs Qt-based things. C is a truly horrible language for application development. Modern C++ is a mediocre language for application development. Both have some support for higher-level languages (GTK is used by Mono, for example, and GNOME also has Vala) but both are losing out to things like Electron that give you JavaScript / TypeScript environments and neither has anything like the developer base of iOS (Objective-C/Swift) or Android (Java/Kotlin).

                                                1. 4

                                                  As an unrelated sidenote, C is also a decent binding language, which matters when you are trying to use one of those frameworks from a language that is not C/C++. I wish Qt had a well-maintained C interface.

                                                  1. 8

                                                    I don’t really agree there. C is an adequate binding language if you are writing something like an image decoder, where your interface is expressed as functions that take buffers. It’s pretty terrible for something with a rich interface that needs to pass complex types across the boundary, which is the case for GUI toolkits.

                                                    For example, consider something like ICU’s UText interface, for exposing character storage representations for things like regex matching. It is a C interface that defines a structure that you must create with a bunch of callback functions defined as function pointers. One of the functions is required to set up a pointer in the struct to contain the next set of characters, either by copying from your internal representation into a static buffer in the structure or providing a pointer and setting the length to allow direct access to a contiguous run of characters in your internal representation. Automatically bridging this from a higher-level language is incredibly hard.

                                                    Or consider any of the delegate interfaces in OpenStep, which in C would be a void* and a struct containing a load of function pointers. Bridging this with a type-safe language is probably possible to do automatically but it loses type safety at the interfaces.
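
                                                    Abstractly, such an interface looks something like this invented sketch (compiled as C++ here): a context pointer plus a table of function pointers, where every type relationship lives in convention rather than in the signatures, so a binding generator has nothing to check against:

                                                      #include <cstddef>

                                                      // An invented delegate-style C interface. `self` is a void* that
                                                      // the callbacks must downcast by convention; nothing enforces
                                                      // that the function pointers and the context actually match.
                                                      struct text_source {
                                                          void *self;
                                                          size_t (*length)(void *self);
                                                          int (*char_at)(void *self, size_t index);
                                                          void (*destroy)(void *self);
                                                      };

                                                      extern "C" void register_source(struct text_source *src);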

                                                    C interfaces don’t contain anything at the source level to describe memory ownership. If a function takes a char*, is that a pointer to a C string, or a pointer to a buffer whose length is specified elsewhere? Is the callee responsible for freeing it or the caller? With C++, smart pointers can convey this information and so binding generators can use it. Something like SWIG or Sol3 can get the ownership semantics right with no additional information.
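
                                                    To make the ownership point concrete, here is a minimal sketch (all function names are hypothetical) contrasting what a C signature leaves unsaid with what a C++ signature can encode:

                                                      #include <memory>
                                                      #include <string>

                                                      // Hypothetical C-style API: nothing says whether `name` is
                                                      // NUL-terminated, how long `buf` is, or who frees either pointer.
                                                      extern "C" void set_name(char *name);
                                                      extern "C" void set_data(char *buf);

                                                      // A C++ equivalent encodes ownership in the types: a binding
                                                      // generator can see that take_name() assumes ownership of the
                                                      // string, while print_name() merely borrows it.
                                                      void take_name(std::unique_ptr<std::string> name);
                                                      void print_name(const std::string &name);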

                                                    Objective-C is a much better language for transparent bridging. Python, Ruby, and even Rust can transparently consume Objective-C APIs because it provides a single memory ownership model (everything is reference counted) and rich introspection functionality.

                                                    1. 2

                                                      Fair enough. I haven’t really been looking at Objective-C headers as a binding source. I agree that C’s interface is anemic. I was thinking more from an ABI perspective, i.e. C++ interfaces tend to be more reliant on inlining, or have weird things like exceptions, as well as being totally compiler dependent. Note how, for instance, SWIG still generates a C interface with autogenerated glue. Also the full ABI is defined in like 15 pages. So while it’s hard to make a high-level to high-level interface in C, you can manually compensate from the target language; with C++ you need a large amount of compiler support to even get started. Maybe Obj-C strikes a balance there, I haven’t really looked into it much. Can you call Obj-C from C? If not, it’s gonna be a hard sell to a project as a “secondary API” like llvm-c, because you don’t even get the larger group of C users.

                                                      1. 6

                                                        Also the full ABI is defined in like 15 pages

                                                        That’s a blessing and a curse. It’s also an exaggeration: the SysV x86-64 psABI is 68 pages. On x86-32 there are subtle differences in calling convention between Linux, FreeBSD, and macOS, for example, and Windows is completely different. Bitfields are implementation dependent and so you need to either avoid them or understand what the target compiler does. All of this adds up to embedding a lot of a C compiler in your other language, or just generating C and delegating to the C compiler.

                                                        Even ignoring all of that, the fact that the ABI is so small is a problem because it means that the ABI doesn’t fully specify everything. Yes, I can look at a C function definition and know from reading a 68-page doc how to lower the arguments for x86-64 but I don’t know anything about who owns the pointers. Subtyping relationships are not exposed.

                                                        To give a trivial example from POSIX, the connect function takes three arguments: an int, a const struct sockaddr *, and a socklen_t. Nothing in this tells me:

                                                        • That the second argument is never actually a pointer to a sockaddr structure, it is a pointer to some other structure that starts with the same fields as the sockaddr.
                                                        • That the third argument must be the size of the real structure that I point to with the second argument.
                                                        • That the second parameter is not captured and I remain responsible for freeing it (you could assume this from const and you’d be right most of the time).
                                                        • That the first parameter is not an arbitrary integer, it must be a file descriptor (and for it to actually work, that file descriptor must be a socket).

                                                        I need to know all of these things to be able to bridge from another language. The C header tells me none of these.
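
                                                        A short sketch of the call makes those unstated conventions visible (standard POSIX, compiled as C++ here):

                                                          #include <arpa/inet.h>
                                                          #include <netinet/in.h>
                                                          #include <sys/socket.h>
                                                          #include <unistd.h>

                                                          int main() {
                                                              // The first argument must be a *socket* descriptor, not any int.
                                                              int fd = socket(AF_INET, SOCK_STREAM, 0);

                                                              // We never pass an actual `struct sockaddr`; we build a
                                                              // sockaddr_in (which starts with the same fields) and cast.
                                                              sockaddr_in addr{};
                                                              addr.sin_family = AF_INET;
                                                              addr.sin_port = htons(8080);
                                                              inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

                                                              // The length is the size of the *real* structure, and the
                                                              // pointer is only borrowed: the caller still owns `addr`.
                                                              connect(fd, reinterpret_cast<const sockaddr *>(&addr), sizeof(addr));
                                                              close(fd);
                                                          }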

                                                        Apple worked around a lot of these problems with CoreFoundation by adding annotations that basically expose the Objective-C object and ownership model into C. Both Microsoft and Apple worked around it for their core libraries by providing IDL files (in completely different formats) that describe their interfaces.

                                                        So while it’s hard to make a high-level to high-level interface in C, you can manually compensate from the target language; with C++ you need a large amount of compiler support to even get started

                                                        You do for C as well. Parsing C header files and extracting enough information to be able to reliably expose everything with anything less than a full C compiler is not going to work and every tool that I’ve seen that tries fails in exciting ways. But that isn’t enough.

                                                        In contrast, embedding something like clang’s libraries is sufficient for bridging a modern C++ or Objective-C codebase because all of the information that you need is present in the header files.

                                                        Can you call Obj-C from C?

                                                        Yes. Objective-C methods are invoked by calling objc_msgSend with the receiver as the first parameter and the selector as the second. The Objective-C runtime provides an API for looking up selectors from their name. Many years ago, I wrote a trivial libClang tool that took an Objective-C header and emitted a C header that exposed all of the methods as static inline functions. I can’t remember what I did with it but it was on the order of 100 lines of code, so rewriting it would be pretty trivial.
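
                                                        For example, a sketch (assuming an Apple or GNUstep Objective-C runtime, linked with -lobjc) of the moral equivalent of [[NSObject new] hash]:

                                                          #include <objc/message.h>
                                                          #include <objc/runtime.h>

                                                          int main() {
                                                              // On modern runtimes objc_msgSend must be cast to the
                                                              // method's actual signature before calling it.
                                                              Class cls = objc_getClass("NSObject");
                                                              auto newMsg = reinterpret_cast<id (*)(Class, SEL)>(objc_msgSend);
                                                              id obj = newMsg(cls, sel_registerName("new"));

                                                              auto hashMsg = reinterpret_cast<unsigned long (*)(id, SEL)>(objc_msgSend);
                                                              unsigned long h = hashMsg(obj, sel_registerName("hash"));
                                                              (void)h;
                                                          }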

                                                        If not, it’s gonna be a hard sell to a project as a “secondary API” like llvm-c, because you don’t even get the larger group of C users.

                                                        There are fewer C programmers than C++ programmers these days. This is one of the problems that projects like Linux and FreeBSD have attracting new talent: the intersection between good programmers and people who choose C over C++ is rapidly shrinking and includes very few people under the age of 35.

                                                        LLVM has llvm-c for two reasons. The most important one is that it’s a stable ABI. LLVM does not have a policy of providing a stable ABI for any of the C++ classes. This is a design decision that is completely orthogonal to the language. There’s been discussion about making llvm-c a thin (machine-generated) wrapper around a stable C++ interface to core LLVM functionality. That’s probably the direction that the project will go eventually, once someone bothers to do the work.

                                                        1. 1

                                                          I’ve been discounting memory management because it can be foisted off onto the user. On the other hand, something like register or memory passing, or how x86-64 uses SSE regs for doubles, cannot be done by the user unless you want to manually generate calling code in memory.

                                                          You do for C as well. Parsing C header files and extracting enough information to be able to reliably expose everything with anything less than a full C compiler is not going to work and every tool that I’ve seen that tries fails in exciting ways. But that isn’t enough.

                                                          Sure but there again you can foist things off onto the user. For instance, D only recently gained a proper C header frontend; until now it got along fine enough by just manually declaring extern(C) functions. I believe JNI and CFFI do the same. It’s annoying but it’s possible, which is more than can be said for many C++ bindings.

                                                          There are fewer C programmers than C++ programmers these days.

                                                          I meant C as a secondary API, ie. C++ as primary then C as auxiliary, as opposed to Objective-C as auxiliary.

                                                          Yes. Objective-C methods are invoked by calling objc_msgSend with the receiver as the first parameter and the selector as the second. The Objective-C runtime provides an API for looking up selectors from their name.

                                                          I don’t know the requirements for deploying with the ObjC runtime. Still, nice!

                                                          1. 2

                                                            I’ve been discounting memory management because it can be foisted off onto the user.

                                                            That’s true only if you’re bridging two languages with manual memory management, which is not the common case for interop. If you are exposing a library to a language with a GC, automatic reference counting, or ownership-based memory management then you need to handle this. Or you end up with an interop layer that everyone hates (e.g. JNI).

                                                            Sure but there again you can foist things off onto the user. For instance, D only recently gained a proper C header frontend; until now it got along fine enough by just manually declaring extern(C) functions. I believe JNI and CFFI do the same. It’s annoying but it’s possible, which is more than can be said for many C++ bindings.

                                                            Which works for simple cases. For some counterexamples, C has _Complex types, which typically follow different rules for argument passing and returning than structures of the same layout (though they sometimes don’t, depending on the ABI). Most languages don’t adopt this stupidity and so you need to make sure that your custom C parser can express C’s _Complex types. The same applies if you want to define bitfields in C structures in another language, or if the C structure that you’re exposing uses packed pragmas or attributes, uses _Alignas, and so on. There’s a phenomenal amount of complexity that you can punt on if you want to handle only trivial cases, but then you’re using a very restricted subset of C.
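
                                                            For instance, even this small invented declaration (GCC/Clang attribute syntax) forces a binder to reproduce implementation-defined layout decisions:

                                                              // Bitfield layout is implementation-defined, and packing
                                                              // changes field alignment; a foreign-language redeclaration
                                                              // has no portable way to reproduce either.
                                                              struct __attribute__((packed)) wire_header {
                                                                  unsigned version : 3;
                                                                  unsigned flags : 5;
                                                                  unsigned payload_length;
                                                              };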

                                                            JNI doesn’t allow calling arbitrary C functions; it requires that you write C functions that implement native methods on a Java object. This scopes the problem such that the JVM needs to be able to handle calling only C functions that use Java types (8 to 64-bit signed integers or pointers) as arguments and return values. These can then call back into the JVM to access fields, call methods, allocate objects, and so on. If you want to return a C structure into Java then you must create a buffer to store it and an object that owns the buffer and exposes native methods for accessing the fields. It’s pretty easy to use JNI to expose Java classes into other languages that don’t run in the JVM; it’s much harder to use it to expose C libraries into Java (and that’s why everyone who uses it hates it).
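
                                                            A minimal sketch of that shape (the Java class here is invented): the native side only ever sees JVM-friendly types, and anything richer has to go back through JNIEnv.

                                                              #include <jni.h>

                                                              // Implements: package demo;
                                                              //             class Native { static native int add(int a, int b); }
                                                              // Only jint crosses the boundary; strings, arrays, and objects
                                                              // would have to be accessed through JNIEnv callbacks.
                                                              extern "C" JNIEXPORT jint JNICALL
                                                              Java_demo_Native_add(JNIEnv *env, jclass cls, jint a, jint b) {
                                                                  (void)env;
                                                                  (void)cls;
                                                                  return a + b;
                                                              }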

                                                            I meant C as a secondary API, ie. C++ as primary then C as auxiliary, as opposed to Objective-C as auxiliary.

                                                            If you have a stable C++ API, then bridging C++ provides you more semantic information for your compat layer than a C wrapper around the stable C++ API would. Take a look at Sol3 for an example: it can expose C++ objects directly into Lua, with correct memory management, without any C wrappers. C++ libraries often conflate a C API with an ABI-stable API but this is not necessary.

                                                            I don’t know the requirements for deploying with the ObjC runtime. Still, nice!

                                                            The requirements for the runtime are pretty small but for it to be useful you want a decent implementation of at least the Foundation framework, which provides types like arrays, dictionaries, and strings. That’s a bit harder.

                                                            1. 2

                                                              I don’t know. I feel like you massively overvalue the importance of memory management and undervalue the importance of binding generation and calling convention compatibility. For instance, as far as I can tell sol3 requires manual binding of function pointers to create method calls that can be called from Lua. From where I’m standing, I don’t actually save anything effort-wise over a C binding here!

                                                              Fair enough, I didn’t know that about JNI. But that’s actually a good example of the notion that a binding language needs to have a good semantic match with its target. C has an adequate to poor semantic match on memory management and any sort of higher-kinded functions, but it’s decent on data structure expressiveness and very terse, and it’s very easy to get basic support working quick. C++ has mangling, a not just platform-dependent but compiler-dependent ABI with lots of details, headers that often use advanced C++ features (I’ve literally never seen a C API that uses _Complex - or bitfields) and still probably requires memory management glue.

                                                              Remember that the context here was Qt vs GTK! Getting GTK bound to any vaguely C-like language (let’s say any language with a libc binding) to the point where you can make calls is very easy - no matter what your memory management is. At most it makes it a bit awkward. Getting Qt bound is an epic odyssey.

                                                              1. 4

                                                                I feel like you massively overvalue the importance of memory management and undervalue the importance of binding generation and calling convention compatibility

                                                                I’m coming from the perspective of having written interop layers for a few languages at this point. Calling conventions are by far the easiest thing to do. In increasing levels of difficulty, the problems are:

                                                                • Exposing functions.
                                                                • Exposing plain data types.
                                                                • Bridging string and array / dictionary types.
                                                                • Correctly managing memory between two languages.
                                                                • Exposing general-purpose rich types (things with methods that you can call).
                                                                • Exposing rich types in both directions.

                                                                C only seems easy because C<->C interop requires a huge amount of boilerplate and so C programmers have a very low bar for what ‘transparent interoperability’ means.

                                                                For instance, as far as I can tell sol3 requires manual binding of function pointers to create method calls that can be called from Lua. From where I’m standing, I don’t actually save anything effort-wise over a C binding here!

                                                                It does, because it’s an EDSL in C++, but that code could be mechanically generated (and if reflection makes it into C++23 then it can be generated from within C++). If you pass a C++ shared_ptr<T> to Sol3, then it will correctly deallocate the underlying object once neither Lua nor C++ reference it any longer. This is incredibly important for any non-trivial binding.
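
                                                                For a flavour of what that looks like, a sketch using Sol3’s usertype API (Counter is an invented type, not something from the library):

                                                                  #include <sol/sol.hpp>
                                                                  #include <memory>

                                                                  struct Counter {
                                                                      int value = 0;
                                                                      void increment() { ++value; }
                                                                  };

                                                                  int main() {
                                                                      sol::state lua;
                                                                      lua.open_libraries(sol::lib::base);

                                                                      // Expose the type and its members to Lua.
                                                                      lua.new_usertype<Counter>("Counter",
                                                                          "value", &Counter::value,
                                                                          "increment", &Counter::increment);

                                                                      // Hand Lua a shared_ptr: the Counter stays alive until both
                                                                      // C++ and Lua have dropped their references, then is deleted.
                                                                      auto counter = std::make_shared<Counter>();
                                                                      lua["c"] = counter;
                                                                      lua.script("c:increment() print(c.value)");
                                                                  }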

                                                                Remember that the context here was Qt vs GTK! Getting GTK bound to any vaguely C-like language (let’s say any language with a libc binding) to the point where you can make calls is very easy - no matter what your memory management is.

                                                                Most languages are not ‘vaguely C-like’. If you want to use GTK from Python, or C#, how do you manage memory? Someone has had to write bindings that do the right thing for you. From my vague memory, it uses GObject, which uses C macros to define objects and to manage reference counts. This means that whoever manages the binding layer has had to interop with C macros (which are far harder to get to work than C++ templates - we have templates working for the Verona C++ interop layer but we’re punting on C macros for now and will support a limited subset of them later). This typically requires hand writing code at the boundary, which is something that you really want to avoid.

                                                                Last time I looked at Qt, they were in the process of moving from their own smart pointer types to C++11 ones but in both cases as long as your binding layers knows how to handle smart pointers (which really just means knowing how to instantiate C++ templates and call methods on them) then it’s trivial. If you’re a tool like SWIG, then you just spit out C++ code and make the C++ compiler handle all of this for you. If you’re something more like the Verona interop layer then you embed a C++ parser / AST generator / codegen path and make it do it for you.

                                                                1. 1

                                                                  I’m coming from the perspective of having written interop layers for a few languages at this point.

                                                                  Yeah … same? I think it’s just that I tend to be obsessed with variations on C-like languages, which colors my perception. You sound like you’re a lot more broad in your interests.

                                                                  C only seems easy because C<->C interop requires a huge amount of boilerplate and so C programmers have a very low bar for what ‘transparent interoperability’ means.

                                                                  I don’t agree. Memory management is annoying, sure, and having to look up string ownership for every call gets old quick, but for a stateful UI like GTK you can usually even just let it leak. I mean, how many widgets does a typical app need? Grab heaptrack, identify a few sites of concern and jam frees in there, and move on with your life. It’s possible to do it shittily easily, and I value that a lot.

                                                                  If you’re a tool like SWIG, then you just spit out C++ code and make the C++ compiler handle all of this for you.

                                                                  Hey, no shade on SWIG. SWIG is great, I love it.

                                                                  From my vague memory, it uses GObject, which uses C macros to define objects and to manage reference counts. This means that whoever manages the binding layer has had to interop with C macros

                                                                  Nah, it’s really only a few macros, and they do fairly straightforward things. Last time I did GTK, I just wrote those by hand. I tend to make binders that do 90% of the work - the easy parts - and not worry about the rest, because that conserves total effort. With C that works out because functions usually take structs by pointer, so if there’s a weird struct that doesn’t generate I can just define a close-enough facsimile and cast it, and if there’s a weird function I define it. With C++ everything is much more interdependent - if you have a bug in the vtable layout, there’s nothing you can do except fix it.

                                                                  When I’ll eventually want Qt in my current language, I’ll probably turn to SWIG. It’s what I used in Jerboa. But it’s an extra step to kludge in, that I don’t particularly look forward to. If I just want a quick UI with minimal effort, GTK is the only game in town.

                                                                  edit: For instance, I just kludged this together in half an hour: https://gist.github.com/FeepingCreature/6fa2d3b47c6eb30a55846e18f7e0e84c This is the first time I’ve tried touching the GTK headers on this language. It’s exposed issues in the compiler, it’s full of hacks, and until the last second I didn’t really expect it to work. But stupid as it is, it does work. I’m not gonna do Qt for comparison, because I want to go to bed soon, but I feel it’s not gonna be half an hour. Now to be fair, I already had a C header importer around, and that’s a lot of time sunk into that that C++ doesn’t get. But also, I would not have attempted to write even a kludgy C++ header parser, because I know that I would have given up halfway through. And most importantly - that kludgy C header importer was already practically useful after a good day of work.

                                                                  edit: If there’s a spectrum of “if it’s worth doing, it’s worth doing properly” to “minimal distance of zero to cool thing”, I’m heavily on the right side. I think that might be the personality difference at play here? For me, a binding generator is purely a tool to get at a juicy library that I want to use. There’s no love of the craft lost there.

                                                  2. 1

                                                    So does Plasma support Electron/Swift/Java/Kotlin? I know Electron applications run on my desktop, so I assume you mean directly as part of the desktop. If so, that is pretty cool. Please forgive my ignorance, desktop UI frameworks are way outside my usual area of expertise.

                                                  3. 2

                                                    I only minimally use KDE on the computers at my university’s CS department, but I’ve been using Cinnamon for almost four years now. I think that Plasma wins on customizability. There are just so many things that can be adjusted.

                                                    Cinnamon, on the other hand, feels far more polished, with fewer options for customization. I personally use Cinnamon with Arch, but when I occasionally use Mint, the full desktop with all of Mint’s applications is very cohesive and well thought out, though not without flaws.

                                                    I sometimes think that Cinnamon isn’t evangelized as frequently because it’s well enough designed that it sort of fades into the background while you’re using it.

                                              2. 3

                                                I’ve used Cinnamon for years, but it inevitably breaks (or I break it). I recently looked into the alternatives again, and settled on KDE because it looked nice, it and Gnome are the two major players so things are more likely to Just Work, and it even had some functionality I wanted that Gnome didn’t. I hopped back to Cinnamon within the week, because yeah, the papercuts. Plasma looks beautiful in screenshots, and has a lot of nice-sounding features, but the moment you actually use it, you bang your face into something that shouldn’t be there. It reminded me of first trying KDE in the mid-2000s, and it was rather disappointing to feel they’ve been spinning in circles in a lot of ways. I guess that isn’t exactly uncommon for the Linux desktop though…

                                                1. 3

                                                  I agree with your assessment of Plasma and GNOME (Shell). Plasma mostly looks fine, but every single time I use it–without fail–I find some buggy behavior almost immediately, and it’s always worse than just having misaligned labels on some UI elements, too. It’s more like I’ll check a setting checkbox and then go back and it’s unchecked, or I’ll try to put a panel on one or another edge of the screen and it’ll cause the main menu to open on the opposite edge like it looped around, or any other number of things that just don’t actually work right. Even after they caved on allowing a single-key desktop shortcut (i.e., using the Super key to open the main menu), it didn’t work right when I would plug/unplug my laptop from my desk monitors because of some weirdness around the lifecycle of the panels and the main menu button; I’d only be able to have the Super key work as a shortcut if it was plugged in or if it was not, but not both. That one was a little while ago, so maybe it’s better now.

                                                  Ironically, Plasma seems to be all about “configuration” and having 10,000 knobs to tweak, but the only way it actually works reasonably well for me is if you don’t touch anything and use it exactly how the devs are dog-fooding it.

                                                  The GNOME guys had the right idea when it came to stripping options, IMO. It’s an unpopular opinion in some corners, but I think it’s just smart to admit when you don’t have the resources to maintain a high bar of quality AND configurability. You have to pick one, and I think GNOME picked the right one.

                                                2. 5

                                                  I have never understood why KDE isn’t the default desktop environment for any serious Linux distribution.

                                                  Me neither, but I’m glad to hear it is the default desktop experience on the recently released Steam Deck.

                                                  1. 3

                                                    Do SUSE/OpenSUSE not count as serious Linux distributions anymore?

                                                    It’s also the default for Manjaro as shipped by Pine64. (I think Manjaro overall has several variants… the one Pine64 ships is KDE-based.)

                                                    Garuda is also a serious Linux distribution, and KDE is their flagship.

                                                    1. 1

                                                      I tried to use Plasma multiple times on Arch Linux but every time I tried it turned out to be too unstable. The most annoying bug I remember was that KRunner often crashed after entering some letters, taking down the whole desktop session with it. In the end I stuck with Gnome because it was stable and looked consistent. I do like the concept of Plasma but I will avoid it on any machine I do serious work with.

                                                    1. 12

                                                      Nobody feels there’s a problem with the massive centralisation occurring at GitHub, and that this massive centralisation is one of the main reasons for a huge and emblematic project to switch to GitHub?

                                                      1. 10

                                                        I do have a problem with centralization to GitHub. It seems like a great idea until GitHub goes down and now we’ve increased fragility.

                                                        On the other hand, I understand why a project of this size would want to leverage GitHub. Being free of the cost of administering all the backend systems has to be quite appealing. I also assume (and the article alludes to this) that the support given to them directly by GitHub has reinforced that this is the right decision for the Python team.

                                                        I’d love to see more federation and decentralization, but without being forced to interoperate by law I don’t see GitHub participating. At this point their network effect is quite large and just like the other large tech platforms they have no incentive to federate.

                                                        1. 7

                                                          I have a problem with it. I also have a problem with the bugtracker on GitHub being basically inferior to every other major bugtracker available, including the one currently used by the Python community.

                                                          I complain about the GitHub bugtracker a lot and I should add the caveat that they are improving it. There have been a lot of important improvements and if development continues at the current rate it might even end up competing with the major trackers within the next 2-3 years.

                                                          They do not appear to be working on, or have any intention of working on, the centralisation issue though, except to make it worse.

                                                          1. 4

                                                            “Centralization” is the natural and expected end result of all attempts to produce “decentralized” systems. The number of people who are willing to put up with the friction and difficulty of maintaining and interacting with truly decentralized systems, on an ongoing basis, is simply too small compared to those who will opt for the convenience of letting it be someone else’s problem.

                                                          1. 2

                                                              When I think about all the geniuses that work for free to make Minecraft great, from hackers like this, to modders, to the people who build fantastic machines and worlds in game, to all those that host and curate great communities, I often wonder what it would take to get them all to migrate to another game. Even if it were just a 1-to-1 clone of Minecraft.

                                                              Minecraft is badly implemented and inefficient; there are numerous mods and hacks that optimise parts of the code. It runs badly on Linux, if at all. Mojang had a reputation for breaking things for modders and not improving the things that modders need to make things work. I am not sure if Microsoft have improved in that regard as I no longer follow the situation, but I would be surprised. The biggest reason, however, is that a private corporation is making boatloads of cash off all this free work, and the fans of the people doing the work are paying for it, without all those that really make the game great getting any recognition or recompense.

                                                              Open source Minecraft clones exist. I don’t know if they are any good, but at this time it does not appear that there has been any major migration away from proprietary Minecraft. So what would it take? Is anyone still involved in the community able to shed light on this?

                                                            1. 1

                                                                What would the modders gain from switching to a clone, though? Unless it was API (ABI?) compatible and had all the features that Minecraft already has, I don’t see people making the switch. As you said, it is the mods and the custom servers that make Minecraft fun to play, and however better optimized and more hackable an open source clone might be, it would take a lot of effort to make the switch appealing to the community.

                                                                Furthermore, this wouldn’t directly tackle the problem of the modders getting paid for their work; it’s one thing to say that there shouldn’t be a corporation making money off of a modding community as big as Minecraft’s, and another to actually get modders paid for their thankless work. A lot of modders (try to) get funding independently via platforms like Patreon. I don’t think there’s any game with a system for paying modders integrated into it that’s worth following. You could look at Roblox for something that kinda works, but as some recent investigations have shown, it’s still a terrible system that preys on children and only lets the corporation backing it and a few of the most successful modders reap any of its benefits.

It’s not like Mojang/Microsoft have abandoned the game either; it’s still getting major updates, so obviously people would want to play the version that has the new blocks and animals and improved cave generation, which means any implementer of an open source clone would constantly be playing catch-up with the original version. I could see it happening if Mojang abandoned the Java version, but not otherwise.

                                                              (As for the community insight, I do occasionally play Minecraft with friends but I’m not involved in the community further than that.)

                                                              1. 4

Short term: ownership of the code (community not individual), the ability for their users to use their product for free, better optimised code, the ability to define the API, cross-platform support, no proprietary copyright, an end to surprise breaking changes, escape from user and modder domestication.

                                                                Longer term: A much more mod friendly API, systems for dealing with compatibility issues and conflicts, many other technical advantages.

(Disclaimer: the information in the following paragraph is based on the assumption that things haven’t improved recently; I do not have up-to-date information.) Microsoft releases updates, but much as Mojang did in the past they waste a lot of effort on content updates for vanilla players, as those are a huge part of the customer base. They will add something like a new animal, despite that animal being available in a mod for years, a mod which also contains dozens of other animals. They rarely if ever release improvements for modders. They often make changes to the API which require modders to scramble to fix things. They do fix bugs, which is welcome, but bugs that affect modders are often ignored in favour, again, of vanilla players.

Lastly, it is not about modders getting paid. Modders work for free; they understand this when they start out and accept it. It is a labour of love. The issue is that Microsoft profits from their work. Microsoft’s only real contribution to that work is buying the IP (and the community). The Minecraft modding community is vast, prolific and talented; they could upgrade a random independent project to feature parity with vanilla Minecraft in a matter of days if they all migrated together.

                                                                Edit: I felt uncomfortable speaking for a community I am not really active in, so I went to the modders discord and put this to them. It appears a lot is wrong in what I have said. The main disagreements were that Mojang was never as bad as I implied, that Microsoft is making a lot of welcome improvements to the codebase, and that some modders do in fact get paid. In general people seem pretty happy with the current state of affairs and have little interest in migration.

                                                                At first people were very defensive, but after some discussion some shortcomings of the current situation were admitted to. There are issues with the modloaders and server plugin frameworks fragmenting (these are not officially supported and purely built by modders), issues with some features being hardcoded in a mod-unfriendly way, some resistance to implementing fixes and improvements requested by the community, and also some technical problems that went way over my head. They also played down the value of modders to the minecraft community at large. I am not sure I agree on this last point, there may be a sense of false modesty here.

                                                                Regardless, please take my uninformed views with a large pinch of salt.

                                                                1. 2

Thanks a lot for reaching out to the mod community and requesting their views. I don’t play Minecraft, but the interaction between big companies and the people who do work for them for “love” has always interested me.

                                                            1. 3

Wouldn’t it be better just to change how you represent time? If you set it to be integer milliseconds since program start you will always have perfect precision on the time, and if you need to convert it to a floating point number for a calculation you can deal with precision at each calculation as necessary.
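
A minimal sketch of that approach, in Rust for concreteness (the Clock type and its method names are purely illustrative, not from any particular engine): the authoritative time stays an integer, and the conversion to floating point happens per calculation, after the range has been reduced so the float stays small.

    use std::time::Instant;

    struct Clock {
        start: Instant,
    }

    impl Clock {
        fn new() -> Self {
            Clock { start: Instant::now() }
        }

        // Exact integer milliseconds since program start; never loses precision.
        fn elapsed_ms(&self) -> u64 {
            self.start.elapsed().as_millis() as u64
        }

        // Convert to float only at the point of use, after a modulo has
        // reduced the range, so rounding stays bounded regardless of uptime.
        fn phase(&self, period_ms: u64) -> f32 {
            (self.elapsed_ms() % period_ms) as f32 / period_ms as f32
        }
    }

Because the integer counter is exact, any rounding error is confined to the individual calculation instead of accumulating as the program runs.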

                                                              1. 3

                                                                Yes, but this is for contexts where you can’t do that, e.g. calling trigonometric functions on the GPU.

                                                                1. 2

                                                                  GPUs optimize for 32-bit floats, and many kinds of floating point data may get involved in drawing (time, orientation, color, hit points…). I’ve never gotten as far as shipping a game, but it’s natural to default to 32-bit floats for just about everything that’s not an integer, and then you don’t have to convert, whether per drawcall, per vertex, or per pixel. With VRR you can’t even depend on the display interval staying uniform anymore. So I see the benefit in having a general solution even if it is a “trick.”
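
To put a rough number on that: an f32 has a 24-bit significand, so once a raw seconds counter grows large, the gap between adjacent representable values exceeds a frame time. A small standalone illustration (plain Rust, nothing engine-specific assumed):

    fn main() {
        let t: f32 = 1_000_000.0; // about 11.6 days of elapsed seconds
        let next = f32::from_bits(t.to_bits() + 1); // next representable f32
        // Prints 0.0625: at this magnitude f32 can only step in ~62 ms
        // increments, far coarser than a 60 Hz frame.
        println!("smallest step at t = {}: {} s", t, next - t);
    }

Hence the appeal of a general trick that keeps the values handed to the GPU small, rather than hoping the session stays short.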

                                                                1. 1

This sounds amazing. When I was younger I wanted to be a game dev and although I never went to any game dev school, I gained some experience of the industry afterwards. I eventually quit the game industry because I could not reconcile the business practices with how I wanted to live my life. It is worth noting here that it was not just about the bad working conditions. The whole business of milking old re-sold IP till it is wrung dry, pandering to the lowest common denominator in the audience, interference in the creative process from people in marketing and management who had no-one’s best interests at heart, the people at the top being the least competent but having the richest parents… the same issues that plague every other industry seem to be magnified in this one.

                                                                  I moved on to hobby projects and normal non-game software development. I am still somewhat sad about the fact that I have to spend so much of my time working for money and don’t get to realise all the ideas in my head.

                                                                  For me this project sounds like a wonderful opportunity simply to collaborate with other people while learning how to make cool things. I wish it had been around when I needed it.

I am not telling any current or soon-to-be game design student to “get the hell out of there before the corruption overtakes you!”

I am. I know it sounds harsh but I don’t think it is worth living like that. From now on I will be able to change that to ‘maybe check out open gamedev school’, so that’s an improvement. In the end all I want is to be able to do creative programming work and get paid a living wage for it. Well… maybe one day.

                                                                  1. 2

Nice paper, nothing hugely surprising in the results, but it is nice to have a solid basis for evaluating this sort of thing. I have always been a fan of peer moderation systems and would be interested to see a similar study done on something like stackoverflow or even lobste.rs, and ideally a comparison of data across the two systems.

                                                                    1. 5

                                                                      This problem applies to all human communication, as does the advice given. There is one piece of advice which is missing however:

Define your terms: In programming most terminology is more clearly defined than in many other situations, but there is still plenty of room for ambiguity. When you say the software is laggy, do you mean there is a network latency problem, a rendering frame-rate problem or an interaction responsiveness problem? When you say back-end, do you mean the non-UI client logic or the server logic? When you say crash, do you mean it hangs, crashes to desktop, resets itself to an earlier state, or shows an exception report? When you say freedom, do you mean (insert any of the 7 billion existing definitions of freedom here)?

This is partly covered by “Make sure the ‘who’ and ‘what’ are clear”, but not completely. There may be ambiguous terms outside of those areas.

                                                                      1. 2

When I enable JavaScript on this page my CPU usage spikes to 100% and my whole system starts choking. After a few seconds Firefox’s CPU use drops to under 50%, but the page still stutters when scrolling and other pages are affected too.

I have a powerful machine. It’s a bit ironic considering the word ‘minimal’ in the title. I could not read the article, so I don’t know if minimal was referring to browser performance or something else.

                                                                        1. 1

                                                                          Indeed :O

I am not sure what is happening. Maybe it’s all those videos set to autoplay, with WebM by default.

                                                                          I have changed the videos from autoplay to controls. This way, you can play on-demand. Hopefully, this will make things leaner and less CPU intensive.

                                                                          1. 3

That seems to have fixed it. Usually Firefox blocks autoplaying videos, but I had that turned off because I was testing something for work.

                                                                        1. 26

I think the mere existence of ‘Tesla Bros’ should give everyone else in the world pause. When someone is so fanatical about something, that usually indicates something is not right.

                                                                          Of course trying to convince people that luxury cars are the solution to pollution is so suspect in itself that I wonder whether anyone is even paying attention anymore.

On a more on-topic note: it seems to me that if you are a UI designer and you get given an existing UI to work on, you have to make your mark. Otherwise people will say “what do we even need you for?”. Or at least that is how the designers seem to instinctively feel. As a programmer I get the distinct impression that UI designers want to redesign the UI every few months in a kind of slash-and-burn, rebuild-from-scratch way. This results in a lot of unnecessary change, and often change that is more drastic than is really appropriate. I can imagine that if you are hired to work on an existing UI and you say ‘the existing UI is fine, I didn’t change anything’, it might be a bit awkward in the next meeting. On the other hand changing UIs, even for the better, has a huge cost for the entire user base, and finding out that it needs no changes should be a welcome relief to everyone involved.

                                                                          I think part of the issue is that aesthetics are so hard to define, subjective and constantly changing. I wish this fact would be acknowledged a bit more though. It’s fine to change something if you think it is an improvement, but we shouldn’t pretend that aesthetic improvements are objectively or obviously better.

                                                                          1. 4

                                                                            One thing I want to do when I have a lot of free time is a blind test in which I would present participants with a list of ten excerpts from either articles on UX and UI design, or high-ranking posts on /r/stonerphilosophy, and see if they can figure out which one’s from where.

                                                                            If you have two buttons, there is a third ‘object’ created, the decision a user must make on which button to tap.

                                                                            I’m, like, way more partial to the Pythagorean approach that there are in fact four objects created by two buttons, since a single button already creates the decision of whether to press it or not. Duuuuude.

                                                                            1. 1

                                                                              Of course trying to convince people that luxury cars are the solution to pollution is so suspect in itself that I wonder whether anyone is even paying attention anymore.

                                                                              haha, I loved that quip. Thanks for making my morning!

                                                                            1. 10

                                                                              Thanks for (almost) leading with

                                                                              Nor are we talking about what comes to mind for engineers accustomed to classical cryptography when you say Hybrid.

                                                                              (Such engineers typically envision some combination of asymmetric key encapsulation with symmetric encryption; because too many people encrypt with RSA directly and the sane approach is often described as a Hybrid Cryptosystem in the literature.)

                                                                              Because that’s what I was expecting when I read the headline.

                                                                              In a similar vein, when the post uses “PQ” to mean post-Quantum, it never states that PQ means post-quantum, and that sent me casting around for a few minutes. It might be good to call that out, too.

                                                                              I like this analysis:

                                                                              It’s very tempting to look at this and think, “Wow, that’s a lot of work for something that only helps in 12.5% of possible outcomes!” Uri didn’t explicitly state this assumption, and he might not even believe that, but it is a cognitive trap that emerges in the structure of his argument, so watch your step.

                                                                              Second, for many candidate algorithms, we’re already in scenario 6 that Uri outlined! It’s not some hypothetical future, it’s the present state of affairs.

                                                                              and I think it’d be worth pointing out that “hybrid is useless but does not further compromise security” is really the default for the other scenarios. Because some of them read like hybrid is a liability in that regard too.

                                                                              1. 4

                                                                                In a similar vein, when the post uses “PQ” to mean post-Quantum, it never states that PQ means post-quantum, and that sent me casting around for a few minutes. It might be good to call that out, too.

                                                                                Good point, thanks! I’ll fix that posthaste

                                                                                1. 1

                                                                                  Same for me with CRQC

                                                                                2. 3

                                                                                  Clearly PQ refers to Perceptual Quantiser, gotta make sure your crypto works in HDR these days!

                                                                                  1. 3

                                                                                    Ohhh so that’s what all the talk about perceptual hashing is

                                                                                1. 26

                                                                                  I have a “no language flamewars” rule but I’d like to make an exception because this reads a lot like a similar “defense” I wrote about C++ way, way back (2007-ish?) and which I have repented in the meantime. Not due to more exposure to language theory but due to more practical exposure to large codebases, in C++ and other languages.

                                                                                  I think the author takes some points for granted, and they’re not universally correct. Some of these include:

                                                                                  1. That language complexity and application complexity are not just separable, but that more of the former means less of the latter.

                                                                                  To illustrate a counterpoint (“show the door before showing the key”) with a problem that I’ve been banging my head against for weeks: safely sharing data between IRQ and non-IRQ contexts in Rust is… not fun. (see https://github.com/rust-embedded/wg/issues/294 or https://github.com/rust-embedded/not-yet-awesome-embedded-rust#sharing-data-with-interrupts for some background discussion). Even the best attempts at doing it idiomatically have been absolutely horrifying so far. Debugging any code that uses them is mind-boggling, and exposes you to virtually every nuance of the underlying ownership and concurrency model of the language, many (most?) of which either don’t apply at all, or don’t quite apply the way the underlying model thinks they might. Most of this complexity stems from the fact that you’re not just building abstractions on top of the ones that the hardware provides, you’re also building on top of (or reconciling them with) the ones that the language provides, and the other abstractions built in your code. You’re effectively building on three mutually-incompatible foundations. Kind of like building a bridge on pontoons that are themselves floating in large tanks placed on other pontoons on a river.
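
For readers who haven’t seen it, the idiomatic pattern the linked rust-embedded discussions converge on looks roughly like the sketch below (assuming the cortex-m crate; the names and the u32 payload are just illustrative). The triple nesting of Mutex, RefCell and Option is exactly the sort of thing meant by “horrifying” above:

    use core::cell::RefCell;
    use cortex_m::interrupt::{self, Mutex};

    // Mutex for cross-context safety, RefCell for interior mutability,
    // Option because the value may not exist yet at boot.
    static COUNTER: Mutex<RefCell<Option<u32>>> = Mutex::new(RefCell::new(None));

    fn init() {
        // interrupt::free disables interrupts and hands the closure the
        // critical-section token that Mutex::borrow requires.
        interrupt::free(|cs| {
            COUNTER.borrow(cs).replace(Some(0));
        });
    }

    // Imagine this as the body of an interrupt handler.
    fn on_irq() {
        interrupt::free(|cs| {
            if let Some(ref mut n) = *COUNTER.borrow(cs).borrow_mut() {
                *n += 1;
            }
        });
    }

Every access, even from within the handler, pays for a critical section, and the types expose the entire ownership and concurrency model at once.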

                                                                                  More generally: except for the most trivial of bugs, which stem only from oversights (forgot to call an init function, forgot to update some piece of state, whatever), most bugs are a mismatch between your understanding of what the code does, and what it actually does. Understanding the language, in all of its nuances, is a pre-condition to figuring out the latter. You rarely get to reconcile the two without understanding the language, except by sheer luck.

                                                                                  1. That you can build incrementally more powerful languages only by incrementally adding more features

                                                                                  There’s a tongue-in-cheek law that I’m very fond of – Mo’s Law of Evolutionary Development – which says that you can’t get to the moon by climbing successively taller trees.

                                                                                  A hypothetical language like the one the author considers – exactly like Python, but without classes – would obviously suck because a lot of the code that is currently very concise would get pretty verbose. Presumably, that’s why they added classes to Python in the first place. However, that doesn’t mean a better abstraction mechanism, which could, for example, unify classes, dictionaries, and enums (currently provided through a standard library feature) wouldn’t have been possible. That would get you the same concise code, only with less complexity.

                                                                                  It’s probably not the best example – maybe someone who’s more familiar with Python’s growing pains could figure out a better one – but I think it carries the point across: not all language features are created equal. Some of them are really good building blocks for higher-level abstractions, and you can express many things with them. Others not so much. Having more of them doesn’t necessarily make a language better at expressing things than another one.

                                                                                  Edit: Or, to put it another way, there is such a thing as an “evolutionary leap”. One sufficiently powerful mechanism can make several less powerful mechanisms obsolete. Templates and template functions, for example, made a whole class of C hacks (dispatch tables, void pointer magic) completely unnecessary.
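
As a concrete illustration of that leap, sketched here in Rust generics since they play the same role as C++ templates (the function is hypothetical): one type-checked generic replaces what C would handle with qsort-style tables of void-pointer comparators, checked only at runtime.

    // One generic function, monomorphised and type-checked at compile time,
    // where C would need a separate void* comparator per element type.
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
        items.iter().copied().reduce(|a, b| if a > b { a } else { b })
    }

    fn main() {
        assert_eq!(largest(&[3, 1, 4, 1, 5]), Some(5));
        assert_eq!(largest(&[2.5f32, 9.0, 0.5]), Some(9.0));
    }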

                                                                                  1. That language complexity is equally distributed among language users.

                                                                                  Many years ago, when C++ was more or less where Rust will be in a few years, you’d hear things like “oh yeah, we’re a C++ shop, but we don’t use multiple inheritance/exceptions/whatever”. Because of the sheer complexity of the language, very few people really knew, like, all of it, and most of them were either the ones writing compilers, or language evangelists who didn’t have to live with boring code for more than a consulting contract’s length. This led to the development of all sorts of “local dialects” that made integrating third-party code a nightmare. This is, I think, one of the reasons why Boost was so successful. For a long time, there were a lot of things in Boost that were much better implemented elsewhere. But getting these other implementations to work together – or, actually, just getting them to work with your compiler and compiler flags – was really unpleasant.

                                                                                  1. That there are sufficiently few abstractions in the world of software that you can probably handle them all at the language level

                                                                                  I don’t have proof for this, it’s just an opinion. IMHO there are so many abstractions out there, many of them specific to all sorts of problems, that the chances of a language ever encompassing a sufficiently large set of them in a practical manner are close to zero. More concisely: I think that hoping to solve all programming problems by providing a specific, highly expressive abstraction for each of them is about as productive as hoping to solve them by reducing them all to lambda calculus.

                                                                                  1. 7

                                                                                    Interesting response, thank you. I feel like neither this nor the article have a tone that is so absolutist or fanatical that there is a risk of a flamewar. I am glad I read both and feel I understand more about this topic now.

                                                                                    It seems to me after reading this that the best solution for language complexity is exactly what we are all doing already:

                                                                                    Design some languages, observe what patterns arise, extend those languages to simplify and codify those patterns, repeat until the language becomes too clunky and over-complicated. Then create new languages that use the lessons learned in the previous generation, but avoid the pitfalls. Patterns that are successful and widely used will slowly permeate every language removing the need to learn them when changing languages (apart from syntax differences).

                                                                                    As long as there is a rich enough ecosystem everyone can choose the level of complexity they need/want, and the average expressiveness to complexity ratio for all languages should increase over time.

                                                                                    1. 9

                                                                                      Design some languages, observe what patterns arise, extend those languages to simplify and codify those patterns, repeat until the language becomes too clunky and over-complicated. Then create new languages that use the lessons learned in the previous generation, but avoid the pitfalls. Patterns that are successful and widely used will slowly permeate every language removing the need to learn them when changing languages (apart from syntax differences).

                                                                                      I like the sound of this but it seems to rely on the unstated assumption that good design plays a large part in language adoption, which I believe is disproved by looking at … gestures at “the industry”

But you’re absolutely right that learning to identify emergent patterns is probably the most important skill in designing a language. The problem with the article IMO is that it says “being complicated is OK”, which is a major oversimplification. The right conclusion to draw is that you have a limited complexity budget, and you need to spend it wisely. Refusing to add any complexity at all is a mistake just as adding complexity on redundant features is a mistake. In the end it’s kind of a rehash of the “accidental vs essential complexity” discussion in Out of the Tar Pit.

                                                                                      1. 2

                                                                                        seems to rely on the unstated assumption that good design plays a large part in language adoption, which I believe is disproved

I agree, and xigoi below made much the same point. But I believe that it is unproductive to try to dictate to others what language they should use. Sure, if they would listen it would probably make their lives easier, but you can’t tell people they are wrong; it simply does not help. What this system at least offers is that those with an open mind and a willingness to try new things have the opportunity to get better tools regularly. The industry does follow along eventually; after all, most of the world isn’t using the languages from the 80s anymore. But if you are spending millions developing commercial software it is not practical to switch languages frequently, so some lag is to be expected.

                                                                                      2. 3

Oh, yeah, I don’t think this is inflammatory in any way :). I just… I usually prefer to stay away from discussions on the relative merits of languages and various tools. They’re generally not very productive.

                                                                                        1. 3

                                                                                          The only problem with this is the part where nobody will use the better languages because “there’s nothing wrong with ${old language}”.

                                                                                          1. 2

                                                                                            It’s really hard to explain to people that a language is better because it can’t do something. When Java was introduced, people complained that it couldn’t do pointer arithmetic yet the lack of pointer operations that can violate type safety is a key benefit of languages like Java. It’s easy to explain that language X is better than language Y because X has a feature that Y doesn’t. It’s much harder to explain that language Y is better because it lacks a feature that X has and this feature makes it easier to introduce bugs / harder to reason locally about the effects of code.
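
A tiny illustration of the kind of operation being given up, sketched in Rust since it has both a safe subset without it and an unsafe escape hatch that restores it (the values are arbitrary):

    fn main() {
        let xs = [1u32, 2, 3];
        let p = xs.as_ptr();
        // Safe Rust offers no way to index past the array: bounds are
        // checked and references always point at valid data. Raw pointer
        // arithmetic brings the C-style hazard back: this read is out of
        // bounds and therefore undefined behaviour.
        let oob = unsafe { *p.add(10) };
        println!("{}", oob);
    }

The pitch for Java, or for the safe subset of Rust, is precisely that the last two lines cannot be written, and that benefit only ever shows up as bugs you never had.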

                                                                                            1. 2

                                                                                              That reminds me of a former colleague of mine who had been freelancing for a long time with PHP and didn’t understand my fascination with other languages. He claimed PHP was the best language and would go on lyrically about how great its array functionality is. When asked what languages he knew… PHP. Oh, and Pascal from back in school.

                                                                                              1. 1

As someone who has been pitching Rust for years, I don’t fear this argument at all. That’s a “that’s all I have left” argument.

Now there are multiple cases: 1) there’s something wrong with the old language, they just don’t see it yet; 2) there’s something wrong with the old language, but they’ve made the calculation of how much the new language would fix; 3) the new language doesn’t meet their needs; 4) they just need to hold a line and are not willing to switch anyway, without giving any thought to the problem.

1), 2) and 3) can change very fast - the key here is taking that answer at face value; they will probably come back to you. With 4), you don’t want to work with them anyway.

                                                                                            2. 3

                                                                                              Some of them are really good building blocks for higher-level abstractions, and you can express many things with them. Others not so much. Having more of them doesn’t necessarily make a language better at expressing things than another one.

                                                                                              This reminds me of Lua’s “mechanisms instead of policies” principle.

                                                                                            1. 1

                                                                                              The software support is the biggest reason. iPhones get years of security and feature updates. I can’t think of any Android phone that gets updates as long as an iPhone 6S.

Neither of those platforms offers real user ‘repair’ of the software, so the comparison is not a useful one here. You might argue that you can more easily write software for your Android phone than an iPhone, or that it is even possible to create your own version of Android or install one from someone else. The reality however is that this is hard and complicated. It is definitely not an option that the average consumer has available. For most people Android is exactly as much of a walled garden as iOS. I am a programmer, but when I tried getting an operating system I had control over onto a Samsung years back, I gave up in frustration after hours of failures. I also soft-bricked it because of vendor lock-in measures.

Compare this with an actual right-to-repair phone, such as the PinePhone. I got one to test it out, and although I would not recommend it to my parents, I was able to flash the OS and try out a bunch of different distros. I think I could even teach my parents to flash a PinePhone; I just don’t think that is something they want to do. I can use Ubuntu, Debian, or Arch, just to name some big ones. I am not afraid that there will no longer be a supported OS for my phone in 10 years. I might not have the newest developments, but the archived images will still work, and the phone will keep the full functionality it currently has until it falls apart. Also, I can write my own software for it as trivially as making a .py file.

                                                                                              What apple and google call ‘support’ I neither want nor need. I also feel that support is a massive euphemism for what they really offer which is more like a combination of big brother and nanny. I want to own my devices not rent them from a megacorp.

                                                                                              I feel this also reflects the general tone of the article. Rather than focusing on what right to repair could mean if applied sensibly, it merely lists a few ways in which it could be applied that are not very effective or useful.

                                                                                              1. 1

                                                                                                The same line rubs me the wrong way. The iPhone 6S was released 5 years ago, as was the FairPhone 2; they’re both officially supported during the same time. Looking at older generations to see how long support might go for (there’s no commitment from Apple that I can find), the iPhone 5 was released in 2012 and had its last software update in 2019. 7 years is very good for official support, and does beat any Android device thus far, but the difference is that there is still support and security updates available for the Nexus 4 and Galaxy S3 (both released 2012) via third parties such as e.foundation. Those updates are expected to continue as well.

                                                                                                As noted, not everyone has an interest or the ability to install a third party ROM, but the availability of them means that it’s reasonable to keep these old devices running well beyond the official manufacturer’s intent, which is something that is not possible with iPhones. Once Apple gives up on them, they’re done.

                                                                                                I agree that the Pinephone (and Librem 5) are likely to be even better for longevity, and I hope that postmarketOS matures on those platforms and then the gains can be brought to other (mainly Android) devices through community support.

                                                                                                I appreciate the emphasis on trade-offs that the article tries to make, but I also agree that it appears to be unbalanced in the focus on somewhat strawman arguments over particular technology choices. It’s good to think about some of these potential issues (which I think was the intent) but then the tone of the article and the solutions provided end up taking away from that point.

                                                                                                1. 1

                                                                                                  What apple and google call ‘support’ I neither want nor need. I also feel that support is a massive euphemism for what they really offer which is more like a combination of big brother and nanny. I want to own my devices not rent them from a megacorp.

                                                                                                  Absolutely. But…

                                                                                                  I think I could even teach my parents to flash a pine phone, I just don’t think that is something they want to do.

And it’s not just parents. Programmers also tend to hate “yak shaving”. There are so many cases where people are too lazy or uninterested to even want to exercise the control they could potentially have that it’s not a huge market. The mass-marketed devices all offer no control and people are just fine with it. The second-order effects that this lack of control brings might be a problem, but most people aren’t aware that this is how it works, and just shrug and buy a new gadget.

                                                                                                  I guess I’m just getting old and cynical, but I see very little hope for things really improving.

                                                                                                  1. 1

                                                                                                    I guess I’m just getting old and cynical, but I see very little hope for things really improving.

                                                                                                    Actually the right to repair train has a lot of momentum in the European parliament and some companies that sell to Europe are already taking steps to comply. If you live in the US there might be more grounds for cynicism, but then again you can always buy your tech from European companies.

                                                                                                    It is not about the majority that might not be interested in a particular freedom, it is about the few who are. Not many people use mechanical typewriters or listen to Blind Melon but the government should still protect the right to do so. Also most people do get a bit upset when they find out the phone they paid 1k for has a dead battery and they are not legally allowed or practically able to replace it. People also would love to be able to go to a repair shop and pay a reasonable price for a basic component replacement. Most of them may have forgotten that that was ever a thing but they would pick it up again pretty fast.

                                                                                                1. 6

                                                                                                  I think the idea that software doesn’t wear out is important, but after that, the perspective that software doesn’t fail doesn’t really help you:

                                                                                                  You never have anything close to a full specification, so it’s impossible to know if it’s correct or not.

                                                                                                  Your software will have to run in a large, changing range of software and hardware environments, that are all vastly under specified. So it might well work here and now, but not there and then. Bitrot is real.

                                                                                                  For example, you have to make a bunch of assumptions about what the OS or the CPU will do. But then Spectre comes along, or a bug in a dependency or platform. It does not help to argue “my software is correct” - it still does the wrong thing and still has to be fixed.

                                                                                                  1. 4

An excellent example of how adhering to rigid definitions removes all utility from language. We can’t say software fails, for the reasons the article describes, which are technically correct. We can only say that software does not work, and never did. But in reality, according to the definitions set out, there is no non-trivial software that does anything useful and also works. So we cannot say that a given piece of software works. And so we are left with no language to describe the difference between two pieces of software, one of which works most of the time and one of which is plagued by constant failures and bugs.

                                                                                                    I am normally the one advocating for rigid definitions, but they have to be useful ones. The main hallmark of a bad definition is that it encompasses almost everything, or almost nothing within the set that it divides. In this case the word ‘working’ when applied to software encompasses almost nothing. This means that it is a bad (non-useful) definition.