Threads for calvin

  1.  

    I don’t have a great deal of sympathy with this.

    FreeBSD’s run-time linker has supported GNU-style hashes since 2013. I think glibc has supported it for at least 2-3 years longer. Almost anything linked since then will have used --hash-style=both and so not notice. Anyone who has created a binary since then that this doesn’t work with has explicitly chosen to opt out of faster load times.

    If you want a compat version for such programs, then you can always build glibc with --hash-style=both. You could easily create a container base layer for running such programs.
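
    For anyone who wants to check their own binaries, a quick sketch (the binary and file names are just examples):

      # see which hash sections a binary carries: .gnu.hash (DT_GNU_HASH), .hash (DT_HASH), or both
      readelf -d ./someprog | grep -i hash

      # explicitly ask the linker for both styles when building, for maximum compatibility
      gcc -o someprog someprog.c -Wl,--hash-style=both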

    1. 7

      To clarify: when you say you don’t have a great deal of sympathy “with this”, do you mean the situation on Linux? Or with this article? Because if the latter, the entire point here is that WINE has provided an unintentionally stable ABI despite Linux distros not reliably doing or supporting those things, and if the former, his entire point is that Linux isn’t doing things it could do to avoid this in the way that it sounds like FreeBSD did.

      1.  

        That, and it sounds like DT_GNU_HASH is obscure due to its position as a small piece of ELF, and not very well documented.

    1. 10

      Unless Gitea or sr.ht-style solutions provide a decentralized login system so that users don’t have to sign up for an account to open an issue or send an MR to any random project they find interesting, I find moving away from GitHub actually detrimental to FOSS contributions, at least for newcomers.

      1. 13

        I think it depends on what you consider the goals of “FOSS contributions” to be. Some people, myself included, are becoming a bit uncomfortable with the centralization and inertia that are locked up behind Github, on top of the fact that they have been pretty cavalier with releasing tools that enable people to violate established FLOSS licenses with ease.

        1. 10

          You don’t have to sign up for an account on sr.ht to open issues or send MRs.

          You should have an email client that you like though.

          1. 13

            You should have an email client that you like though.

            I hate git-send-email more than I hate GitHub. I reckon a lot of people are like me.

            1.  

              I fear it because I’ve never used it.

              It’s probably not a bad tool, depending on your setup for email. Requiring everybody who contributes to be at least technical enough to set up an email client might even help raise the value of contributions.

              At least that’s the hope.

              1.  

                You don’t need an email client for git-send-email. It does everything. You just need to give it your SMTP details https://git-send-email.io
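
                As a rough sketch, the whole setup is a handful of git config values and one command (the server, account, and list address below are placeholders):

                  # point git-send-email at your SMTP server
                  git config --global sendemail.smtpServer smtp.example.com
                  git config --global sendemail.smtpServerPort 587
                  git config --global sendemail.smtpEncryption tls
                  git config --global sendemail.smtpUser you@example.com

                  # send the latest commit as a patch to a (hypothetical) mailing list
                  git send-email --to="~someone/some-project@lists.sr.ht" HEAD^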

          2. 7

            Not for long! Gitea is actively developing federation using the ActivityPub-based Forgefed protocol so that you’ll be able to interact with users and repos on any Gitea instance using only a single Gitea account. Some federation features will probably be included in the next major Gitea release, 1.18.

            Also, both Forgefriends and ForgeFlux are developing ForgeFed proxies for GitHub, GitLab, and sr.ht, so eventually (in a year?), you’ll even be able to use a single Gitea account to interact with GitHub repos.

            1. 5

              You’re right, but Github already has that problem. It’s just that by now everyone has a Github account.

              So what they really need is a big “login with your Github account” button? ‘Cause github sure doesn’t take account logins from other things.

              1.  

                Frankly, I think GitHub is the Yahoo for developers. Sold once too often. And past its time. Hell, it was cool once.

              2. 6

                I really hope for stuff like Federated software forges

                1.  

                  gitea does offer an OAuth2 provider and I believe has the capability of accepting logins from GitHub or GitLab, you just need to set it up. Personally, I don’t think projects like that will catch on the same way as GitHub just because they require a level of effort to maintain and keep running. For small personal projects, or niche stuff, sure go for it.

                  Also, I think Sourcehut (although I still like to call it “sir hat”) has a way to contribute via email alone, even bug reports. It does use git-send-email for mailing lists, but you can also email the project with an issue or bug report and converse with folks without ever needing an account. Generally, it feels like Sourcehut tries to stick to traditional collaboration techniques with Git, which works out well for personal projects and projects maintained/used by technical-oriented people. I like its compromise between fully owning your own project management tooling (and having to set up all those mailing lists, Git/web hosting, and IRC stuff yourself) and dealing with however GitHub decided to build their pull request UI, or force your users to create a GitHub account just to contribute.

                  1.  

                    *cough* OpenID *cough*

                    But honestly, these days the kind of users you talk about will be OK with a “log in with GitHub” or “log in with Google” button.

                  1. 4

                    FWIW, downstreams don’t seem to be a fan of this - at least Alpine.

                    1. 3

                      The Alpine reasoning seems flawed to me. Having a problem with the fact that a package has decided to hard code a host when the user only specified the protocol and resource (i.e. you said https://foo.jpg so we’re just gonna pick https://rando.com/foo.jpg) is one thing. But the reasoning given for rejecting the package was that the actual host hard coded may become malicious at some unspecified point in the future. The same could be said about literally any HTTPS endpoint anywhere.

                      1. 5

                        IPFS is a content distribution platform, so there’s going to be content that the “media mafia” doesn’t like (just like torrents). ffmpeg is a tool for consuming such content. If you have a (silent) hardcoded gateway/proxy for IPFS within ffmpeg, and IPFS takes off as the next torrent, and everyone happily uses the same proxy, it’s possible for copyright enforcement agents to silently suborn the proxy and keep it running to gather evidence.

                    1. 5

                      I enjoy Jeff’s investigations but I fear any month now he’s going to snap and sell his mac forever.

                      1. 6

                        Honestly, he’s very much a self-hating Mac user - I don’t know what value he’s getting out of the platform based on how much he actively fights it and dislikes the direction it’s going in. He’d probably be a lot happier if he did just snap and buy a Surface or whatever. (I speak as a Mac user mostly satisfied with the platform.)

                        1. 3

                          I kind of get it. I spent a while doing more or less the same thing, only on Linux. It’s hard to throw away twenty years’ worth of muscle memory, knowledge and intuition (even if some of it is wonky because it’s been technically ported from FreeBSD :-P). It’s also hard to justify it: you can be super productive on a bad platform that you know really well, especially if knowing it well allows you to bypass the things that don’t work well. I’m a blissfully indifferent Mac user now. He’ll snap, too, and it’ll all feel better eventually :-D.

                      1. 5

                        The hype cycle around Kubernetes has been interesting to watch.

                        What’s left after close to a decade (in my opinion, after deploying K8s at two different companies at this point) is a couple of very good managed providers (GKE probably being the best, EKS is fine) and some good basic tooling to get most of your workloads running on there stably. Kubernetes, when left to handle stateless workloads that need to auto-scale, is pretty hard to beat on cost and performance versus higher-level managed offerings like Fargate, though it comes with a bit of a learning curve.

                        There are some bodies littered on the road behind us though, namely:

                        • running your own cluster NOT managed by a major provider on cloud VMs takes a lot of work (doable! but a lot of work.), bare metal is borderline futile
                        • YAML was a mistake
                        • statefulsets and PVC’s are really provider sensitive and can be a major pain still (there are a couple notable exceptions to this, including CockroachDB running their managed offering on GKE with PVCs)
                        • security and hardening are non-trivial though managed providers help a ton here
                        • most of the ecosystem not created by Google can be written off

                        I’m curious what the long term story around Kubernetes is, but with a solid core of primitives it handles a very specific slice of infrastructure really well, and I’ve overall been happy with it.
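
                        For the stateless auto-scaling case, a minimal sketch of what that looks like in practice (names and thresholds are made up):

                          # run a stateless web service and let the cluster scale it on CPU usage
                          kubectl create deployment web --image=nginx --replicas=2
                          kubectl expose deployment web --port=80
                          kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70
                          # (CPU-based autoscaling assumes the pods have CPU requests set)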

                        1. 6

                          My suspicion about k8s is it’s not interesting for scaling (most people don’t need to scale), but more for a configuration based approach to service management. You could build a modern day MMC or cPanel based on it.

                          1. 5

                            It seems GNOME’s building for a wide audience of “normies” while their actual users are “geeks”. Their heart is in the right place in wanting an accessible and nice-looking UI, but they completely miss what their users want. They want freedom to tinker and break their stuff at the expense of accessibility and nice UI.

                            GNOME should stop fighting their users and stop breaking stuff out of spite. Any support request for a broken theme should be redirected to the distros that shipped it. Yes, it’s a big burden and might look like finger-pointing at times, but such is the cost of FOSS. As OP rightly mentioned, no one has infinite support capacity, and most GNOME users understand that.

                            1. 18

                              I’m a GNOME user, very much a geek, and I love the direction they’re taking. I don’t want to mess with my UI, I want it to get out of my way and let me use the computer. GNOME does that spectacularly well, much better than any other DE I have tried over the years. I love that I don’t have to tinker with it, because that lets me focus on what I want to do, rather than having to fight my DE. I do not enjoy tinkering with my desktop, it is not my area of interest. If it were, I’d use something else; that’s the beauty of having a diverse set of options. That GNOME focuses on providing an accessible, consistent experience out of the box with only a few knobs to tweak is great. It’s perfect for those of us - geek or non-geek alike, and anything in between - who just want to get shit done, and honestly don’t care about tweaking it to the last detail.

                              GNOME stays out of my way, doesn’t overwhelm me with tweaks and knobs I couldn’t care less about. It’s perfect. It’s perfect for me, a geek who keeps tweaking stuff that matters to him (like, my keyboard firmware is still not quite where I want it to be after half a decade of tweaking it). I love tinkering with things where tinkering makes sense. Tinkering with my firmware makes me more productive, and/or the experience more ergonomic, easier on my hands and fingers. Tinkering with my editor helps me get things done faster.

                              My DE? My DE stays out of my way, why would I want to tinker with that?

                              As for theming, I’d much prefer a single theme in light & dark variants where both of them are carefully designed, rather than a hodge-podge of half-broken distro-branded “stuff”. The whole “let’s make the distro look different” idea is silly, if you ask me. A custom splash screen, or background, or something unobtrusive like that? Sure. But aggressively theming so it’s distro-branded? Nope, no thanks. I’d much prefer if it didn’t matter whether I’m using RedHat, Ubuntu, or whatever else, and my GNOME would look the same. That’s consistent. I don’t care about the brands, it’s not useful.

                              So, dear GNOME, please keep on doing what you’re doing. People who don’t like the direction, have alternatives, if they like to tinker so much, they can switch away too. Those of us who want something that Just Works, and is well designed out of the box, we’ll stay with GNOME.

                              1. 6

                                I think the problem is, you’re not getting a desktop you don’t have to fight, you’re just getting a desktop that you can’t fight.

                                1. 12

                                  I am getting a desktop I don’t have to fight, thank you. I don’t want to fight it, either. If I wanted to, there are many other options. I prefer not to, and GNOME does what I need it to do. For me, that’s what matters.

                                  It doesn’t work for everybody, and that’s fine, there are other options, they can use something that fits their needs better. But do let GNOME fit ours.

                                  1. 4

                                    I mean, I guess I just don’t see why removing options would give you a desktop that you don’t want to fight. You don’t have to fight KDE either. The only difference, aside from default preferences, is that you can fight KDE if you want to.

                                    If Gnome can be a desktop you don’t have to fight without customisability, it can be a desktop you don’t have to fight with customisability just as easily.

                                    1. 5

                                      You misunderstood. I don’t care about customizability of my desktop. I want it to stay out of my way, and provide a nice, cohesive design out of the box. Simple as that. If the developers believe the best way to achieve that is libadwaita, I’m fine with that. I don’t want to tinker with my DE. If I have to, I’ll find one where I don’t.

                                      Besides, libadwaita can be customised. Perhaps not themed, as in, completely change it, but it does provide the ability to customise it. Pretty much how macOS Carbon does customisation. Personally, I find libadwaita’s customisation a lot more approachable than GTK3’s theming. It’s simpler, easier to use.

                                      1. 4

                                        I think people misunderstand - it’s not just “fewer options are simpler for the user”, but also simpler for the people maintaining the application, as the application has fewer permutations of configuration to test and debug.

                                  2. 4

                                    And what happens if I’m using KDE and need to use a single GNOME app?

                                    You install one GNOME app, which, so far, was automatically themed with Breeze and looked at least somewhat like a native app, and used native file pickers. Now with the recent GNOME changes, just installing a single GNOME app forces you to look at their theme, and forces you to use their broken filepicker.

                                    Apps should try to be native to whichever desktop they’re running in, they shouldn’t forcefully bring their own desktop into whatever environment they’re in.

                                    GIMP isn’t using adwaita on Windows either, and neither should Bottles bring adwaita into my KDE desktop.

                                    1. 11

                                      And what happens if I’m using KDE and need to use a single GNOME app, and now I’m forced to look at their hideous and unusable adwaita theme?

                                      Then you go and write - or fund - a KDE alternative if you hate the GNOME look so much, and there’s no KDE alternative.

                                      GNOME is like a virus, it infests your desktop more and more.

                                      Every single toolkit is like that.

                                      QT isn’t any different. macOS’s widget set isn’t any different. Windows’ isn’t any different. They all look best in their native environments, and they’re quite horrible in others. The macOS and Windows widget sets aren’t even portable. QT is, but even when it tries to look native, it fails miserably, and we’d be better off if it didn’t even try. It might look out of place then, but it would at least be usable. Even if it tries to look like GNOME, it doesn’t, and just makes things worse, because it looks neither GNOME-native, nor KDE/QT-native, but a weird mix of both. Yikes.

                                      GNOME is doing the right thing here. Seeing apps of a non-native widget set try to look native is horrible, having to fight to make them use their native looks rather than try - and fail - to emulate another is annoying, to say the least. I’d much prefer if QT apps looked like QT apps, whether under KDE or GNOME, or anywhere else.

                                      The only way to have a consistent look & feel is to use the same widget set, because emulating another will always, without exception, fail.

                                      Now with the recent GNOME changes, just installing a single GNOME app forces you to look at their theme, and forces you to use their broken filepicker.

                                      Opinions. I see no problem with the GNOME file picker. If you dislike it so much, don’t install GNOME apps, help write or fund alternatives for your DE of choice.

                                      Apps should try to be native to whichever desktop they’re running in, they shouldn’t forcefully bring their own desktop into whatever environment they’re in.

                                      No, they should not. Apps should be native to whichever desktop they were designed for. It is unreasonable to expect app developers to support the myriad of different desktops and themes (because we’d have to include themes then, too).

                                      KDE/QT apps bring their own desktop to an otherwise GNOME/GTK one. Even if they try to mimic GNOME, the result is bad at best, and we’d be better off if they didn’t try. GNOME is doing the right thing by not trying to mimic something it isn’t and then failing. It stays what it is, and so should QT apps, and we’d be free of the broken stuff that stems from apps trying to pretend they’re something they really are not.

                                      GIMP isn’t using adwaita on Windows either

                                      Last I checked, GIMP isn’t even using GTK4 yet to begin with, so it doesn’t use libadwaita anywhere. They didn’t make a windows-exception, they just didn’t port GIMP to GTK4 yet. Heck, the stable version of it isn’t even GTK3, let alone 4.

                                      1. 3

                                        help write or fund alternatives for your DE of choice.

                                        Considering the funding for open source projects is limited, this means I’ll have to try to get Gnome users to stop donating to Gnome, and instead donate for my own project. I’m not sure if you actually want that to happen (because it’d mean I’d have to actively try to defund Gnome).

                                        It’d be much better if we just had one, well-funded project that looks native in multiple DEs, than separate per-DE projects

                                        1. 5

                                          Considering the funding for open source projects is limited, this means I’ll have to try to get Gnome users to stop donating to Gnome

                                          Huh? Why? They use GNOME, why would they want to fund something else? People should help projects they use.

                                          and instead donate for my own project.

                                          Find your own users. Seeing the backlash against GNOME - usually from people not even using GNOME - suggests that there’s a sizable userbase that would be interested in having alternatives to some applications that do not have non-GNOME alternatives. Perhaps that’s an opportunity there.

                                          1. 1

                                            Huh? Why? They use GNOME, why would they want to fund something else? People should help projects they use.

                                            The absolute majority of GNOME users only use it because they either don’t know of alternatives, or because they have to use a few GNOME apps because there’s no alternative. If true alternatives existed, a lot of people would stop using and funding GNOME.

                                            (This sentence was written by me using Budgie, which uses parts of GNOME, solely because I need to run a GTK based desktop just for one single app that doesn’t properly work otherwise. If I could, I’d never touch Gnome or GTK, ever)

                                            1. 5

                                              The absolute majority of GNOME users only use it because

                                              Do you have a credible source for that? Because my experience is the exact opposite. Every GNOME user I know (with wildly varying backgrounds) is aware of alternatives, yet they use GNOME and are, in general, happy with it.

                                              If true alternatives existed, a lot of people would stop using and funding GNOME.

                                              I very much doubt that people who otherwise wouldn’t use GNOME, would fund it.

                                              solely because I need to run a GTK based desktop just for one single app that doesn’t properly work otherwise

                                              I very much doubt that there’s a GTK app that cannot be used unless you run a full GTK desktop. Link, please?

                                              1. 2

                                                n=1, but the reason I threw up my hands and stuck with GNOME on Fedora 36 was that my custom theme wasn’t entirely broken. Some apps use libadwaita and stick out like a sore thumb, though at least I can still move the window buttons to the left, which is where I prefer them (for now?), but others still use the theme, and my system-wide font choices are apparently still honoured (again, for now?). None of this changes the fact that I think their UI choices are wasteful of space, and I find some of their design decisions personally suspect. I tolerate it, but I’m increasingly not happy with it, and eventually it will exceed my daily inertia. I have a custom window manager I’ve been working on, and I might be able to make KDE into enough of what I want that I have alternatives.

                                                1. 6

                                                  You dislike the direction GNOME is taking then. That’s fine, and understandable: neither the looks, nor their approach suits everybody. Thankfully, in the free software world, there are alternatives.

                                                  I hate that KDE has so many knobs, it’s overwhelming and distracting. The default theme looks horrible too, in my opinion. So I don’t use KDE, because I accept that I’m not their target audience. I don’t complain about it, I don’t hate on them, I am genuinely happy they take a different approach, because then other people can choose them.

                                                  Sometimes the DE we use takes a different direction than one would like. That’s a bit of a bummer, but it happens. We move on, and find something else, because we can. Or fork, that happened too before, multiple times.

                                                  Taking a different direction is not wrong. It’s just a different direction, is all. You may not like it, there are plenty who do.

                                        2. 1

                                          The macOS and Windows widget sets aren’t even portable.

                                          Tell that to the wine darlings.

                                          1. 3

                                            Apps running under Wine stick out like a sore thumb if they’re not basically compositing everything, in which case it’s at least on purpose. I believe that was Algernon’s point.

                                            1. 1

                                              Then every widget set is cross platform, because we can just run stuff in emulators. Good luck trying to look native then!

                                              1. 4

                                                run stuff in emulators

                                                wine is not an emulator. It is an implementation of the Windows library on top of Linux. It is exactly as equally “native” as GTK and Qt, which are also just libraries implemented on top of Linux.

                                                The only question is what collection of applications you prefer. That’s really how native is defined on the linux desktop - that it fits in with the other things you commonly use.

                                          2. 3

                                            I mean you’re the one choosing to use a Gnome app. “A Gnome app looks like a Gnome app” is, at its core, something that makes sense imo.

                                            That said I would like for there to be more unification on the low hanging fruit.

                                        3. 10

                                          It’s not “spite” - there are a million Linux desktops for tinkering and breaking. Give “normies” something productive and usable in the meantime and they might not all neglect what could be the best platform for their purposes. I use Gnome 4(?) on Wayland and it’s great - I had it looking about as clean as macOS, without the ugly icons, in like 10 minutes. Real geeks waste their time in the terminal anyway, not customising it. (:p)

                                          1. 6

                                            It’s not “spite”

                                            Well, what is it then? For decades GNOME had flexibility, users created horribly broken themes and everyone was more or less happy. GNOME was happy to have users. Users were happy they had freedom to do whatever. Yes, not everything was perfect. Custom widgets were mostly broken, accessibility was lacking, etc.

                                            As I said, GNOME’s heart is in the right place in wanting a working/accessible default, but does it have to be at the expense of flexibility? OP presents it as if there are only two options: either we let users do whatever, or we have a good, nice-looking theme. And the main driving force behind the decision to remove configurability was distros having a bad default theme.

                                            I think GNOME is completely misguided in their approach. Instead of creating a good, pretty, accessible default theme and telling people “use this if you want a good, pretty, accessible theme”, they decided they won’t let distros break their default theme and lump users into the distro category. It goes completely against the spirit of FOSS. Instead of creating better options for users they chose to remove options.

                                          2. 8

                                            It seems GNOME’s building for a wide audience of “normies” while their actual users are “geeks”. Their heart is in the right place in wanting an accessible and nice-looking UI, but they completely miss what their users want. They want freedom to tinker and break their stuff at the expense of accessibility and nice UI.

                                            I mean, technical professionals are trying to get their job done. Give me a desktop that works well, and I don’t want to touch it beyond using it. I want to work with compilers, not window managers.

                                            1. 6

                                              Give me a desktop that works well, and I don’t want to touch it beyond using it. I want to work with compilers, not window managers.

                                              I’ve said before that this is why Apple ended up being the manufacturer of the default “developer laptop”. They never really set out to do that, they just wanted to make nice and powerfully-spec’d machines targeting a broad “pro” market. But as a result of accidents of their corporate history, they ended up doing what no Linux distro vendor ever managed: ship something that works well and is Unix-y enough for developers at the same time.

                                              I ran various Linux distros as my primary desktop operating system for much of the 00s, and I know my first experience with an Apple laptop and OS X was a breath of fresh air.

                                          1. 1

                                            Is it just me, or can I not interact with it at all? There’s allegedly a text box to enter Lean into, but for me, it’s totally empty. I can’t seem to get it to work on Firefox, Chrome, or Safari.

                                            1. 3

                                                BSD make is great for small projects which don’t have a lot of files and do not have any compile-time options. For larger projects in which you want to enable/disable options at compilation time, you might have to use a more complete build system.

                                              Here’s the problem: Every large project was once a small project. The FreeBSD build system, which is built on top of bmake, is an absolute nightmare to use. It is slow, impossible to modify, and when it breaks it’s completely incomprehensible trying to find out why.

                                              For small projects, a CMake build system is typically 4-5 lines of CMake, so bmake isn’t really a win here, but CMake can grow a lot bigger before it becomes an unmaintainable mess and it’s improving all of the time. Oh, and it can also generate the compile_commands.json that your LSP implementation (clangd or whatever) uses to do syntax highlighting. I have never managed to make this work with bmake (@MaskRay published a script to do it but it never worked for me).
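
                                                For scale, the small-project case really is tiny. A sketch (the project and file names are made up) - the entire CMakeLists.txt:

                                                  # CMakeLists.txt for a toy C project
                                                  cmake_minimum_required(VERSION 3.16)
                                                  project(hello C)
                                                  add_executable(hello main.c)

                                                Configuring with the export flag then gives clangd its compilation database:

                                                  cmake -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
                                                  cmake --build build
                                                  ln -sf build/compile_commands.json .   # so clangd finds it at the project root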

                                              1. 17

                                                The problem is that cmake is actually literal hell to use. I would much rather use even the shittiest makefile than cmake.

                                                Some of the “modern” cmake stuff is slightly less horrible. Maybe if the cmake community had moved on to using targets, things would’ve been a little better. But most of the time, you’re still stuck with ${FOO_INCLUDE_DIRS} and ${FOO_LIBRARIES}. And the absolutely terrible syntax and stringly typed nature won’t ever change.

                                                Give me literally any build system – including an ad-hoc shell script – over cmake.

                                                1. 6

                                                    Agreed. Personally, I also detest meson/ninja in the same way. The only things that I can tolerate writing AND using are BSD makefiles, POSIX makefiles, and plan9’s mkfiles.

                                                  1. 2

                                                    You are going to have a very fun time dealing with portability. Shared libraries, anyone?

                                                    1. 2

                                                      Not really a problem, pkg-config tells your makefile what cflags and ldflags/ldlibs to add.
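
                                                        To illustrate with a hypothetical libzstd dependency, the flags come straight out of the library’s .pc file, so the compile line itself stays portable:

                                                          # what pkg-config feeds the makefile (library name is just an example)
                                                          cc $(pkg-config --cflags libzstd) -c prog.c
                                                          cc -o prog prog.o $(pkg-config --libs libzstd)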

                                                      1. 2

                                                        Using it is less the problem - creating shared libraries is much harder. Every linker is weird and special, even with ccld. As someone dealing with AIX in a dayjob…

                                                  2. 5

                                                    The problem is that cmake is actually literal hell to use. I would much rather use even the shittiest makefile than cmake.

                                                    Yes. The last time I seriously used cmake for cross compiles (trying to build third-party non-android code to integrate into an Android app) I ended up knee deep in strace to figure out which of the hundreds of thousands of lines of cmake scripts were being included from the system cmake directory, and then using gdb on a debug build of cmake to try to figure out where it was constructing the incorrect strings, because I had given up on actually being able to understand the cmake scripts themselves, and why they were double concatenating the path prefix.

                                                    Using make for the cross compile was merely quite unpleasant.

                                                    Can we improve on make? Absolutely. But cmake is not that improvement.

                                                    1. 2

                                                      What were you trying to build? I have cross-compiled hundreds of CMake things and I don’t think I’ve ever needed to do anything other than give it a cross-compile toolchain file on the command line. Oh, and that was cross-compiling for an experimental CPU, so no off-the-shelf support from anything, yet CMake required me to write a 10-line text file and pass it on the command line.
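
                                                        For anyone who hasn’t seen one, roughly what such a toolchain file looks like - the paths and target triple here are invented:

                                                          # toolchain.cmake: a minimal cross-compile description
                                                          set(CMAKE_SYSTEM_NAME Linux)
                                                          set(CMAKE_SYSTEM_PROCESSOR aarch64)
                                                          set(CMAKE_C_COMPILER /opt/cross/bin/aarch64-linux-gnu-gcc)
                                                          set(CMAKE_CXX_COMPILER /opt/cross/bin/aarch64-linux-gnu-g++)
                                                          set(CMAKE_FIND_ROOT_PATH /opt/cross/sysroot)
                                                          set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
                                                          set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
                                                          set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)

                                                        It then gets passed on the configure line with -DCMAKE_TOOLCHAIN_FILE=/path/to/toolchain.cmake.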

                                                      1. 2

                                                          This was in 2019-ish, so I don’t remember which of the ported packages it was. It may have been some differential equation packages, opencv, or some other packages. There was some odd interaction between their cmake files and the android toolchain’s cmake helpers that led to duplicated build directory prefixes like:

                                                         /home/ori/android/ndk//home/ori/android/ndk/$filepath
                                                        

                                                        which was nearly impossible to debug. The fix was easy once I found the mis-expanded variable, but tracking it down was insanely painful. The happy path with cmake isn’t great but the sad path is bad enough that I’m not touching it in any new software I write.

                                                        1. 2

                                                          The happy path with cmake isn’t great but the sad path is bad enough that I’m not touching it in any new software I write.

                                                          The sad path with bmake is far sadder. I spent half a day trying to convince a bmake-based build system to compile the output from yacc as C++ instead of C before giving up. There was some magic somewhere but I have no idea where and a non-trivial bmake build system spans dozens of include files with syntax that looks like line noise. I’ll take add_target_option over ${M:asdfasdfgkjnerihna} any day.

                                                          1. 3

                                                            You’re describing the happy path.

                                                            Cmake ships with just over 112,000 lines of modules, and it seems any non trivial project gets between hundreds and thousands of lines of additional cmake customizations and copy-pasted modules on top of that. And if anything goes wrong in there, you need to get in and debug that code. In my experience, it often does.

                                                              With make, it’s usually easier to debug because there just isn’t as much crap pulled in. And even when there is, I can hack around it with a specific, ad-hoc target. With cmake, if something goes wrong deep inside it, I expect to spend a week getting it to work. And because I only touch cmake if I have to, I usually don’t have the choice of giving up – I just have to deal with it.

                                                            I’m very happy that these last couple years, I spend much of my paid time writing Go, and not dealing with other people’s broken build systems.

                                                            1. 1

                                                              Cmake ships with just over 112,000 lines of modules, and it seems any non trivial project gets between hundreds and thousands of lines of additional cmake customizations and copy-pasted modules on top of that.

                                                                The core bmake files are over 10KLoC (not counting the built-in rules) and do far less than the CMake standard library (which includes cross compilation, finding dependencies using various tools, and so on). They are not namespaced, because bmake does not have any notion of scopes for variables, and so any one of them may define some variable that another consumes, with no warning when they collide.

                                                                With make, it’s usually easier to debug because there just isn’t as much crap pulled in.

                                                              That is not my experience with any large project that I’ve worked on with a bmake or GNU make build system. They build some half-arsed analogue of a load of the CMake modules and, because there’s no notion of variable scope in these systems, everything depends on some variable that is set somewhere in a file that’s included at three levels of indirection by the thing that includes the Makefile for the component that you’re currently looking at. Everything is spooky action at a distance. You can’t find the thing that’s setting the variable, because it’s constructing the variable name by applying some complex pattern to the string. When I do find it, instead of functions with human-readable names, I discover that it’s a line like _LDADD_FROM_DPADD= ${DPADD:R:T:C;^lib(.*)$;-l\1;g} (actual line from a bmake project, far from the worst I’ve seen, just the first one that jumped out opening a random .mk file), which is far less readable than anything I’ve ever read in any non-Perl language.

                                                                In contrast, modern CMake has properties on targets and the core modules work with this kind of abstraction. There are a few places where some global variables still apply, but these are easy to find with grep. Everything else is scoped. If a target is doing something wrong, then I need to look at how that target is constructed. It may be as a result of some included modules, but finding the relevant part is usually easy.

                                                              The largest project that I’ve worked on with a CMake build system is LLVM, which has about 7KLoC of custom CMake modules. It’s not wonderful, but it’s far easier to modify the build system than I’ve found for make-based projects a tenth the size. The total time that I’ve wasted on CMake hacking for it over the last 15 years is less than a day. The time I’ve wasted failing to get Make-based (GNU Make or bmake) projects to do what I want is weeks over the same period.

                                                    2. 3

                                                        Modern CMake is a lot better and it’s being aggressively pushed, because things like vcpkg require modern CMake, or require you to wrap your crufty CMake in something with proper exported targets for importing external dependencies.

                                                      I’ve worked on projects with large CMake infrastructure, large GNU make infrastructure, and large bmake infrastructure. I have endured vastly less suffering as a result of the CMake infrastructure than the other two. I have spent entire days trying to change things in make-based build systems and given up, whereas CMake I’ve just complained about how ugly the macro language is.

                                                      1. 2

                                                        Would you be interested to try build2? I am willing to do some hand-holding (e.g., answer “How do I ..?” questions, etc) if that helps.

                                                        To give a few points of comparison based on topics brought up in other comments:

                                                        1. The simple executable buildfile would be a one-liner like this:

                                                          exe{my-prog}: c{src1} cxx{src2}
                                                          

                                                          With the libzstd dependency:

                                                          import libs = libzstd%lib{zstd}
                                                          
                                                          exe{my-prog}: c{src1} cxx{src2} $libs
                                                          
                                                        2. Here is a buildfile from a library (Linux Kconfig configuration system) that uses lex/yacc: https://github.com/build2-packaging/kconfig/blob/master/liblkc/liblkc/buildfile

                                                        3. We have a separate section in the manual on the available build debugging mechanisms: https://build2.org/build2/doc/build2-build-system-manual.xhtml#intro-diag-debug

                                                        4. We have a collection of HOWTOs that may be of interest: https://github.com/build2/HOWTO/#readme

                                                        1. 3

                                                          I like the idea of build2. I was hoping for a long time that Jon Anderson would finish Fabrique, which had some very nice properties (merging of objects for inheriting flags, a file type in the language that was distinct from a string and could be mapped to a path or a file descriptor on invocation).

                                                          exe{my-prog}: c{src1} cxx{src2}

                                                          Perhaps it’s just me, but I really don’t find that to be great syntax. Software in general (totally plausible rule of thumb that I was told and believe) is read around 10 times more than it is written. For build systems, that’s probably closer to 100, so terse syntax scares me.

                                                            The problem I have now is ecosystem lock-in. 90% of the things that I want to depend on provide a CMake exported project. I can use vcpkg to grab thousands of libraries to statically link against and everything just works. From this example:

                                                          With the libzstd dependency:

                                                          import libs = libzstd%lib{zstd}

                                                          How does it find zstd? Does it rely on an export target that zstd exposed, a built-in package, or some other mechanism?

                                                          CMake isn’t what I want, but I can see a fairly clear path to evolving it to be what I want. I don’t see that path for replacing it with something new and for the new thing to be worth replacing CMake it would need to be an order of magnitude better for my projects and able to consume CMake exported targets from other projects (not pkg-config, which can’t even provide flags for compiler invocations for Objective-C, let alone handle any of the difficult configuration cases). If it can consume CMake exported targets, then my incentive for libraries is to use CMake because then I can export a target that both it and CMake can consume.
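
                                                            To make the vcpkg + CMake workflow concrete, a sketch (the port name and vcpkg checkout path are assumptions on my part):

                                                              # grab the dependency once; vcpkg builds it and exports CMake config for it
                                                              ~/vcpkg/vcpkg install zstd

                                                              # point CMake at vcpkg's toolchain file so find_package() sees the installed port
                                                              cmake -B build -DCMAKE_TOOLCHAIN_FILE=~/vcpkg/scripts/buildsystems/vcpkg.cmake
                                                              cmake --build build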

                                                          1. 2

                                                            Perhaps it’s just me, but I really don’t find that to be great syntax. Software in general (totally plausible rule of thumb that I was told and believe) is read around 10 times more than it is written. For build systems, that’s probably closer to 100, so terse syntax scares me.

                                                            No, it’s not just you, this is a fairly common complaint from people who first see it but interestingly not from people who used build2 for some time (we ran a survey). I believe the terse syntax is beneficial for common constructs (and what I’ve shown is definitely one of the most common) because it doesn’t get in the way when trying to understand more complex buildfiles. At least this has been my experience.

                                                            How does it find zstd? Does it rely on an export target that zstd exposed, a built-in package, or some other mechanism?

                                                              That depends on whether you are using just the build system or the build system and the package manager stack. If just the build system, then you can either specify the development build to import explicitly (e.g., config.import.libzstd=/tmp/libzstd), bundle it with your project (in which case it gets found automatically), or, failing all of the above, build2 will try to find the installed version (and extract additional options/libraries from pkg-config files, if any).

                                                            If you are using the package manager, then by default it will download and build libzstd from the package (but you can also instruct the package manager to use the system-installed version if you prefer). We happen to have the libzstd package sitting in the submission queue: https://queue.cppget.org/libzstd

                                                              But that’s a pretty vanilla case that most tools can handle these days. The more interesting one is lex/yacc from the buildfile I linked. It uses the same import mechanism to find the tools:

                                                            import! [metadata] yacc = byacc%exe{byacc}
                                                            import! [metadata] flex = reflex%exe{reflex}
                                                            

                                                              And we have them packaged: https://cppget.org/reflex and https://cppget.org/byacc. And the package manager will download and build them for you. And it’s smart enough to know to do it in a separate host configuration so that they can still be executed during the build even if you are cross-compiling. This works auto-magically, even on Windows. (Another handy tool that can be used like that is xxd: https://cppget.org/xxd).

                                                            CMake isn’t what I want, but I can see a fairly clear path to evolving it to be what I want. I don’t see that path for replacing it with something new and for the new thing to be worth replacing CMake it would need to be an order of magnitude better for my projects.

                                                            I am clearly biased but I think it’s actually not that difficult to be an order of magnitude better than CMake, it’s just really difficult to see if all you’ve experienced is CMake (and maybe some make-based projects).

                                                              Firstly, CMake is a meta build system which closes the door on quite a few things (for an example, check how CMake plans to support C++20 modules; in short it’s a “let’s pre-scan the world” approach). Then, on one side of this meta build system sandwich you have a really primitive build model with the famous CMake macro language. On the other you have the lowest common denominator problem of the underlying build systems. Even arguably the best of them (ninja) is quite a basic tool. The result is that every new piece of functionality, say support for a new source code generator, has to be implemented in this dreaded macro language with an eye on the underlying build tools. In build2, in contrast, you can implement your own build system module in C++ and the toolchain will fetch, build, and load it for you automatically (pretty much the same as the lex/yacc tools above). Here is a demo I’ve made of a fairly elaborate source code generator setup for a user (reportedly it took a lot of hacking around to support in CMake and was the motivation for them to switch to build2): https://github.com/build2/build2-dynamic-target-group-demo/

                                                            1. 3

                                                              No, it’s not just you, this is a fairly common complaint from people who first see it but interestingly not from people who used build2 for some time (we ran a survey)

                                                              That’s a great distinction to make. Terse syntax is fine for operations that I will read every time I look in the file, but it’s awful for things that I’ll see once every few months. I don’t know enough about build2 to comment on where it falls on this spectrum.

                                                              For me, the litmus test of a build systems is one that is very hard to apply to new ones: If I want to modify a build system for a large project that has aggregated for 10-20 years, how easy is it for me to understand their custom parts? CMake is not wonderful here, but generally the functions and macros are easy to find and to read once I’ve found them. bmake is awful because its line-noise syntax is impossible to search for (how do you find what the M modifier in an expression does in the documentation? “M” as a search string gives a lot of false positives!).

                                                                That depends on whether you are using just the build system or the build system and the package manager stack. If just the build system, then you can either specify the development build to import explicitly (e.g., config.import.libzstd=/tmp/libzstd), bundle it with your project (in which case it gets found automatically), or, failing all of the above, build2 will try to find the installed version (and extract additional options/libraries from pkg-config files, if any).

                                                              My experience with pkg-config is not very positive. It just about works for trivial options but is not sufficiently expressive for even simple things like different flags for debug and release builds, let alone anything with custom configuration options.

                                                              If you are using the package manager, then by default it will download and build libzstd from the package (but you can also instruct the package manager to use the system-installed version if you prefer). We happen to have the libzstd package sitting in the submission queue: https://queue.cppget.org/libzstd

                                                                That looks a lot more promising, especially being able to use the system-installed version. Do you provide some ontology that allows systems to map build2 package names to installed packages, so that someone packaging a project that I build with build2 can do so without having to do this translation for everything that they package?

                                                                And we have them packaged: https://cppget.org/reflex and https://cppget.org/byacc. And the package manager will download and build them for you. And it’s smart enough to know to do it in a separate host configuration so that they can still be executed during the build even if you are cross-compiling. This works auto-magically, even on Windows. (Another handy tool that can be used like that is xxd: https://cppget.org/xxd).

                                                              This is a very nice property, though one that I already get from vcpkg + CMake.

                                                              Firstly, CMake is a meta build system which closes the door on quite a few things (for an example, check how CMake plans to support C++20 modules; in short it’s a “let’s pre-scan the world” approach). Then, on one side of this meta build system sandwich you have a really primitive build model with the famous CMake macro language.

                                                              The language is pretty awful, but the underlying object model doesn’t seem so bad and is probably something that could be exposed to another language with some refactoring (that’s probably the first thing that I’d want to do if I seriously spent time trying to improve CMake).

                                                                In build2, in contrast, you can implement your own build system module in C++ and the toolchain will fetch, build, and load it for you automatically (pretty much the same as the lex/yacc tools above). Here is a demo I’ve made of a fairly elaborate source code generator setup for a user (reportedly it took a lot of hacking around to support in CMake and was the motivation for them to switch to build2):

                                                              That’s very interesting and might be a good reason to switch for a project that I’m currently working on.

                                                              I have struggled in the past with generated header files with CMake, because the tools can build the dependency edges during the build, but I need a coarse-grained rule for the initial build that says ‘do the step that generates these headers before trying to build this target’ and there isn’t a great way of expressing that this is a fudge and so I can break that arc for incremental builds. Does build2 have a nice model for this kind of thing?

                                                              1. 2

                                                                If I want to modify a build system for a large project that has aggregated for 10-20 years, how easy is it for me to understand their custom parts?

                                                                In build2, there are two ways to do custom things: you can write ad hoc pattern rules in a shell-like language (similar to make pattern rules, but portable and higher-level) and everything else (more elaborate rules, functions, configuration, etc) is written in C++(14). Granted C++ can be made an inscrutable mess, but at least it’s a known quantity and we try hard to keep things sane (you can get a taste of what that looks like from the build2-dynamic-target-group-demo/libbuild2-compiler module I linked to earlier).

                                                                My experience with pkg-config is not very positive. It just about works for trivial options but is not sufficiently expressive for even simple things like different flags for debug and release builds, let alone anything with custom configuration options.

                                                                pkg-config has its issues, I agree, plus most build systems don’t (or can’t) use it correctly. For example, you wouldn’t try to cram both debug and release builds into a single library binary (e.g., .a or .so; well, unless you are Apple, perhaps) so why try to cram both debug and release (or static/shared for that matter) options into the same .pc file?

                                                                  Plus, besides the built-in values (Cflags, etc), pkg-config allows for free-form variables. So you can extend the format however you see fit. For example, in build2 we use the bin.whole variable to signal that the library should be linked in the “whole archive” mode (which we then translate into the appropriate linker options). Similarly, we’ve used a pkg-config variable to convey C++20 modules information and it also panned out quite well. And we now convey custom C/C++ library metadata this way.

                                                                So the question is do we subsume all the existing/simple cases and continue with pkg-config by extending its format for more advanced cases or do we invent a completely new format (which is what WG21’s SG15 is currently trying to do)?

                                                                Do you provide some ontology that allows systems to map build2 package names to installed packages, so that someone packaging a project that I build with build2 doesn’t have to do this translation for everything that they package?

                                                                Not yet, but we had ideas along these lines though in a different direction: we were thinking of each build2 package also providing a mapping to the system package names for the commonly used distributions (e.g., libzstd-dev for Debian/Ubuntu, libzstd-devel for Fedora/etc) so that the build2 package manager can query the installed package’s version (e.g., to make sure the version constraints are satisfied) or invoke the system package manager to install the system package. If we had such a mapping, it would also allow us to achieve what you are describing.

                                                                This is a very nice property, though one that I already get from vcpkg + CMake.

                                                                Interesting. So you could ask vcpkg to build you a library without even knowing it has build-time dependencies on some tools, and vcpkg will automatically create a suitable host configuration, build those tools there, and pass them to the library’s build so that it can execute them during the build?

                                                                If so, that’s quite impressive. For us, the “create a suitable host configuration” part turned into a particularly deep rabbit hole. What is “suitable”? In our case we’ve decided to use the same compiler/options as what was used to build build2. But what if the PATH environment variable has changed and now clang++ resolves to something else? So we had to invent a notion of hermetic build configurations where we save all the environment variables that affect every tool involved in the build (like CPATH and friends). One nice offshoot of this work is that now in non-hermetic build configurations (which are the default), we detect changes to the environment variables in addition to everything else (sources, options, compiler versions, etc).

                                                                I have struggled in the past with generated header files with CMake, because the tools can build the dependency edges during the build, but I need a coarse-grained rule for the initial build that says ‘do the step that generates these headers before trying to build this target’ and there isn’t a great way of expressing that this is a fudge and so I can break that arc for incremental builds. Does build2 have a nice model for this kind of thing?

                                                                Yes, in build2 you normally don’t need any fudging, the C/C++ compile rules are prepared to deal with generated headers (via -MG or similar). There are use-cases where it’s impossible to handle the generated headers fully dynamically (for example, because the compiler may pick up a wrong/outdated header from another search path) but this is also taken care of. See this article for the gory details: https://github.com/build2/HOWTO/blob/master/entries/handle-auto-generated-headers.md
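
                                                                For anyone unfamiliar with the mechanism, here is a minimal illustration of the underlying compiler behaviour (plain GCC/Clang, nothing build2-specific): when generating dependency information, -MG records headers that do not exist yet instead of failing, which is what lets the build system discover generated headers on the fly.

                                                                # prog.cc includes "foo.h", which has not been generated yet.
                                                                # Without -MG this fails with 'foo.h: No such file or directory';
                                                                # with -MG the missing header is simply recorded as a dependency.
                                                                c++ -MM -MG prog.cc
                                                                # => prog.o: prog.cc foo.h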

                                                                That’s very interesting and might be a good reason to switch for a project that I’m currently working on.

                                                                As I mentioned earlier, I would be happy to do some hand-holding if you want to give it a try. Also, build2 is not exactly simple and has a very different mental model compared to CMake. In particular, CMake is a “mono-repo first” build system while build2 is decidedly “multi-repo first”. As a result, some things that are often taken as gospel by CMake users (like the output being a subdirectory of the source directory) is blasphemy in build2. So there might be some culture shock.

                                                                BTW, in your earlier post you’ve mentioned Fabrique by Jon Anderson but I can’t seem to find any traces of it. Do you have any links?

                                                                1. 2

                                                                  Granted C++ can be made an inscrutable mess, but at least it’s a known quantity and we try hard to keep things sane (you can get a taste of what that looks like from the build2-dynamic-target-group-demo/libbuild2-compiler module I linked to earlier).

                                                                  This makes me a bit nervous because it seems very easy for non-portable things to creep in with this. To give a concrete example, if my build environment is a cloud service then I may not have a local filesystem and anything using the standard library for file I/O will be annoying to port. Similarly, if I want to use something like Capsicum to sandbox my build then I need to ensure that descriptors for files read by these modules are provided externally.

                                                                  It looks as if the abstractions there are fairly clean, but I wonder if there’s any way of linting this. It would be quite nice if this could use WASI as the host interface (even if compiling to native code) so that you had something that at least can be made to run anywhere.

                                                                  pkg-config has its issues, I agree,

                                                                  My bias against pkg-config originates from trying to use it with Objective-C. I gave up trying to add --objc-flags and --objcxx-flags options because the structure of the code made this kind of extension too hard. Objective-C is built with the same compiler as C/C++ and takes mostly the same options, yet it wasn’t possible to support. This made me very nervous about whether the system could adapt to any changes in requirements from C/C++, with no chance of it providing information for any other language. This was about 15 years ago, so it may have improved since then.

                                                                  Not yet, but we had ideas along these lines though in a different direction: we were thinking of each build2 package also providing a mapping to the system package names for the commonly used distributions

                                                                  That feels back to front because you’re traversing the graph in the opposite direction to the edge that must exist. Someone packaging libFoo for their distribution must know where libFoo comes from and so is in a position to maintain this mapping (we could fairly trivially automate it from the FreeBSD ports system for any package that we build from a cppget source, for example). In contrast, the author of a package doesn’t always know where things come from here. I’ve looked on repology at some of my code and discovered that I haven’t even heard of a load of the distributions that package it, so expecting me to maintain a list of those (and keep it up to date with version information) sounds incredibly hard and likely to lead to a two-tier system (implicit in your use of the phrase ‘commonly used distributions’) where building on Ubuntu and Fedora is easy, building on less-popular targets is harder.

                                                                  Interesting. So you could ask vcpkg to build you a library without even knowing it has build-time dependencies on some tools, and vcpkg will automatically create a suitable host configuration, build those tools there, and pass them to the library’s build so that it can execute them during the build?

                                                                  Yes, but there’s a catch: vcpkg runs its builds as part of the configure stage, not as part of the build stage. This means that running cmake may take several minutes, whereas then running ninja completes in a second or two. If you modify vcpkg.json then this will force CMake to re-run and that will cause the packages to re-build. vcpkg packages have a notion of host tools, which are built with the triplet for your host configuration and are then exposed for the rest of the build. There are some known issues with it, so they might be starting down the same rabbit hole that you ended up in.
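
                                                                  For reference, this is roughly what the manifest side of that looks like (from memory, so the schema details may be slightly off; the package names are just an example). Marking a dependency with "host": true is what requests a host-triplet build so its binaries can be run during the build:

                                                                  {
                                                                    "name": "my-prog",
                                                                    "version": "0.1.0",
                                                                    "dependencies": [
                                                                      { "name": "protobuf", "host": true },
                                                                      "zstd"
                                                                    ]
                                                                  }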

                                                                  Yes, in build2 you normally don’t need any fudging, the C/C++ compile rules are prepared to deal with generated headers (via -MG or similar).

                                                                  It’s the updating that I’m particularly interested in. Imagine that I have a make-headers build step that has sub-targets that generate foo.h and bar.h and then a step for compiling prog.cc, which includes foo.h. On the first (non-incremental) build, I want the compile step that consumes prog.cc to depend on make-headers (big hammer, so that I don’t have to track which generated headers my prog.cc depends on). But after that I want the compiler to update the rule for prog.cc so that it depends only on foo.h. I’ve managed to produce some hacks that do this in CMake but they’re ugly and fragile. I’d love to have some explicit support for over-approximate dependencies that will be fixed during the first build. bmake’s meta mode does this by using a kernel module to watch the files that the compiler process reads and dynamically updating the build rules to depend on those. This has some nice side effects, such as causing a complete rebuild if you upgrade your compiler or a shared library that the compiler depends on.

                                                                  Negative dependencies are a separate (and more painful problem).

                                                                  As I mentioned earlier, I would be happy to do some hand-holding if you want to give it a try. Also, build2 is not exactly simple and has a very different mental model compared to CMake. In particular, CMake is a “mono-repo first” build system while build2 is decidedly “multi-repo first”. As a result, some things that are often taken as gospel by CMake users (like the output being a subdirectory of the source directory) is blasphemy in build2. So there might be some culture shock.

                                                                  All of my builds are done from a separate ZFS dataset that has sync turned off, so out-of-tree builds are normal for me, but I’ve not had any problems with that in CMake. One of the projects that I’m currently working on looks quite a lot like a cross-compile SDK and so build2 might be a good fit (we provide some build tools and components and want consumers to pick up our build system components). I’ll do some reading and see how hard it would be to port it over to build2. It’s currently only about a hundred lines of CMake, so not so big that a complete rewrite would be painful.

                                                                  1. 1

                                                                    This makes me a bit nervous because it seems very easy for non-portable things to creep in with this.

                                                                    These are interesting points that admittedly we haven’t thought much about yet. But there are plans to support distributed compilation and caching which, I am sure, will force us to think this through.

                                                                    One thing that I have been thinking about lately is how much logic should we allow one to put in a rule (since, being written in C++, there is not much that cannot be done). In other words, should rules be purely glue between the build system and the tools that do the actual work (e.g., generate some source code) or should we allow the rules to do the work themselves without any tools? To give a concrete example, it would be trivial in build2 to implement a rule that provides the xxd functionality without any external tools.

                                                                    Either way I think the bulk of the rules will still be the glue type simply because nobody will want to re-implement protoc or moc directly in the rule. Which means the problem is actually more difficult: it’s not just the rules that you need to worry about, it’s also the tools. I don’t think you will easily convince many of them to work without a local filesystem.

                                                                    That feels back to front because you’re traversing the graph in the opposite direction to the edge that must exist. Someone packaging libFoo for their distribution must know where libFoo comes from and so is in a position to maintain this mapping […]

                                                                    From this point of view, yes. But consider also this scenario: whoever is packaging libFoo for, say, Debian is not using build2 (because libFoo upstream, say, still uses CMake) and so has no interest in maintaining this mapping.

                                                                    Perhaps this should just be a separate registry where any party (build2 package author, distribution package author, or an unrelated third party) can contribute the mapping. This will work fairly well for archive-based package repositories where we can easily merge this information into the repository metadata, but not so well for git-based ones where things are decentralized.

                                                                    Imagine that I have a make-headers build step that has sub-targets that generate foo.h and bar.h and then a step for compiling prog.cc, which includes foo.h. On the first (non-incremental) build, I want the compile step that consumes prog.cc to depend on make-headers (big hammer, so that I don’t have to track which generated headers my prog.cc depends on). But after that I want the compiler to update the rule for prog.cc so that it depends only on foo.h.

                                                                    You don’t need such “big hammer” aggregate steps in build2 (unless you must, for example, because the tool can only produce all the headers at once). Here is a concrete example:

                                                                    hxx{*}: extension = h
                                                                    
                                                                    cxx.poptions += "-I$out_base" "-I$src_base"
                                                                    
                                                                    gen = foo.h bar.h
                                                                    
                                                                    ./: exe{prog1}: cxx{prog1.cc} hxx{$gen}
                                                                    ./: exe{prog2}: cxx{prog2.cc} hxx{$gen}
                                                                    
                                                                    hxx{foo.h}:
                                                                    {{
                                                                      echo '#define FOO 1' >$path($>)
                                                                    }}
                                                                    
                                                                    hxx{bar.h}:
                                                                    {{
                                                                      echo '#define BAR 1' >$path($>)
                                                                    }}
                                                                    

                                                                    Where prog1.cc looks like this (in prog2.cc substitute foo with bar):

                                                                    #include "foo.h"
                                                                    
                                                                    int main ()
                                                                    {
                                                                      return FOO;
                                                                    }
                                                                    

                                                                    While this might look a bit impure (why does exe{prog1} depend on bar.h even though none of its sources use it), this works as expected. In particular, given a fully up-to-date build, if you remove foo.h, only exe{prog1} will be rebuilt. The mental model here is that the headers you list as prerequisites of an executable or library are a “pool” from which its sources can “pick” what they need.

                                                                    I’ll do some reading and see how hard it would be to port it over to build2. It’s currently only about a hundred lines of CMake, so not so big that a complete rewrite would be painful.

                                                                    Sounds good. If this is public (or I can be granted access), I could even help.

                                                                    1. 1

                                                                      Either way I think the bulk of the rules will still be the glue type simply because nobody will want to re-implement protoc or moc directly in the rule. Which means the problem is actually more difficult: it’s not just the rules that you need to worry about, it’s also the tools. I don’t think you will easily convince many of them to work without a local filesystem.

                                                                      That’s increasingly a problem. There was a post here a few months back where someone had built clang as an AWS Lambda. I expect a lot of tools in the future will end up becoming things that can be deployed on FaaS platforms, and then you really want the build system to understand how to translate between two namespaces (for example, to provide a compiler with a json dictionary of name to hash mappings for a content-addressable filesystem).

                                                                      I forgot to provide you with a link to Fabrique last time. I worked a bit on the design but never had time to do much implementation and Jon got distracted by other projects. We wanted to be able to run tools in Capsicum sandboxes (WASI picked up the Capsicum model, so the same requirements would apply to a WebAssembly/WASI FaaS service): the environment is responsible for opening files and providing descriptors into the tool’s world. This also has the nice property for a build system that the dependencies are, by construction, accurate: anything for which you didn’t pass in a file descriptor cannot be accessed by the task (though you can pass in directory descriptors for include directories as a coarse over-approximation).

                                                                      From this point of view, yes. But consider also this scenario: whoever is packaging libFoo for, say, Debian is not using build2 (because libFoo upstream, say, still uses CMake) and so has no interest in maintaining this mapping.

                                                                      I don’t think that person has to care; the person packaging something using libFoo needs to care, and that creates an incentive for anyone packaging C/C++ libraries to keep the mapping up to date. I’d imagine that each repo would maintain this mapping. That’s really the only place where I can imagine that it can live without getting stale.

                                                                      I’m more familiar with the FreeBSD packaging setup than Debian, so there may be some key differences. FreeBSD builds a new package set from the top of the package tree every few days. There’s a short lag (typically 1-3 days) between pushing a version bump to a port and users seeing the package version. Some users stay on the quarterly branch, which is updated less frequently. If I create a port for libFoo v1.0, then it will appear in the latest package set in a couple of days and, if I time it right, in the quarterly one soon after. Upstream libFoo notices and updates their map to say ‘FreeBSD has version 1.0 and it’s called libfoo’. Now I update the port to v1.1. Instantly, the upstream mapping is wrong for anyone who is building package sets themselves. A couple of days later, it’s wrong for anyone installing packages from the latest branch. A few weeks later, it’s wrong for anyone on the quarterly branch. There is no point at which the libFoo repo can hold a map that is correct for everyone unless they have three entries for FreeBSD, and even then they need to actively watch the status of builders to get it right.

                                                                      In contrast, if I add a BUILD2_PACKAGE_NAME= and BUILD2_VERSION= line to my port (the second of which can default to the port version, so only needs setting in a few corner cases), then it’s fairly easy to add some generic infrastructure to the ports system that builds a complete map for every single packaged library when you build a package set. This will then always be 100% up to date, because anyone changing a package will implicitly update it. I presume that the Debian package builders could do something similar with something in the source package manifest.
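
                                                                      As a sketch, the port side might look something like this (BUILD2_PACKAGE_NAME and BUILD2_VERSION are hypothetical knobs I’m proposing, not something the ports framework supports today, and the usual MAINTAINER/COMMENT boilerplate is omitted):

                                                                      # devel/libfoo/Makefile (sketch)
                                                                      PORTNAME=      libfoo
                                                                      DISTVERSION=   1.1
                                                                      CATEGORIES=    devel
                                                                      
                                                                      # Hypothetical knobs consumed by generic ports infrastructure that emits
                                                                      # the build2-name -> package-name/version map while building a package set.
                                                                      BUILD2_PACKAGE_NAME=   libfoo
                                                                      BUILD2_VERSION=        ${DISTVERSION}
                                                                      
                                                                      .include <bsd.port.mk>

                                                                      The important property is that the mapping lives next to the thing that actually changes, so it can’t silently go stale.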

                                                                      Note that the mapping needs to contain versions as well as names because the version in the package often doesn’t directly correspond to the upstream version. This gets especially tricky when the packaged version carries patches that are not yet upstreamed.

                                                                      Oh, and options get more fun here. A lot of FreeBSD ports can build different flavours depending on the options that are set when building the package set. This needs to be part of the mapping. Again, this is fairly easy to drive from the port description but an immense amount of pain for anyone to try to generate from anywhere else. My company might be building a local package set that disables (or enables) an option that is the default upstream, so when I build something that uses build2 I may need to statically link a version of some library rather than using the system one, even though the default for a normal FreeBSD user would be to just depend on the package.

                                                                      While this might look a bit impure (why does exe{prog1} depend on bar.h even though none of its sources use it), this works as expected. In particular, given a fully up-to-date build, if you remove foo.h, only exe{prog1} will be rebuilt. The mental model here is that the headers you list as prerequisites of an executable or library are a “pool” from which its sources can “pick” what they need.

                                                                      That is exactly what I want, nice! It feels like a basic thing for a C/C++ build system, yet it’s something I’ve not seen well supported anywhere else.

                                                                      Sounds good. If this is public (or I can be granted access), I could even help.

                                                                      It isn’t yet, hopefully later in the year…

                                                                      Of course, the thing I’d really like to do (if I ever find myself with a few months of nothing to do) is replace the awful FreeBSD build system with something tolerable and it looks like build2 would be expressive enough for that. It has some fun things like needing to build the compiler that it then uses for later build steps, but it sounds as if build2 was designed with that kind of thing in mind.

                                                    3. 2

                                                      Not all small projects will necessarily grow into a large project. The trick is recognizing when or if the project will outgrow its infrastructure. Makefiles have a much lower conceptual burden, because Makefiles very concretely describe how you want your build system to run; but they suffer when you try to add abstractions to them, to support things like different toolchains, or creating the compilation database (I assume you’ve seen bear?). If you need your build described more abstractly (like, if you need to do different things with the dependency tree than simply build), then a different build tool will work better for you. But it can be hard to understand what the build tool is actually doing, and how it decided to do it. There’s no global answer.

                                                      1. 4

                                                        This is the CMake file that you need for a trivial C/C++ project:

                                                        cmake_minimum_required(VERSION 3.20)
                                                        project(my-prog C CXX)
                                                        add_executable(my-prog src1.c src2.cc)
                                                        

                                                        That’s it. That gives you targets to make my-prog, to clean the build, and will work on Windows, *NIX, or any other system that has a vaguely GCC or MSVC-like toolchain, supports debug and release builds, and generates a compile_commands.json for my editor to consume. If I want to add a dependency, let’s say on zstd, then it becomes:

                                                        cmake_minimum_required(VERSION 3.20)
                                                        project(my-prog C CXX)
                                                        find_package(zstd CONFIG REQUIRED)
                                                        add_executable(my-prog src1.c src2.cc)
                                                        target_link_libraries(my-prog PRIVATE zstd::libzstd_static)
                                                        

                                                        This will work with system packages, or with something like vcpkg installing a local copy of a specific version for reproducible builds.

                                                        Even for a simple project, the equivalent bmake file is about as complex and won’t let you target something like AIX or Windows without a lot more work, doesn’t support cross-compilation without some extra hoop jumping, and so on.

                                                        1. 1

                                                          The common Makefile for this use case will be more lines of code (I never use bsd.prog.mk, etc., unless I’m actually working on the OS), but I think the word “complex” here obscures something important: that a Makefile can be considered simpler due to a very simple execution model, or a CMakeLists.txt can be considered simpler since it describes the compilation process more abstractly, allowing it to do a lot more with less.

                                                          For an example of why I think Makefiles are conceptually simpler: it is just as easy to use a Makefile with custom build tools as it is to compile C code. It’s much easier to understand:

                                                          %.c : %.precursor
                                                              python my_tool.py $< -o $@
                                                          

                                                          than it is to figure out how to use https://cmake.org/cmake/help/latest/command/add_custom_command.html to similar effect; or to try to act like a first-class citizen and make add_executable work with .precursor files.
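
                                                          For comparison, a rough sketch of the add_custom_command equivalent, using the same hypothetical my_tool.py and .precursor extension as above:

                                                          # Generate gen.c from gen.precursor with the hypothetical my_tool.py.
                                                          add_custom_command(
                                                            OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/gen.c
                                                            COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/my_tool.py
                                                                    ${CMAKE_CURRENT_SOURCE_DIR}/gen.precursor
                                                                    -o ${CMAKE_CURRENT_BINARY_DIR}/gen.c
                                                            DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/gen.precursor
                                                                    ${CMAKE_CURRENT_SOURCE_DIR}/my_tool.py
                                                            COMMENT "Generating gen.c"
                                                          )
                                                          # Listing the generated file as a source is what hooks it into the build.
                                                          add_executable(my-prog main.c ${CMAKE_CURRENT_BINARY_DIR}/gen.c)

                                                          And unlike the pattern rule, this covers exactly one file; handling a whole set of .precursor files means wrapping it in a foreach() or a function, which is part of why it feels so much heavier.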

                                                      2. 2

                                                        CMake gets a lot of criticism, but I think a fair share of its problems is just that people haven’t stopped to learn the tool. It’s a second-class language for some people, just like CSS.

                                                        1. 2

                                                          There’s an association issue here too. Compiling C++ sucks; it is significantly trickier than in many other languages. The dependency ecosystem is far less automated too. Many dependencies are incorporated into a conglomerate project, and the build needs of those dependencies come along for the ride. The problems with all of these constituents are exposed as a symptom of the top-level build utility for the parent project. If cmake had made its first inroads with another language it would likely have a more nuanced reputation. Not that it doesn’t bring its own problems too, but it surely takes the blame for a lot of C++’s problems.

                                                      1. 12

                                                        Every new wave of security features causes a cry of IMPENDING DOOM like this from someone from the Linux community, and yet somehow we persevere.

                                                        I don’t have any in depth knowledge of Pluton but is it far fetched to think this will be more of the same?

                                                        1. 37

                                                          Or, depending on your perspective, every new warning sign causes a cry of IMPENDING DOOM like this, and we keep ignoring them.

                                                          The goalposts have already moved somewhat: as I’m typing the top thread right here is a bunch of people wandering off-topic to talk about how DRM is good actually, while down below a Microsoft employee is lecturing us about how it’s a basic requirement for any modern computer to run only software signed by Microsoft (but don’t worry, Microsoft currently deigns to sign software for some Linux distributors). I think both of these opinions would have been considered laughable a decade ago.

                                                          I think the reason for the gap between the reactions you’re seeing from commentators and the practical results you’re experiencing is optimism. Microsoft effectively controls what software many of us can run on our PCs now, and this step is likely to extend their reach. Yes, they currently give us permission to run quite a lot of software, but they could change their minds at any time. The permissiveness of today’s policy translates into hardware which lets us execute arbitrary code with a little effort, but auto-updating firmware seems like an effective way to close that loophole.

                                                          So some people are angry because Microsoft is establishing more control of the platform, while your world keeps turning because they haven’t exercised that control in ways that interfere with you. Maybe they never will. I’m not that optimistic.

                                                          1. 17

                                                            at the risk of invoking me-too flags, thank you, I think you perfectly captured my sentiment with this comment. microsoft absolutely cannot be trusted to continue letting people use their[0] computers as they see fit in the future.

                                                            1. “their”… who owns it? you paid for it, but I’d argue that microsoft owns it.
                                                            1. 3

                                                              Or, depending on your perspective, every new warning sign causes a cry of IMPENDING DOOM like this, and we keep ignoring them.

                                                              It is an easily-verifiable fact that the Free Software community has a history of significantly over-dramatizing security mechanisms in computing systems. For example, Richard Stallman’s own note that once was inserted into the documentation for GNU su, and which may easily be found online with the assistance of one’s preferred search mechanism, and which read:

                                                              Why GNU su does not support the `wheel’ group

                                                              (This section is by Richard Stallman.)

                                                              Sometimes a few of the users try to hold total power over all the rest. For example, in 1984, a few users at the MIT AI lab decided to seize power by changing the operator password on the Twenex system and keeping it secret from everyone else. (I was able to thwart this coup and give power back to the users by patching the kernel, but I wouldn’t know how to do that in Unix.)

                                                              However, occasionally the rulers do tell someone. Under the usual su mechanism, once someone learns the root password who sympathizes with the ordinary users, he or she can tell the rest. The “wheel group” feature would make this impossible, and thus cement the power of the rulers.

                                                              I’m on the side of the masses, not that of the rulers. If you are used to supporting the bosses and sysadmins in whatever they do, you might find this idea strange at first.

                                                              And this style of reaction has continued into the present day. The prose, and the claim to be standing for the oppressed many against the tyrannical oppressive few, is not too far off from contemporary samples which can be found in this very thread.

                                                              But the simple fact is that people are tired of being afraid of every email and text message they receive and what it might do to their computer. Tired of being afraid of what any random web page they visit might do to their computer. Tired of being afraid of just having a computing device turned on and internet-connected, lest it be taken over remotely, which remains a routine occurrence.

                                                              The only way to make malicious takeover of the system more difficult is by putting barriers in the way of anyone who would seek to take over the system, even if their purposes are not malicious. This has been the trend in many manufacturers’ computing devices and many operating system vendors’ software for many years now. So we see systems which have a “sealed” and cryptographically verified system volume. Or systems which default to sandboxing applications or restricting their access to the filesystem and sensitive APIs. Or systems which default to requiring cryptographic signatures from identified developers as a precondition of running an executable. And on and on – all of these have contributed to significant improvement in the average security of such systems in the hands of ordinary non-technical users.

                                                              And every one of them has also offered some type of toggle or other mechanism to allow a motivated and sufficiently competent user to override the default behavior.

                                                              Yet every one of them has been denounced by those who are of Stallman’s way of thinking. Every one of them, we have been assured, is, this time, the final step before the frog is boiled and the manufacturers finally stop providing a mechanism to get around the default behavior.

                                                              No sufficient explanation for why manufacturers would want to do this is ever provided. We are simply told that they are fighting some sort of “war” against some thing called “general-purpose computing”, and that the “users” must fight back. I find it extremely difficult to take such arguments seriously; they tend to require almost cartoonish levels of overt villainy on the part of manufacturers and vendors, and disregard factors like the known necessity of enticing developers to a platform in order to make it attractive to end users.

                                                              Nor is any explanation ever accepted of the tradeoffs inherent in providing a true general-use system which can serve the security needs of the vast majority of non-technical users while also not being too offensive to the sensibilities of the minority of extremely-technical users. Asking us to take on the burden of looking up how to turn off a tamper-proofing mechanism is a small thing compared to the pain and suffering that would be imposed on everyone else if all such mechanisms were done away with.

                                                              For this reason I do not accept and never will accept either the “impending doom” claims, nor the related claims of a “war” on “general-purpose computing”, and therefore I rebut any and all who advance such claims.

                                                              1. 5

                                                                I have rarely seen anyone rely as much on strawmen in a discussion.

                                                                 We already have our thread so I responded there, but I hope you can agree that it is a systemic risk to have a central point of failure, even if only in the abstract. Sure, there is no cartoon villain at the moment, but why make space for one to appear? It is not necessary.

                                                            2. 11

                                                              I don’t think this is an overreaction, or something to become complacent about. microsoft has a history of anticompetitive behavior, e.g. trying to “nuke” existing installations of linux by replacing bootloaders on update, and IIRC in the early days of secure boot they originally didn’t want to provide any way for booting alternative OSes either by disabling secure boot or using alternate keys (but, again IIRC… they were kinda forced to at least on x86.)

                                                              1. 3

                                                                Fair enough.

                                                                So what would you suggest that people do? The only thing that comes to mind for me is to vote with your wallet and ONLY buy machines that use open standards.

                                                                 That’s easier than it used to be: System76 sells coreboot-based PCs.

                                                                1. 6

                                                                  Yeah, pretty much that, since government intervention is completely out of the question (at least in the US).

                                                                   System76, Framework, Purism, HP(!), Dell, are just some of the ones I can think of off the top of my head that are selling systems that run Linux, though System76 and Purism are the only ones I know of that use coreboot…

                                                                  1. 3

                                                                    Is running Linux enough?

                                                                    Like, what if MSFT cozied up to one of the distro owners like Ubuntu and bundled all the right magic bits so it booted on Pluton chips?

                                                                    My point here is not to challenge what you’re saying at all, just that in my opinion vendor lock-in and user freedom are a sliding scale where everyone gets to choose their own comfort level.

                                                                    I like fully open systems which is why I support System76 and have a Thelio desktop from them sitting next to me here.

                                                                    However I also have a Lenovo laptop which doesn’t currently run Linux, not due to any boot level shenanigans but because of a bug in the wifi driver.

                                                                    My wife has a thoroughly locked down M1 Macbook Air, because the computer is an appliance and she would literally drop into a coma if forced to deal with the details of installing Linux on any machine :)

                                                                    1. 3

                                                                      Is running Linux enough?

                                                                      No, it’s not, but it at least (currently) demonstrates that microsoft hasn’t wrapped their tentacles around the OEM (yet).

                                                                      My point here is not to challenge what you’re saying at all, just that in my opinion vendor lock-in and user freedom are a sliding scale where everyone gets to choose their own comfort level.

                                                                      That’s a good point. I just don’t like to see companies like microsoft impose hard limits to how much you can slide on the scale. If folks want a microsoft appliance, then microsoft is already an OEM (the surface stuff), there’s no reason to impose restrictions on all OEMs that ship windows.

                                                              2. 2

                                                                Anyone remember Palladium?

                                                              1. 8

                                                                Just because no one here has mentioned Safari (that I’ve spotted):

                                                                I use Firefox on MacOS because I want it to continue to exist and, selfishly, it does work really well for me. I have no complaints at all. I don’t notice speed differences when I try other browsers, and I like the small selection of add-ons I use, most of which are probably available on other browsers.

                                                                I don’t like Google’s tracking or their near monopoly on browser engines (ironic as I did some work on the foundations of konqueror once, though not khtml itself) so I avoid Chrom(ium) unless I can’t get something to work in Firefox, which has happened once in the past five years or so.

                                                                Anyone use Safari and swear by it? I have an ad blocker for it which seems to work, and also the 1Password extension, so I could use it, but thanks to M1 and the ability to fully charge my Air from a portable external battery when needed, I’m not concerned about saving battery as much as I was. Is there a reason to use Safari once you know that Firefox exists and don’t mind installing it on each new machine?

                                                                1. 3

                                                                  Safari has always had the smoothest performance for me. It’s the only browser I use. Pedantic complaint, but simply resizing a window has visible lag on Firefox and Chrome whereas I can resize a window at 120 fps under Safari with no visible lag in page layout, etc. I use AdGuard for blocking ads and have had no issues.

                                                                  Been meaning to check out Orion as well, but haven’t been compelled enough to switch just yet.

                                                                  1. 2

                                                                    I’m fairly satisfied with Safari, but I really wish I could have straight up uBlock Origin.

                                                                    1. 1

                                                                        I try to follow the “When in Rome” approach for most native apps, browsers and tools. On my work laptop (Mac), use Safari. At home, use Firefox. On a Windows machine, use Edge or whatever. Same approach goes with (most) tooling configurations: use the defaults as much as possible. As someone who constantly reconfigures Vim, a lot (really… a lot) of time can get sunk into customizing my digital experience. Some things, like security, are uncompromising, but if my goal is to generally get things done efficiently, then reducing my setup overhead, app/tooling ecosystem, and number of cloud services is step number one.

                                                                        I always think back to an old coworker of mine whose laptop shit the bed one morning and who, by that afternoon, was back to working, on all channels, on a brand new machine. Of course, cloud backups are a thing, but sometimes it’s easier to be like water.

                                                                    1. 18

                                                                      I really don’t understand these things. A few of the online conferences during the pandemic had 3D things and they were vastly less efficient than a simple menu to navigate. I really liked GatherTown, but it explicitly gave a 2D top-down (8-bit Zelda-like) experience, which let me see a lot more of the environment than an immersive environment. The great thing about virtual environments is that they’re not limited to the constraints of real space.

                                                                      Jef Raskin wrote that games are, by design, bad UIs. The simplest UI for an interactive game is a button that you press and then win. The point of a game interface is to hide that from you and make you do things that are more difficult to accomplish your task. Any time someone designs a UI that looks like a game, there’s a good chance that I’m in for a bad experience (even with GatherTown, I’ve managed to get lost in the environment and not be able to find the room I’m supposed to go to, which wouldn’t happen with a simple hyperlinked list of meeting rooms).

                                                                      1. 7

                                                                         I have to agree (not having used these interfaces, though!). If people go to conferences, is trying to find the next room really what they want to replicate? Same with “3d offices” where avatars sit in meetings. Why would anyone want to replicate this experience?

                                                                        In a few years we will see the “metaverse” (and other 3d envs) as the culmination of the low-interest rate twenty-teens exuberance. Along with fintech and NFTs.

                                                                        1. 4

                                                                          In a few years we will see the “metaverse” (and other 3d envs) as the culmination of the low-interest rate twenty-teens exuberance. Along with fintech and NFTs.

                                                                          People have been playing MMORPGs and games like Minecraft for decades. World of Warcraft has been hugely popular and folks met lifelong friends and partners there. I think the ship has sailed on the 3d env part. NFTs and Fintech are not related to the post, but if you’re trying to be a cynical tech snarker, be my guest, that’s certainly not going away on the internet.

                                                                          1. 2

                                                                             I agree on games, I love games myself (but I don’t play MMORPGs). That’s david_chisnall’s point too: 3D works well in games, but games != work for the most part. 3D in games is not going away.

                                                                            I think Meta would be more successful marketing 3d to Facebook - where people hang out after work (unlike our cynical set, people love Facebook! it’s where their friends are) but instead they needed to show “growth potential” and highlighted a dystopian 3d workplace. And the press dutifully reported it as “the future of work”. Just like they reported NFTs to be “the future of finance”.

                                                                             I am not cynical by nature but it is obvious that a lot of the mainstream press has been hijacked by people who are very, very good at marketing bullshit.

                                                                            1. 2

                                                                              I think Meta would be more successful marketing 3d to Facebook - where people hang out after work (unlike our cynical set, people love Facebook! it’s where their friends are) but instead they needed to show “growth potential” and highlighted a dystopian 3d workplace. And the press dutifully reported it as “the future of work”. Just like they reported NFTs to be “the future of finance”.

                                                                              But this has nothing to do with Meta. This is Mozilla Hubs, a 3D room project designed to run in the browser. Mozilla started on the project before Facebook rebranded to Meta. The project is FOSS and unlike Meta’s product or VRChat, is completely usable in the browser, and works well without a VR headset, even on your smartphone!

                                                                               I hate to ask, but did you go to the posted link? I really don’t see how criticisms of corporate marketing are relevant here unless you’re more interested in trying to make a point than in reading the link. From what I’ve seen, most uses of Hubs have been for classroom or social experiences, vanishingly few for work-related ones.

                                                                              1. 1

                                                                                I was replying about the use of 3d in conferences and work in general and the difference between work and games. I agree discussing marketing is not on topic!

                                                                        2. 6

                                                                          Have you played something like Half Life Alyx? During one of my playthroughs, one of those spidery headcrabs of yore came swooping by. Instantly and as if through sheer instinct, I grabbed it mid flight and held it hanging by one of its legs. It looked seriously annoyed by the whole affair.

                                                                          Swinging it around as if imitating the rotor blades of a helicopter worked just fine (albeit not with the desired woosh-woosh sound). Putting the crab inside of a bucket, and putting the bucket upside down on the ground had the crab-bucket crawl away. Experiences like that ‘sold’ VR as HCI for me. Nowhere in the process did I think of a ‘press G to grab’ or ‘F to pay respects’ like setup - “I” was the input, the ‘living data’ the interface.

                                                                           One of the many demos I held here for poor unsuspecting chums was via Valve’s ‘The Lab’. It has this one part with a little robot dog running around being adorable. You could throw objects and it would scurry after them, return, and place them at your feet. Anyhow, for a lark someone kneeled down and tried to pet it. It rolled over and got some belly scratches. The person subsequently removed the HMD and snuck away for a crying session. Former dog owner.

                                                                           Another chumette took a deep sea dive via ‘The Deep’, where the scene of a sea floor slumbering whale skeleton transitioned into a starry underwater sky of glowing jellyfish. The person froze and shook in horror. Trypophobia apparently, who knew.

                                                                           My point is that the right mix of these things can strike at something unguarded and primal; possibly also tap into cognition that sees deeper patterns in ongoing computing for inferences previously unheard of. What Hubs is doing here has the potential of doing none of that: ‘The Hall of Tortured Souls’ of Excel fame meets VRML.

                                                                          1. 6

                                                                         For conferences I agree that an accessible top-down 2D design might be the way to go. But for groups of people just hanging around, expressing themselves, the extra degrees of freedom afforded by 3D VR spaces are invaluable. There is a reason people flock to VRChat: body language.

                                                                            1. 2

                                                                              yeah it’s fun to shoot the shit with people you know in VR. the ability to see in 3D or grab virtual objects didn’t wow me, but seeing someone talk and gesture in VRChat (and being able to do the same) blew my mind.

                                                                              1. 1

                                                                                I think this is especially true for groups of people who have become familiar with each other’s physical presence in other venues, be it work in an office, meet-ups, or past conferences. Hard to scale any experience to large groups but not every technology has to scale to large groups to be a tool worthy of our use.

                                                                              2. 3

                                                                                Jef Raskin wrote that games are, by design, bad UIs. The simplest UI for an interactive game is a button that you press and then win.

                                                                                I wonder what he would think of things like Cookie Clicker…

                                                                                1. 1

                                                                                  Or Progress Quest! http://progressquest.com/

                                                                                2. 1

                                                                                  The great thing about virtual environments is that they’re not limited to the constraints of real space.

                                                                                  We just have different constraints instead. When I’m in a shared space working on things, I can often walk over and start chatting with a friend. Some of my favorite experiences playing games or working on projects with friends has been the ability to just casually start a conversation. Yeah sometimes it meant that the project went nowhere and we went to beers, but that was a valuable, enjoyable experience. When I’m in a VC, there’s no such thing. I’m either broadcasting to the entire room or I’m not talking. Breakout rooms or sub-channels or whatever you want to call them just aren’t the same, you can’t form organic connection that way. On the other hand I have fond memories of chatting with a random person (eventual friend) at a personal hackathon about LaTeX even though most of the rest of the group had never used LaTeX for much at all.

                                                                                  even with GatherTown, I’ve managed to get lost in the environment and not be able to find the room I’m supposed to go to, which wouldn’t happen with a simple hyperlinked list of meeting rooms

                                                                                  Folks in XR/Metaverse/3D spaces talk about offering “cues” in rooms/scenes to help folks congregate, so this is a known pain point. Humans spend their whole lives in physical spaces, and we have been creating physical spaces for almost our entire history, so we know how that works very well. In the metaverse, not so much. Also, this depends on the context. If efficiency is the goal, then sure, there’s no point getting lost. And when you’re working with someone at a large employer where your only point of connection is that you are paid by the same large employer, then sure, you want to get your work done and go home to your family/friends, so you just want to get into a meeting room and get it over with. But if encouraging the serendipity of community is the goal, then getting lost in the environment is probably a bit more of a feature than a bug.

                                                                                  Some of this I suspect is a personality thing. Some people treat digital spaces as specific places where they want to get things done; they want to make some progress on some code they’ve written, get their finances in order, watch the video they’re searching for. Others perhaps want to simply “roam” digitally. These folks are going to be the ones roaming around in MMORPGs or Minecraft worlds.

                                                                                  Personally, I’ve found working in the fully remote era of COVID quite alienating. In the past I met friends, and even partners, through coworkers at work. Now we see each other as talking heads or sources of audio, exchange some links, and get done with it. And having had a bout of COVID, I realize there are times when I want to be with friends of mine but travel is just not feasible. Chats and VCs are just not the same.

                                                                                  I might be in the minority though. And yeah if you’re the “My life is rich enough with just my close friends and family” type, then virtual socializing probably will never be for you.

                                                                                  1. 2

                                                                                    Some of my favorite experiences playing games or working on projects with friends has been the ability to just casually start a conversation

                                                                                    GatherTown, which I mentioned above, does this very well. As your avatar approaches someone, you hear their audio. As you get closer, you see their video. You can transition from this into a full video conferencing mode, or just have their video feed above.

                                                                                    1. 1

                                                                                      Yup I’ve used GatherTown and I’m a fan! I did find the 2D-ness of the thing a bit disorienting, but for work conference events I really enjoy it. I attended a pandemic birthday party in GatherTown and I enjoyed it quite a bit also.

                                                                                1. 8

                                                                                    I feel UI tends to be a Conway’s law-esque manifestation of its backend. Some program designs will make different UI approaches easier than others. I think constantly about how AppKit vs. Win32 encourages certain patterns (e.g. focus vs. first responder, the latter making it easier to implement a universal Edit menu, etc.).

                                                                                  Also: API design is UI design.

                                                                                  1. 5

                                                                                      PR piece, but the buried lede here is Lockdown Mode, which offers much stricter security for those with stronger threat models.

                                                                                    1. 3

                                                                                      This Ars Technica article has more about Lockdown Mode. I like this bit:

                                                                                      It’s useful that Apple is upfront about the extra friction Lockdown adds to the user experience because it underscores what every security professional or hobbyist knows: Security always results in a trade-off with usability

                                                                                    1. 2

                                                                                      There is apparently a Mac contender: https://shrugs.app/

                                                                                      Though it seems pretty crap in comparison to RC.

                                                                                      1. 1

                                                                                        Ripcord is built for macOS too.

                                                                                        1. 1

                                                                                          Not a native Mac app; you’d lose a lot of native features. Qt is better than Electron, but not by much.

                                                                                          1. 1

                                                                                            Which macOS features are unavailable when using Qt?

                                                                                            1. 1

                                                                                              Oh, you can integrate the features - it just takes a lot of work to do so, more so than for a native app. What’s more annoying is all the subtle differences, everything from editing controls to menu bar behaviour.

                                                                                      1. 12

                                                                                        Oh, we’re finally bringing back FrontPage and iWeb?

                                                                                        1. 5

                                                                                          Eh, almost. As far as I can tell, this imposes some file/directory structure constraints and has limited HTML, template and theme editing features. So I’d say we’re bringing half of WorldWideWeb back for now :-). It took us about five years to get from that to FrontPage so, adjusting for modern software boilerplate and maintenance requirements, I’d say give it another… ten years or so :-).

                                                                                          On the bright side the HTML code that Publii produces looks considerably less atrocious than anything FrontPage ever did so maybe it’s worth waiting these ten years or so!

                                                                                          1. 5

                                                                                            Yeah I get that it feels full circle but I think this is a bit different. I’ve never used FrontPage but I remember iWeb feeling more focused on WYSIWYG web design. Publii feels more like a CMS with all the features you’d expect for a blog: posts, authors, tags, categories, excerpts, feeds, etc. The default theme looks nice, works on mobile, supports dark mode, and provides the exact right level of configurability for my use case (change colors, heading image, date format, pagination, etc.) without having to touch code.

                                                                                          1. 20

                                                                                            Honestly, I don’t really have many problems with GitHub. It works decently, and if it goes to hell, I can just push somewhere else and deal with the fallout later. Actually finding projects/code is useful with code search (ignoring ML sludge), and I really don’t see how people can get addicted to the whole stars thing. Besides, if it’s public, something like Copilot will snarf it anyways.

                                                                                            1. 23

                                                                                              I was a long-time holdout from GitHub. I pushed every project I was contributing to and every company that I worked for to avoid it because I don’t like centralised systems that put control in a single location. I eventually gave up for two reasons:

                                                                                              It’s fairly easy to migrate from GitHub if you ever actually want to. Git is intrinsically decentralised. GitHub Pages and even GitHub wikis are stored in git and so can just be cloned and taken elsewhere (if you’re sensible, you’ll have a cron job to do this to another machine for contingency planning). Even GitHub Issues are exposed via an API in machine-readable format, so you can take all of this away as well (a rough sketch of pulling issues out of that API follows at the end of this comment). I’d love to see folks that are concerned about GitHub provide tooling that lets me keep a backup of everything associated with GitHub in a format that’s easy to import into other systems. A lot of my concerns about GitHub are hypothetical: in general, centralised power structures and systems with strong network effects end up being abused. Making it easy to move mitigates a lot of this, without requiring you to actually move.

                                                                                              The projects I put on GitHub got a lot more contributions than the ones hosted elsewhere. These ranged from useless bug reports, through engaged bug reports with useful test cases, up to folks actively contributing significant new features. I think the Free Software movement often shoots itself in the foot by refusing to compromise. If your goal is to increase the amount of Free Software in the world, then the highest-impact way of doing that is to make it easy for anyone to contribute to Free Software. In the short term, that may mean meeting them where they are, on proprietary operating systems or other platforms. The FSF used to understand this: the entire GNU project began by providing a userland that ran on proprietary kernels and gradually replaced everything. No one wants to throw everything away and move to an unfinished Free Software platform, but if you can gradually increase the proportion of Free Software that they use, then there comes a point where it’s easy for them to discard the last few proprietary bits. If you insist on ideological purity then they just give up and stay in a mostly or fully proprietary ecosystem.
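
                                                                                              To make that concrete, here is a rough sketch of the issue-backup half (not an existing tool; the JSONL output file and the GITHUB_TOKEN handling are just illustrative assumptions) that pulls every issue for one repository out of the REST API so it can be archived or imported elsewhere:

                                                                                                  # Sketch: dump all issues (and pull requests, which GitHub returns from
                                                                                                  # the same endpoint) for one repository to newline-delimited JSON.
                                                                                                  import json
                                                                                                  import os
                                                                                                  import sys

                                                                                                  import requests

                                                                                                  def dump_issues(owner: str, repo: str, out_path: str) -> None:
                                                                                                      token = os.environ.get("GITHUB_TOKEN")  # optional; raises the API rate limit
                                                                                                      headers = {"Accept": "application/vnd.github+json"}
                                                                                                      if token:
                                                                                                          headers["Authorization"] = f"Bearer {token}"

                                                                                                      url = f"https://api.github.com/repos/{owner}/{repo}/issues"
                                                                                                      params = {"state": "all", "per_page": 100, "page": 1}

                                                                                                      with open(out_path, "w", encoding="utf-8") as out:
                                                                                                          while True:
                                                                                                              resp = requests.get(url, headers=headers, params=params, timeout=30)
                                                                                                              resp.raise_for_status()
                                                                                                              page = resp.json()
                                                                                                              if not page:
                                                                                                                  break
                                                                                                              for issue in page:
                                                                                                                  out.write(json.dumps(issue) + "\n")
                                                                                                              params["page"] += 1

                                                                                                  if __name__ == "__main__":
                                                                                                      # e.g. python backup_issues.py some-owner some-repo issues.jsonl
                                                                                                      dump_issues(sys.argv[1], sys.argv[2], sys.argv[3])

                                                                                              Point the same cron job that mirrors your git repositories at something like this and you have most of what GitHub holds beyond the code itself; issue comments sit behind a further endpoint, but the approach is identical.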

                                                                                              1. 2

                                                                                                Even if it’s possible, even easy, to copy your content from GitHub when they cross some threshold you’re no longer OK with, there will be very little to copy it to unless we somehow sustain development of alternatives during the time it takes to reach that threshold.

                                                                                                IMHO it would be better if the default were at least “one of the three most popular” rather than “GitHub, because that’s what everyone uses”.

                                                                                              2. 7

                                                                                                  If you use their issue tracker, pull requests, and so on, those will be lost too; they aren’t easily pushable to another git host. Such things can tell a lot about a project and the process of it getting where it is, so it would be sad if that history were lost.

                                                                                              1. 3

                                                                                                I seriously want to use something like Thunderbird, especially since mail/calendar is one of the main blockers to my using Linux, but I don’t understand what anyone uses it with. What email provider and protocol are you all using with it? Does everyone use it with gmail?

                                                                                                The lack of Exchange support is what keeps me from switching and every Exchange-supporting client on Linux seems to be abandonware (eg Hiri/Mailspring). I’m not willing to switch to gmail.

                                                                                                1. 10

                                                                                                  Works with any decent IMAP server as far as I know, including Outlook.

                                                                                                  1. 6

                                                                                                    The lack of Exchange support is what keeps me from switching and every Exchange-supporting client on Linux seems to be abandonware (eg Hiri/Mailspring).

                                                                                                      For many years I have used DavMail + Thunderbird to address this same use case, and it works flawlessly. DavMail even provides step-by-step documentation on how to set it up with Thunderbird.

                                                                                                    1. 5

                                                                                                      Works great with Fastmail including calendar

                                                                                                      1. 3

                                                                                                        Migadu

                                                                                                        1. 3

                                                                                                          I’m using it with OpenSMTPD and Dovecot. Not using the calendar.

                                                                                                          1. 3

                                                                                                            Works for me with Gmail, Hotmail, AOL Mail, and Yahoo Mail, and, in $JOB-1, with Office 365 and Novell GroupWise. Of course it also works with every FOSS POP3/IMAP combo I’ve ever thrown it at.

                                                                                                            I tried over a dozen FOSS mail clients when I started at $JOB-1. The only one I stuck with for more than a few days was Claws, but its single-threading became a deal-breaker. I went back to Thunderbird and I still use it today, as I have for most of the time it’s existed as a standalone product.

                                                                                                            1. 3

                                                                                                              On which note, I’d post my review from the Register when it goes live (any minute now), but AFAICT Lobste.rs still bans the Reg.

                                                                                                              As that is $JOB, this makes me sad.

                                                                                                              1. 2

                                                                                                                the irony of a tech site banning the reg! sad state of affairs indeed :(

                                                                                                                1. 2

                                                                                                                  I agree.

                                                                                                                  OTOH, although I do want feedback and discussion, I also do not want to spam people with self-promotion. :-/

                                                                                                                  Some of my stories have done very well on The Orange Site and on Slashdot, so it’s all good, I suppose.

                                                                                                            2. 2

                                                                                                              Doesn’t Evolution have good EWS and MAPI support?

                                                                                                              1. 2

                                                                                                                Thunderbird supports Exchange with a plugin. It is “paid”, but trivial to circumvent by extracting the extension.

                                                                                                                1. 2

                                                                                                                  The lack of Exchange support is what keeps me from switching and every Exchange-supporting client on Linux seems to be abandonware (eg Hiri/Mailspring). I’m not willing to switch to gmail.

                                                                                                                  Evolution is the only Exchange client on Linux that works for me. It’s pretty OK, though I’ve had issues with 365 authentication.

                                                                                                                1. 9

                                                                                                                  I don’t get the fascination with computer-generated content. I haven’t delved much into it, but I’ve seen glimpses of the DALL-E pictures, and before that the GPT-3 texts and all the various this-whatever-doesn’t-exist sites.

                                                                                                                  I don’t get the interest. I only see empty attempts at mimicking consciousness that ultimately fail to produce anything meaningful. I have the same feeling I get when I listen to someone who is very good at talking without a purpose. Some people (especially politicians, but not only them) are very good at talking for a long time, catching the ear without ever really saying anything. It’s quite fascinating when you realize that the person has in fact only been using glue words, ideas and sentences, and that there’s no substance at all once you take out these fillers.

                                                                                                                  That’s what I see in all of this. We’ve got to a point where we make computers churn out filler content for our minds, but there’s no nutritional value in it. We’re taking out the human producer of memes, and honestly I’m a bit terrified we’ll end up brain-dead, consuming content produced by things. What happens when the art/ideas/entertainment/… is made without “soul”? What is the point of all this?

                                                                                                                  I’m not very good at ordering and communicating my thoughts myself, but I’m already very scared when my 12-year-old son gets stuck scrolling short videos; the idea of taking the human “soul” out crushes my hope for the future of our species.

                                                                                                                  1. 13

                                                                                                                    I’m excited about this for two principal reasons:

                                                                                                                    1. It’s fun. SO much fun. Getting DALL-E to generate a heavy metal album cover made of pelicans made of lightning? I found that whole process deeply entertaining. See also the “fantasy breakfast taco” game I describe at the end of my post.
                                                                                                                    2. I see these tools fitting in the category of “bicycles for the mind”. They help me think about problems, and they do things like break me out of writer’s block and help me get started writing something. In DALL-E’s case it’s an imagination enhancer: I can visualize my ideas with a 20s delay.

                                                                                                                    Aside from those, here’s a use case you may not have considered, which I tried recently on GPT-3. Suppose you’re on disability benefits, a government office cancels them, and you need to write a letter - but you don’t have much experience writing formal letters. I tried the prompt “Write a letter to the benefits office asking why my disability claim was denied” (a rough sketch of making the same request via the API is at the end of this comment). Here’s the result: https://gist.github.com/simonw/6e6080a2f51c834c13b475743ef50148

                                                                                                                    I find this a pretty convincing attempt. I could do a better job, but I’ve written a lot of letters in my time. Give it a prompt with specific details of your situation and you’ll get something even more useful.
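
                                                                                                                    For the curious, here is a minimal sketch of making the same request through the API rather than the playground; it assumes the pre-1.0 openai Python package and the text-davinci-002 model that was current at the time, and only the prompt text is taken from the experiment above:

                                                                                                                        # Sketch: send the benefits-letter prompt to GPT-3.
                                                                                                                        # Assumes the pre-1.0 `openai` package (pip install openai) and an
                                                                                                                        # OPENAI_API_KEY environment variable.
                                                                                                                        import os

                                                                                                                        import openai

                                                                                                                        openai.api_key = os.environ["OPENAI_API_KEY"]

                                                                                                                        response = openai.Completion.create(
                                                                                                                            model="text-davinci-002",  # GPT-3 completion model current in mid-2022
                                                                                                                            prompt="Write a letter to the benefits office asking why my disability claim was denied",
                                                                                                                            max_tokens=400,            # room for a short formal letter
                                                                                                                            temperature=0.7,
                                                                                                                        )

                                                                                                                        print(response["choices"][0]["text"].strip())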

                                                                                                                    1. 4

                                                                                                                      In DALL-E’s case it’s an imagination enhancer: I can visualize my ideas with a 20s delay.

                                                                                                                      I’ve been using the free VQGAN-CLIP[1] to generate things like “Moomins as Dark Souls in the style of linocut / watercolour / screenprint etc.” to give me interesting things to practice linocutting, watercolours, acrylic painting, etc.

                                                                                                                      [1] https://github.com/nerdyrodent/VQGAN-CLIP

                                                                                                                    2. 8

                                                                                                                      Why do you expect it to have a vague notion of a “soul”? It’s a tool. It’s not an artist, it’s an automated Photoshop.

                                                                                                                      How do you feel about recorded music? Music used to exist only as a live human performance, and now we have soulless machines playing it. New music can be made with button presses, without fine motor skills of playing an instrument. Now we can create paintings without motor skills of using a brush.

                                                                                                                      To me DALL-E is a step as big as photography. Before cameras if you wanted to capture what you see, you had to draw it manually. With cameras it’s as trivial as pressing a button. Now we have the same power for illustrations and imaginary scenes.

                                                                                                                      Selfies have disrupted portrait painters, and this without a doubt will be disruptive for artists and photographers. Short term such commoditization sucks for creators, and the way ML is done is exploitative. Long term it means abundance of what used to be scarce, and that’s not necessarily a bad thing.

                                                                                                                      1. 3

                                                                                                                        Basically - I think at most, things like this will serve a few purposes:

                                                                                                                        1. Inspiration stuff for artists - look at some ideas, see new possible connections.
                                                                                                                        2. Pornography (not in the sense you’re used to, but “meets my specific thing”)/pot-boilers; it covers the “I want X character in Y setting” request and generates something mostly coherent, but in a bland way. But maybe that’s all someone wants…
                                                                                                                        3. Shitposting.

                                                                                                                        I think these right now are devoid of much “soul”, for lack of a better term - it’s impressive they can follow the prompt, but it’s dispassionate, and at times the results feel like they’ve been painted by someone with dementia.

                                                                                                                        1. 1

                                                                                                                          Why is it seen as filler content? A lot of what the author ends up creating starts from descriptions that came from a person, and the AI is just doing its best to visualize them. There are thousands of possible renderings of a description - why not generate them all?

                                                                                                                          I tell people from time to time that life is just state traversal… and AI helps traverse it.