Threads for jacereda

  1. 7

    I wonder if APEs can be used to open some elementary framebuffer/OpenGL context to make little GUI programs / games “actually portable”.

    1. 8

      It can be done, it’s a bit convoluted but I got it running on FreeBSD, NetBSD, Windows and Linux. OpenBSD requires some more work due to their strict syscall policies. I didn’t attempt macOS, but I guess it could be done.

      https://github.com/jacereda/cosmogfx

      1. 1

Put it in the list; congrats on being the first GUI to appear

        https://github.com/shmup/awesome-cosmopolitan#tilting-at-windmills

We have a fellow in the Discord doing really nice TUI work, too. Either inspired by, or a port of, dflat (TUI with windowing).

    1. 1

      Perhaps the Xwayland issue is addressed by this? https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests/248

      1. 10

I’ve been having a pretty good time digging in to NixOS over the last month or so. Got a bit bogged down trying to declaratively configure applications I seldom actually use via home-manager, but that’s on me, not the tools.

        I’m currently cranking through this upgrade and I’m not sure I want to run the bleeding edge release again any time soon, as it appears a number of large packages (like, WebKit-sized) don’t have cached binaries yet. My nixos-rebuild command for the upgrade has been chugging away for a couple of hours now and doesn’t appear to be all that close to done.

        On the plus side, I know I can roll back at any time, which means I don’t feel at all stuck if I want to hit eject and keep running on 20.09 a while longer. So, yay NixOS, I think? Just maybe don’t try the upgrade on a machine with less than 4-6 fast cores unless you really like watching the output of configure, cmake, and g++. ;)

        1. 6

If you have a relatively powerful computer lying around, you can offload builds to it.

          https://nixos.wiki/wiki/Distributed_build

I use this trick to delegate builds from my X1C7 to the more powerful P71. What happens here is that a nixos-rebuild switch (or even a per-project nix-build; like compiling GHCJS projects!) on the X1C7 will use ssh to do the actual builds on the P71, and then download the built binary assets from it.
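For reference, a minimal sketch of what such a setup might look like in a NixOS configuration.nix — the hostname, system type, and job counts here are hypothetical placeholders for your own machines:

```nix
{
  # Hypothetical remote builder named "p71"; adjust hostName,
  # system, and maxJobs for your own hardware.
  nix.buildMachines = [{
    hostName = "p71";
    system = "x86_64-linux";
    maxJobs = 8;
    speedFactor = 2;
  }];
  nix.distributedBuilds = true;
  # Let the builder fetch from binary caches directly instead of
  # copying dependencies over from the local machine.
  nix.extraOptions = ''
    builders-use-substitutes = true
  '';
}
```

The local machine also needs passwordless ssh access to the builder as a user trusted by its Nix daemon.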

          If you do not have a powerful computer, there is also https://nixbuild.net/

          1. 3

            That’s a great idea; thanks!

My primary dev machine is actually a P1 Gen 2, so it’s not so much a question of CPU power or RAM as it is needing to keep the machine plugged in and awake throughout the build. :)

            That being said, I have a nice 8C/16T Xeon box in my office closet with gobs of RAM that would happily grind through these compilation cycles so I’ll definitely look at offloading builds.

          2. 2

            You can get cached binaries in unstable if you checkout one of the commits that got built by hydra: https://hydra.nixos.org/jobset/nixpkgs/trunk

            1. 4

              You can also get the commit hash from https://status.nixos.org - which displays it for all active channels.

“Upgrading” is then merely a matter of changing this line in flake.nix and running nixos-rebuild switch.
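For instance, the pinned input in flake.nix might look something like this — the commit hash below is a placeholder, not a real channel commit:

```nix
{
  inputs = {
    # Pin nixpkgs to a commit hydra has already built; replace the
    # hash with one taken from status.nixos.org.
    nixpkgs.url = "github:NixOS/nixpkgs/0123456789abcdef0123456789abcdef01234567";
  };
}
```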

          1. 2

Gemini to the rescue. I’m spending more and more time these days in Gemini and it certainly feels like the good old days.

            1. 11

              I think you’d be interested in Twizzler. I found it watching Peter Alvaro’s talk “What not where: Why a blue sky OS?”. It seems to address some of your points.

              I thought it was discussed on lobste.rs, but can’t find the link atm.

              1. 5

                That was fascinating – thank you for that link!

                Very much the same inspiration, but they’ve come at it from a radically different, more network-oriented direction. That is a very good thing.

OTOH, it does preserve a shim of *nix compatibility, whereas I wasn’t considering the network side of it at all – I reckon radical new ideas like that should, one hopes, be an emergent property of giving people a radically more powerful programming model than POSIX and C/things-rooted-in-C. The problem with presenting a PMEM-centric OS to programmers via the medium of the existing stack is that, while it means instant familiarity, and while it could win people’s attention far quicker… it doesn’t free us up at all from the morass of over 50 years of technical debt.

At this point, in 2021, *nix is basically nothing but technical debt. The whole concept of a file-centric OS being adapted to a PMEM-centric machine… it almost breaks my heart, even as I’m awed by the brilliance of the thinking.

                It feels a bit like inventing a warp drive, and then showing it to the world by bolting it into a 1969 tractor frame. It’ll be a very very fast tractor, but at the same time, it’ll still be a tractor. It will never handle as well as an aeroplane with that engine would… and the aeroplane will be poor compared to a spaceship with it in. But you can kinda sorta turn an aeroplane into a spaceship. You can’t really turn a tractor into one.

                1. 4

                  (this is where I put on the “knows weird systems” hat)

                  Twizzler reminded me a lot of some prior art on single-level storage. They aren’t quite as distributed-first, but they’re certainly interesting to learn from. See the previous comment.

                  1. 1

                    I like the earlier comment! :-)

                    Yes, Twizzler certainly appears to be founded on some of the same ideas I have had. I am not claiming to have had blindingly profound, singular visions!

I have worked (very briefly) on AS/400 and I was certainly aware of it. Long before it, Multics shared some of the same concepts. As far as I can tell, the thing with these single-level-store designs is that basically they consider all storage as disk, whereas what I have in mind is treating it all as RAM.

                    So, yes, they’re very well-suited to IBM i, or a revived Multics in theory, and their kin, but I am looking in a slightly different direction.

                  2. 2

                    Loved that talk, brilliant. Thanks.

                    1. 2

                      That is mind blowing. Someone needs to post their 2020 paper. I’m still reeling.

                    1. 1

                      Nice! Is this a successor to Strand? Could you elaborate on the differences?

                      1. 2

Indeed, after learning a lot from implementing Strand, I was able to start fresh while still carrying over some ideas.

                        From the implementation side: the runtime system is written in C (and not Forth), and the compiler generates x86-64 or arm32 assembler and the overall system is much faster. Calling C code is relatively straightforward. The system uses a refcounting GC (no pauses) and utilizes native threads (with no shared heap), but there are currently no facilities for distributed computing (yet). I was able to be slightly more faithful in the implementation of non-determinism: clause selection can now suspend on multiple variables (but matching still takes place sequentially).

                        From the language side: FLENG is very low level, but FGHC is basically Strand with full (output-) unification.

                      1. 24

                        I’m the original designer of the Atreus; happy to answer any questions.

                        1. 1

                          Why do you choose a fixed Split keyboard, instead of an adjustable split keyboard?

I can’t find the reason in your blog post nor in the Atreus repository.

                          Notes:

• By “fixed split”, I mean a keyboard like the Atreus.
• By “adjustable split”, I mean one like the ErgoDox.
                          1. 1

                            Found. https://technomancy.us/172 Thanks for a very thorough history, reasoning, and decision.

                            I work from local coffee shops frequently, and the Advantage is just too clunky to toss in a bag and tote around.

Update: I’ve designed my own keyboard, which is meant to be a smaller, more travel-friendly complement to the Ergodox that shares a lot of its characteristics.

                          2. 1

Do you find it difficult to switch back and forth between the Atreus and a standard keyboard? I would be concerned that, given time, it would become problematic given how many keys on the Atreus require using a layer. Would switching between keyboard types cause me to focus too much on the typing and not on what I am typing?

                            1. 4

I’ve found that the weirder the weird keyboard is, the easier it is to switch between the weird one and a normal one. I used to use a standard qwerty 60% keyboard at work, with lots of special bindings/layers, and a normal laptop at home. This was constantly problematic because I’d try to use my special arrow-key bindings and they obviously didn’t work anywhere else.

I’ve since switched to a Kinesis for “work” (now my desk) and I no longer have any problems typing on my laptop because it’s so different in every way. I also got an Atreus and played around with it for a bit, and I feel like it is likely in the “weird enough to be okay” territory due to the non-staggered key layout (forgot the technical term for this).

                              The only exception to this rule is that I can hardly use a computer if caps-lock isn’t bound to control, but that’s a different problem.

                              1. 1

                                I actually do this. Surprisingly enough, switching is mostly painless. I use Colemak on all keyboards, and muscle memory works itself out somehow, at least 95%.

                                1. 1

                                  My experience as a laptop user is that even though I greatly prefer the Atreus, having to plug it into my laptop means that I don’t use it 100% of the time; sometimes I’ll open my laptop for something really quick and won’t get the external keyboard plugged in. This is infrequent, but for me it has been enough for me to maintain my ability to type on a conventional keyboard.

                                  However, if you only very rarely use a laptop, this might not apply; can’t speak to that.

                                2. 1

                                  How easy is it to use a three-finger chord key? I have a keyboardio model 1 and find that three-finger chords - in particular the alt-shift-arrows that I use all the time in Eclipse - become an effectively impossible to type four-finger chord (since arrow keys need a modifier).

                                  1. 1

                                    Depends on which three fingers! I’ve been using ctrl-alt-letter chords since long before building the Atreus, because I’m an Emacs user. I don’t use any programs which require you to hold down shift while moving the cursor, so I can’t really say authoritatively, but alt-shift-arrows sounds like a key chord I would like to rebind to something less awkward even on a conventional keyboard.

                                    If that was a combo I had to use a lot and could not fix in software for some reason, I would probably remap my keyboard so that the alt key was adjacent to the shift key so that a single thumb could hit both.

                                  2. 1

Got mine one month ago and I’m experimenting with different layouts. I’m quite happy with just the main layer and a symbols+numbers+f-keys layer, and I still have a bunch of unused keys in the second layer.

                                    The software is nice, but I wish it allowed sending macros (for typing accented characters using a non-international US keymap, for instance). I might try menelaus at some point if you think it can handle that.

                                    The article mentions it was designed with a resting position for the pinkies at Z and ‘/’ in mind. Is that correct? I might experiment with that configuration using them also as shift modifiers when pressed.

                                    1. 1

                                      The software is nice, but I wish it allowed sending macros (for typing accented characters using a non-international US keymap, for instance).

                                      I’m like … 99% sure that this limitation is part of the GUI frontend, not the underlying firmware implementation itself. So the path of least resistance would be to build Kaleidoscope.

                                      I might try menelaus at some point if you think it can handle that.

It definitely can’t handle that out of the box, but depending on your relative familiarity with C++ toolchains vs Scheme, it could conceivably be easier to implement that functionality in Menelaus than to configure it as existing functionality in Kaleidoscope. Only one way to find out!

                                      1. 1

                                        What about the last bit? Do you rest the pinkies at Z and /?

                                        1. 1

                                          Oh, no I keep them on A and semicolon normally, but I hit the outermost top keys with my ring finger instead of the pinky. The pinky only hits A/Z and semicolon/slash (well, the dvorak equivalents of where those are on qwerty) and occasionally enter/esc; tho I usually use Ctrl-m instead of the enter key since it sends the ASCII equivalent of enter.

                                  1. 1

                                    Awesome, but I miss a link to the language homepage as well as information about the version used.

                                    1. 7

                                      I agree with Drew’s general sentiment here, but note that linkers can only remove code at the function level, and the number of functions a module uses is not a great indicator of the amount of underlying code.

                                      As an example, I maintain a small C runtime library. printf() is a common function for programs to use. But since linking is at the function level, there’s no way to remove code such as format specifiers that the program is not using. Since it doesn’t know what the output device is, code for all potential output devices needs to be included. Since my C runtime runs on Windows, that means character encoding support for UTF-16, UTF-8 and others, as well as VT processing code, including low level console calls.

                                      I’d expect the same general effect to be present in other libraries, including UI libraries. Even if the program knows it’s not going to perform certain operations on a window, the library is going to create an entire window with all of the data structures to support those operations. Things like C++ are particularly evil because once an object with virtual function pointers is loaded, the compiler is going to resolve those function pointers and all of their dependencies whether they are ever called or not.

At $WORK this drives me crazy, because we have common static libraries that, when used, can add 300 KB to 3 MB of code to a program, even if only one or two functions are used.

                                      1. 9

                                        You have a good point. The library’s interface basically needs to be designed from the beginning for dead code elimination. One thing I like about newer languages like Rust and Zig, with their powerful compile-time metaprogramming features, is that you can often do this kind of design without sacrificing developer convenience. I suppose the same is true of modern C++ as well. The reason why printf is such a perfect counter-example is that C doesn’t have the language features to allow the developer convenience of printf without sacrificing dead code elimination.

                                        This reminds me of the last time I played with wxWidgets. A statically linked wxWidgets hello-world program on Windows was about 2.5 MB. I didn’t dig very deeply into this, but it seems that at least part of the problem is that wx’s window procedure automatically supports all kinds of features, such as printing and drag-and-drop, regardless of whether you use them. I suppose a toolkit designed for small statically linked executables would require the application developer to explicitly enable support for these things. And the window procedure, instead of having a giant switch statement, would do something like looking up the message ID in a map and dispatching to a callback. So when an application enabled support for, say, drag and drop, the necessary callbacks would be added to that map.

                                        1. 2

                                          Rust’s formatting machinery isn’t very easy to do DCE on either. https://jamesmunns.com/blog/fmt-unreasonably-expensive/

The formatting machinery has to make the unfortunate call of either heavy monomorphization or heavy dynamic dispatch. If your executable is inevitably going to make lots of calls to the formatter, the dynamic dispatch approach will result in less code duplication, but it makes it harder to do dead code elimination…

                                          1. 1

                                            Tangentially, it is very noticeable in the JS ecosystem that some libs have a lot of effort put into making tree shakers succeed at eliminating their code. By default, not so much.

                                          2. 3

                                            I agree with Drew’s general sentiment here, but note that linkers can only remove code at the function level, and the number of functions a module uses is not a great indicator of the amount of underlying code.

                                            I don’t think that’s the case if you compile with ‘-flto’. I’d assume the code generator is free to inline calls and remove things that can be stripped at the call site.

                                            1. 2

                                              BTW, ‘-flto’ is one of the great reasons to use static linking. It can turn suboptimal APIs (those using enum values for setters/getters, like glGet()) into something decent by removing the jump tables.
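As a concrete recipe sketch (the file `app.c` and library `mylib` are placeholders), combining LTO with per-section garbage collection when statically linking looks something like:

```shell
# Compile with LTO and per-function/per-data sections so the linker
# can see across translation units and drop unreferenced code.
gcc -O2 -flto -ffunction-sections -fdata-sections -c app.c -o app.o

# Link statically, discarding sections nothing references.
gcc -O2 -flto -static -Wl,--gc-sections app.o -lmylib -o app

# Compare text/data sizes with and without these flags.
size app
```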

                                              1. 1

Totally agree that link time code generation is a huge improvement in terms of the amount of dead code elimination that can occur. But at the same time, note the real limitations: it can inline getters and setters, and strip code out from a function call with a constant argument of a primitive data type, but can it strip code from printf? What happens with virtual function pointers - is it going to rearrange in-memory structures when it notices particular members are never accessed? The real challenge linking has is the moment it hits a condition it can’t resolve with certainty, then all of the dependencies of that code get brought in.

                                                Put another way, instead of looking at what the linker can and can’t do, look at what actually happens. How large is a statically linked hello world program with Qt? Gtk? wxWidgets? Today, it’s probably fair to ask about a statically linked electron program, which won’t strip anything because the compiler can’t see which branches that dynamically loaded HTML or JS are going to use. What would get really interesting is to use a coverage build and measure the fraction of code that actually executes, and I’ll bet with conventional UI toolkits that number is below 10%.

It really looks to me that the size and complexity of code is increasing faster than the linker’s ability to discard it, which is the real reason all systems today use dynamic linking. Drew’s points about the costs are legitimate, but we ended up dynamically linking everything because in practice static linking results in a lot of dead code.

                                                1. 2

                                                  Well, printf() is one of those bad APIs that postpone to runtime what could be determined at edit or compile time. But what’s the overhead of printf() in something like musl?

                                                  $ size a.out
                                                     text	   data	    bss	    dec	    hex	filename
                                                    14755	    332	   1628	  16715	   414b	a.out
                                                  

                                                  I think I can afford printf() and its dependencies being statically-linked.

                                                  1. 1

                                                    Can you afford it with UI libraries? Printf is an example of what can happen - it’s not the only case.

                                                    1. 3

Many GUI programs out there bundle a private copy of Qt (or even Chromium, via Electron). Because they do it as a .so, they do it without dead code elimination.

And as we tend towards snaps and flatpaks for packaging open source applications, the practice is spreading through the open source application world.

                                                      So, empirically, it seems like we decided we could afford it. Static linking just makes it cheaper.

                                            2. 1

                                              It’s true that linking to some symbols can have an outsized effect on dead code elimination, stdio being the (in)famous case, but on the whole this is the exception rather than the rule.

                                            1. 13

                                              Dynamic linking is crucial for a proper separation between platform and applications, especially when one or both are proprietary. Would Win32 applications that were written in the 90s still run (more or less) on Windows 10 if they had all statically linked all of the system libraries? I doubt it. And even if they did, would we really want to require applications to be rebuilt, on their developers’ release cycles, before their users could take advantage of improvements in system libraries? I think this concern also applies to complex free-software platforms like GNOME. (And platforms that target a broad user base do need to be complex, because the real world is complex.)

                                              1. 16

I don’t think it’s a binary choice; in the case of Windows, most applications use the system’s kernel32.dll, user32.dll, and whatnot, but include other libraries like libwhatnot.dll in the application itself. It’s still “dynamically linked”, but ships its own libraries.

                                                This is also something that the Linux version of Unreal Tournament does for example: it uses my system’s libc, but ships with (now-antiquated) versions of sdl.so and such, which is how I’m still able to run a game from 1999 on a modern Linux machine.

                                                I think this kind of “hybrid approach” makes sense, and tries to get the best of both. I think it even makes sense for open source programs that distribute binary releases, especially for programs where it doesn’t really matter if you’re running the latest version (e.g. something like OpenTTD). I think this is also what systems like flatpak and such are doing (although I could be wrong, as I haven’t looked at it much).

                                                1. 8

                                                  My understanding was that the OP was arguing for a binary choice. I think @ddevault’s reply reinforces that. I actually agree with you about the benefits of a hybrid approach: dynamic linking for platform libraries, static linking for non-platform libraries.

                                                  1. 1

                                                    especially for programs where it doesn’t really matter if you’re running the latest version (e.g. something like OpenTTD)

                                                    Looks like you never played the OpenTTD multiplayer, right? :)

                                                    1. 1

                                                      I didn’t even know there is a multiplayer, haha; I actually haven’t played it in years. It was just the first fairly well-known project that came to mind 😅

                                                      1. 1

So, to clarify: OpenTTD requires the same version on the client and the multiplayer server to participate in a game. And it’s pretty strict about that; you can’t even patch the game while retaining the same version number. The same goes for the list of installed NewGRFs (gameplay extensions/content), but at least those can be semi-automatically downloaded client-side before joining.

                                                        1. 1

                                                          Yeah, I assumed as much. I think the same applies to most online games. Still, I can keep using the same old version with my friends for 20 years if it’s distributed as described above, because I want to play it on Windows XP for example, or just because I like that version more (and there are many other applications of course, George RR Martin using Word Perfect 6 is a famous example).

                                                  2. 3

                                                    In the case where an ABI boundary exists between usermode libraries, then a lot of the arguments Drew is making here go away. When that occurs 100% of programs are going to need those dynamically linked libraries, so the benefits of code sharing start to become apparent. (It is true though that dynamically resolving functions is going to slow down program loading on that system compared to one where programs invoke syscalls by index and don’t need a dynamic loader.)

                                                    That said, I think statically linking on Windows is going to offer higher compatibility than you’re suggesting. The syscall interface basically is stable, because any Win32 program can invoke it, so when it changes things break. The reason I’m maintaining my own statically linked C library is because doing so allows my code to run anywhere, and allows the code to behave identically regardless of which compiler is used to generate that code. I’m using static linking to improve compatibility.

                                                    One thing to note about Win32 also is to compare the commit usage of processes when running across different versions of the OS. The result is huge disparities, where new OSes use more memory within the process context. Just write a simple program that calls Sleep(INFINITE) and look at its memory usage. The program itself only needs memory for a stack, but it’s common enough to see multiple megabytes that’s added by system DLLs. Those DLLs are initializing state in preparation for function calls that the program will never make, and the amount of that initialization is growing over time.

                                                    1. 1

                                                      In the case where an ABI boundary exists, you definitely want static linking to ensure the ABI is sound. See https://thephd.dev/intmax_t-hell-c++-c .

                                                      1. 2

                                                        The context here is that I work for Microsoft and so did mwcampbell when he wrote that.

                                                        As he mentioned, in order to allow the operating system to be updated independently from applications, there needs to be a compatible ABI somewhere. It could be between kernel and user, or it could be somewhere else, but it needs to be somewhere. When this type of separation exists, we don’t have the luxury to just statically link, since doing so would result in a combined Operating System+Application bundle that can only run one application at a time. The moment one kernel is running two programs and those three things are compiled independently, there needs to be an agreed upon interface.

                                                        That compatible ABI needs to be designed with compatibility in mind. The article you’re linking to is correctly pointing out that intmax_t is not going to result in a stable ABI, and should not be used where ABI stability is required. Unfortunately since its stated purpose is to provide an interface between the C library and the application, and the C library is dynamically linked, this particular thing failed right out of the gate.

                                                        What’s a bit strange with these articles is that when you work in a space that requires ABI stability, it becomes clear that any interface can be made stable by following a few simple principles. Unfortunately a lot of times those principles aren’t followed, and the result is an incompatible interface, followed by suggestions that the result is an inevitable consequence of dynamic linking. It’s not really possible to use any computing environment today that doesn’t have a stable ABI somewhere in order to allow various components to be updated independently. Heck, I’d argue that a web browser is basically a stable ABI, and the ability to update it without updating the entire web indicates that it’s able to provide a compatible interface.

                                                        What this particular discussion is really about is saying that Windows ends up with compatible ABIs at multiple layers, including the syscall interface, as well as system provided usermode libraries. Anyone working on these edges won’t use something like intmax_t.

                                                        1. 1

                                                          I may not entirely agree with you, but I sure appreciate that context and see your point.

                                                    2. 2

                                                      especially when one or both are proprietary

                                                      Proprietary software is bullshit and can be safely disregarded.

                                                      Would Win32 applications that were written in the 90s still run (more or less) on Windows 10 if they had all statically linked all of the system libraries?

                                                      If Win32 had a stable syscall ABI, then yes. Linux has this and ancient Linux binaries still run - but only if they were statically linked.

                                                      would we really want to require applications to be rebuilt, on their developers’ release cycles

                                                      Reminder that the only programs that matter are the ones for which we have access to the source code and can trivially rebuild them ourselves.

                                                      And in any case, this can be turned around to work against you: do we really want applications to stop working because they dynamically linked to library v1, then library v2 ships, and the program breaks because the dev wasn’t around to patch their software? Software which works today, works tomorrow, and works the day after tomorrow is better than software which works today, is more efficient tomorrow, and breaks the day after tomorrow.

                                                      1. 29

                                                        Reminder that the only programs that matter are the ones for which we have access to the source code and can trivially rebuild them ourselves.

                                                        I don’t know, I’ve gotten a lot of mileage out of the baseband code in my phone even though I don’t have access to the source. It’s a security issue, but one whose magnitude is comically smaller than the utility I get out of it. I similarly have gotten a lot of mileage out of many computer games, none of which I have access to the source for. Also, the microcontroller code on my microwave is totally opaque but I count on it every day to make my popcorn.

                                                        If you want to argue “The only programs that respect your freedoms and don’t ultimately lead to the enslavement of their users are the ones for which we have access to the source code”, that’s totally reasonable and correct. By picking hyperbolic statements that are so easily shown to be false, you make yourself a lot more incendiary (and honestly sloppy-looking) than you need to be.

                                                        And maybe coming off as a crank wins you customers, since there’s no such thing as bad press, but don’t be surprised when people point out that you’re being silly.

                                                        1. 1

                                                          I don’t know, I’ve gotten a lot of mileage out of the baseband code in my phone even though I don’t have access to the source. It’s a security issue, but one whose magnitude is comically smaller than the utility I get out of it. I similarly have gotten a lot of mileage out of many computer games, none of which I have access to the source for. Also, the microcontroller code on my microwave is totally opaque but I count on it every day to make my popcorn.

                                                          And this is supposed to be evidence that proprietary programs matter and shouldn’t be disregarded? The context in discussion sites like this is that we can decide to change our programming practices for the programs that we have control over. The defining characteristic of proprietary software is that programmers do not have control, so discussion is irrelevant. Bring the production of baseband code into the public sphere and we can debate whether it should be using dynamic linking (I doubt it even does now).

                                                          1. 1

                                                            whoops I only meant to post one version of this comment…. my b

                                                          2. 1

                                                            I don’t know, I’ve gotten a lot of mileage out of the baseband code in my phone even though I don’t have access to the source. It’s a security issue, but one whose magnitude is comically smaller than the utility I get out of it. I similarly have gotten a lot of mileage out of many computer games, none of which I have access to the source for. Also, the microcontroller code on my microwave is totally opaque but I count on it every day to make my popcorn.

                                                            So you would like to be able to dynamically link a binary with the microcontroller code in your microwave? Come on. If anything these examples reinforce the point that proprietary programs can be disregarded in discussions like this. I don’t think it’s hyperbolic or silly to say so.

                                                          3. 19

                                                            If Win32 had a stable syscall ABI, then yes. Linux has this and ancient Linux binaries still run - but only if they were statically linked.

                                                            Except Windows and everyone else solved this at the dynamic linking level, and it goes far beyond syscall staples like open/read to the entire ecosystem. Complex applications like games that use APIs for graphics and sound are far likelier to work on Windows and other platforms with stable whole-ecosystem ABIs. The reality is that real-world applications from 1993 are likelier to work on Win32 than they are on Unices.

                                                            That, and Linux (and Plan 9) are the aberration here, not the rule. Everyone else stopped doing this in the 90s if not earlier (SunOS added dynamic linking in the 80s and then, as Solaris, banned static libc in the early 2000s because of the compat issues it caused). FreeBSD and Mac OS technically allow it, but you’re on your own - when Mac OS changed a syscall or FreeBSD added inode64, the only broken applications were static Go binaries, not things linked against libc.

                                                            That, and some OSes go to more extreme lengths to keep the syscall layer from becoming a static ABI. Windows scrambles syscall numbers every release, OpenBSD forbids non-libc pages from making syscalls, and AIX makes you dynamically link to the kernel (because modules can add new syscalls at runtime and get renumbered).

                                                            1. 4

                                                              The reality is real-world applications from 1993 are likelier to work on Win32 than they are on Unices.

                                                              Or the 2000s. Getting Loki games like Alpha Centauri to run now is very hard.

                                                              1. 1

                                                                Complex applications like games that APis for graphics and sound are far likelier to work on Windows and other platforms with stable whole-ecosystem ABIs. The reality is real-world applications from 1993 are likelier to work on Win32 than they are on Unices.

                                                                There are half a dozen articles about WINE running and supporting old Windows programs better than Windows 10.

                                                                Examples:

                                                                “I have a few really old Windows programs from the Windows 95 era that I never ended up replacing. Nowadays, these are really hard to run on Windows 10.”.

                                                                “Windows 10 does not include a Windows XP mode, but you can still use a virtual machine to do it yourself.”

                                                                I specifically remember there being a shitshow when Windows 10 came out because many applications that run under Wine straight up didn’t work anymore.

                                                                Try again.

                                                                1. 8

                                                                  Sure, we can play this game of hearsay, but it’s hard to dispute that if you have an application from 1993, Windows 10 will almost certainly be likelier to run that binary than almost any other OS would be - and it does so with dynamic linking.

                                                                  Not to discredit Wine, they do a lot of great, thankless work. I’m more shocked that the claim I’m replying to was made, since it seems ignorant of the actual situation with Windows backwards compatibility and made only to score a few points for a pet theory.

                                                                  1. 2

                                                                    I’m more shocked the claim I’m replying to was made, since it seems like it was ignorant of the actual situation with Windows backwards compatibility to score a few points for their pet theory.

                                                                    I’ve never been too interested in Windows as a platform, what I do know is that a whole pile of people in my social group and the social groups I listen to, who use old Windows programs frequently, were ridiculously frustrated that their programs no longer work. And it became a case of “Windows programs I want to run are more likely to work on WINE than they are on Windows”.

                                                                    Sure, that has since been mitigated, but that doesn’t change the fact that for a time, WINE did run programs better than Windows. I’m deeply hurt by the idea that you think it was made to score points.

                                                                    1. 1

                                                                      I’m deeply hurt by the idea that you think it was made to score points.

                                                                      No, I referred to the parent of my initial comment.

                                                                  2. 5

                                                                    Would Wine have ever worked if all Windows programs were statically linked?

                                                                    1. 5

                                                                      Wine does take advantage of dynamic linking a lot (from subbing in Microsoft versions of a library to being able to sub in a Wine version in the first place)

                                                                      1. 1

                                                                        I think, yes. The more interesting question is, would Wine be easier to write if Windows programs were statically linked. My initial guess is yes, because you can ignore a lot of the system and just sub out the foundations. However, I do know that the Windows team did a lot of really, really abysmal things for the purpose of backwards compatibility, so who knows what kind of monstrosity wouldn’t run on static-Windows Wine simply because of that?

                                                                        We’ll never know.

                                                                        1. 1

                                                                          How would you even write Wine if Windows programs were statically linked? As far as I know, Wine essentially implements the system DLLs and dynamically links them to each exe. Without that, Wine would have to implement the kernel ABI and somehow intercept syscalls from the exe. It can be done, that’s how gVisor works, but that sounds harder to me.

                                                                          1. 1

                                                                            I am very likely wrong (since they didn’t decide to go this route in the first place) but I feel that it might be easier to do that. The Kernel ABI is likely a much smaller surface to cover and you have much, much more data about usage and opportunities to figure out the behaviour of the call. As opposed to a function that’s only called a handful of times, kernel calls are likely called hundreds of times.

                                                                            Of course, this doesn’t account for any programs that do or rely on some memory / process / etc. weirdness. Which I gather is probably a lot, given what Chen put down in The Old New Thing.

                                                                  3. 4

                                                                    ancient Linux binaries still run - but only if they were statically linked

                                                                    Or if you have a copy of the whole environment they ran in.

                                                                    I guess that’s more common in the BSD world — people running FreeBSD 4.x jails on modern kernels.

                                                                    1. -9

                                                                      Proprietary software is bullshit and can be safely disregarded.

                                                                      Ah yes, the words of someone who doesn’t use computers to do anything anyone would consider useful.

                                                                      1. 12

                                                                        I disagree with @ddevault’s position, but can we please not let the discussion degenerate this way? I do think the work he’s doing is useful, even if I don’t agree with his extreme stances.

                                                                        1. -3

                                                                          I don’t give leeway to people who are abusive.

                                                                          1. 21

                                                                            But responding with an obvious falsehood, in such a snarky tone, just causes tensions to rise. Or do you truly believe that nothing @ddevault does with computers is useful?

                                                                            I think a more constructive response would be to point out that @ddevault is very lucky to be in a position where he can do useful work with computers without having to use proprietary software. Most people, and probably even most programmers (looking at the big picture), don’t have that privilege. And even some of us who could work full-time on free software choose not to, because we don’t all believe proprietary software is inherently bad. I count myself in the latter category; I even went looking for a job where I could work exclusively on free software, got an offer, and eventually turned it down because I decided I’m doing more good where I’m at (on the Windows accessibility team at Microsoft). So, I’m happy that @ddevault is able to do the work he loves while using and developing exclusively free software, but I wish he wouldn’t be so black-and-white about it. At the same time, I believe hyperbolic snark isn’t an appropriate response.

                                                                            1. 12

                                                                              Much of my career was spent writing “bullshit” software which can, apparently, be “disregarded”. This probably applies to most of us here. Being so disrespectful and dismissive of people’s entire careers and work is more than just “incorrect” IMHO.

                                                                              I like the word “toxic” for this as it brings down the quality of the entire conversation; it’s toxic in the sense that it spreads. I don’t want to jump to mdszy’s defence here or anything, and I agree with your response, but OTOH … you know, maybe not phrase things in such a toxic way?

                                                                            2. 3

                                                                              If I had to add a tag to those comments I’d use ‘idealist’ and that’s not necessarily bad. What do you find abusive in his comments?

                                                                              1. 2

                                                                                Abuse is about a mixture of effect and intent, and it depends on the scenario and the types of harm that are caused to determine which of those are important.

                                                                                I don’t think ddevault’s comment was abusive, because of the meaning behind it, and because no harm has been caused. I think the meaning of “Proprietary software is bullshit and can be safely disregarded” was, “I can’t interact or talk about proprietary software in a useful way, so I’ll disregard it”. The fact that it was said in an insulting form doesn’t make it a form of abuse, especially in context.

                                                                                In context, software made proprietary, is itself harming people who are unable to pay for it, and in a deep way. It’s also harming the way we interact with computers, and stifling innovation and development severely. I don’t think insulting proprietary software, which is by far the most dominant form of software, and the method of software creation that is supported by the deeply, inherently abusive system known as “capitalism”, that constantly exploits and undermines free software efforts, can be meaningfully called abuse when you understand that context. And I think people who are so attached to working on proprietary software that they get deeply hurt by someone insulting it, should take a good long introspective period, and rethink their attachment to it and why they feel they need to protect that practice.

                                                                                1. 1

                                                                                  Proprietary does not mean that it costs money.

                                                                                  1. 2

                                                                                    Of course not, but monetarily free software that does not provide the source code is worse, because there’s literally no excuse for them not to provide it. They do not gain anything from not providing the source code, but still they choose to lock users into their program, they do not allow for inspection to ensure that there is no personal data being read from the system, or that the system is not altered in harmful ways. They do not allow people to learn from their efforts, or fix bugs in what will soon be an unmaintained trash heap. And they harm historical archival and recovery efforts immensely.

                                                                                    Every example of “Monetarily free but proprietary software” that I can think of, either does very, very dubious things (like I-Orbit’s software, which is now on most malware scanners lists), or is old and unmaintained, and the only reason why people use it is because either they’re locked into it from their prior use, or because it is the only thing that does that task. Those people will experience the rug being pulled from under them after a year or two as it slowly stops working, and might never be able to access those files again. That, is a form of abuse.

                                                                                    1. 0

                                                                                      This is absolutely not as much of a massive societal issue as you make it seem. Perhaps spend your time thinking about more important things.

                                                                                      1. 1

                                                                                        That’s a nice redirect you have there. Flawlessly executed too, I literally would not have noticed it if I did not have intimate experience with the way abusers get you off topic and redirect questions about their own actions towards other people.

                                                                                        Anyway, I’ll bite.

                                                                                        I live with two grown adults, neither of which touch computers except when they absolutely have to, and I have observed the mental strain that they go to because programs they spent decades using, and had a very efficient workflow with, have stopped working. I also know dozens of other people who experience the same thing.

                                                                                        One of them literally starts crying when they have to do graphics work, which is part of their job as an artist, because there’s not enough time in the day for them to learn newer image editors, and because all of the newer options for use that actually do what they need, are ridiculously intimidating, badly laid-out, and work in unexpected ways with no obvious remedy, and conflicting advice from common help-sources. True, this could (and should) be solved by therapy, but it’s foolish to disregard the part that proprietary software has to play in this. Maybe you just don’t live around people whose main job is not “using a computer”?

                                                                                        I do not see what you have invested in proprietary software, such that you feel the need to call someone’s offhand insult against it, “abusive”.

                                                                                        1. 1

                                                                                          Kindly tell me more about how anyone who isn’t neurotypical has been welcomed with open arms into FOSS communities. I’ll wait.

                                                                                          1. 2

                                                                                            I myself am a neuro-atypical and queer software developer. Do you want to talk down to me some more?

                                                                                            Again you are redirecting the question towards a different topic. The topic we were originally talking about is “Is insulting proprietary software abusive”, and now you want to talk about “Queer and Neuro-atypical acceptance in Free Software communities”.

                                                                                            You still haven’t told me how insulting proprietary software is abusive. I’m still very interested in reading your justification for that.

                                                                                            Just because the culture that’s grown around free software (and, to be honest, that free software has grown around) is very, very shitty, doesn’t mean that non-free software is good, or something worthy of protection. The culture around free software is fundamentally one of sharing, that’s literally the core tenet. The culture around proprietary software is worse, since it’s literally only about gate-keeping, that’s the only foundation it has. Free software can be improved by changing the culture. There is nothing to change about proprietary software.

                                                                                            It’s a real shame that many of the more prolific founders of free software were libertarians, but that is still a mistake that we can correct through social, cultural changes and awareness.

                                                                                            Proprietary software is fundamentally an offshoot of Capitalism, and wouldn’t exist without that. It literally only exists under an abusive system, and supports it. The contributions of free software members are preyed upon by capitalist companies for gain, so that they can profit off the backs of those people without giving back.

                                                                                            1. 1

                                                                                              Fun fact: not once did I say that ddevault is abusive by saying proprietary software is shit. He’s just abusive. I’ve witnessed him be abusive to my friends by saying they’re awful people for using non-free software.

                                                                                              Fuck capitalism, fuck ddevault.

                                                                                              1. 1

                                                                                                Fun fact: not once did I say that ddevault is abusive by saying proprietary software is shit. He’s just abusive. I’ve witnessed him be abusive to my friends by saying they’re awful people for using non-free software.

                                                                                                Fuck capitalism, fuck ddevault.

                                                                                                Ah! I didn’t pick up on that, sorry!

                                                                                                1. 1

                                                                                                  I apologize as well.

                                                                                2. -1

                                                                                  Labeling ddevault’s position as abusive is itself abusive, even if you think his position is wrong.

                                                                                  1. 1

                                                                                    I don’t think someone who genuinely believes that someone was being abusive, and calling that out, can themselves be called “abusive”. Abuse is about a mixture of effect and intent, and it depends on the scenario and the types of harm that are caused to determine which of those are important.

                                                                                    I don’t think ddevault’s comment was abusive, because of the meaning behind it, and because no harm has been caused. I think the meaning of “Proprietary software is bullshit and can be safely disregarded” was, “I can’t interact or talk about proprietary software in a useful way, so I’ll disregard it”. The fact that it was said in an insulting form doesn’t make it a form of abuse, especially in context.

                                                                                    In context, software made proprietary, is itself harming people who are unable to pay for it, and in a deep way. It’s also harming the way we interact with computers, and stifling innovation and development severely. I don’t think insulting proprietary software, that is supported by the deeply, inherently abusive system known as “capitalism”, that is by far the most dominant form of software, and the method of software creation that exploits and undermines free software efforts, can be meaningfully called abuse when you understand that context. And I think people who are so attached to working on proprietary software that they get deeply hurt by someone insulting it, should take a good long introspective period, and rethink their attachment to it and why they feel they need to protect that practice.

                                                                          2. 1

                                                                            How many binaries from Windows 95 are useful today? I’m not sure that’s a strong argument.

                                                                            Software that is useful will be maintained.

                                                                            1. 3

                                                                              This is a short-sighted argument. Obscure historic software has its merit, even if the majority of people won’t ever use it.

                                                                          1. 2

                                                                            I also have something like that:

                                                                            https://github.com/jacereda/fsatrace

                                                                            And also started a FUSE filesystem but the above tool was enough for my needs and lost interest:

                                                                            https://github.com/jacereda/traced-fs

                                                                            1. 1

                                                                              I would consider this if there’s a Nix binary cache for musl-compiled packages… Does such a thing exist?

                                                                              1. 2

                                                                                The official nixpkgs has pkgsCross.musl64 I think, not sure if everything is cached though.

                                                                                1. 1

                                                                                  I was completely unaware of Strand, many thanks!

                                                                                1. 5

                                                                                  At least once a week I spend some time trying to build simple things in a mind stretching language. Lately that’s been APL for me, I find APL challenging and fun! I spent several months learning how to write code with Joy ( https://en.wikipedia.org/wiki/Joy_(programming_language) ) and that was equally mind bending.

                                                                                  What else is on the edges? I have a job writing Haskell, and I got paid for decades of Python. What other brain stretching languages should I try?

                                                                                  1. 6

                                                                                    One that’s personally been on my list for too long is miniKanren. There’s a video that showed writing a program most of the way, then putting constraints on the possible output, which generated the rest of the code. It blew my mind and it’s sad I haven’t gotten a chance to dive in yet. Plus Clojure’s core.logic is basically an implementation of miniKanren and has a lot of users, so it looks like there’s actual use of it in the “actually gets things done” parts of the software world, which is always nice.

                                                                                    1. 4

                                                                                      You might like Strand. It’s a fun language to play around with and the Strand book “Strand: New Concepts for Parallel Programming” is a great read.

                                                                                      1. 2

                                                                                        Agreed, I’m really enjoying this one.

                                                                                      2. 5

                                                                                        I did a tweetstorm on interesting obscure languages I’ve been meaning to try! Check it out here: https://twitter.com/hillelogram/status/1243599545218596864?s=20

                                                                                        1. 3

                                                                                          I’d say something like Unison or Darklang. Solidity, too.

                                                                                          1. 3

                                                                                            Pure has been on my list for a while; I just can’t think of anything specific I want to do with it, and I haven’t been motivated to just work through something like 99 problems.

                                                                                            1. 3

                                                                                              Joy and other concatenative languages have been a pet favourite of mine, and it is fun to play around with them. Here was one of my attempts to clothe PostScript in a concatenative-ish skin.
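
                                                                                              For anyone who hasn’t played with a concatenative language: the whole evaluation model fits in a few lines. Here’s a rough Python sketch (the word set is invented for illustration) — programs are flat lists of words and literals, and composing two programs is just concatenating them.

```python
# Tiny concatenative (Joy-like) evaluator sketch (illustrative only).

def run(program, stack=None):
    """Evaluate a list of words/literals against a stack; return the stack."""
    stack = list(stack or [])
    words = {
        "dup":  lambda s: s.append(s[-1]),
        "swap": lambda s: s.__setitem__(slice(-2, None), [s[-1], s[-2]]),
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
    }
    for w in program:
        if w in words:
            words[w](stack)      # known word: mutate the stack
        else:
            stack.append(w)      # anything else pushes itself
    return stack

# "square" is literally the program [dup, *]; composition is concatenation:
print(run([5, "dup", "*"]))  # [25]
```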

                                                                                            1. 1

                                                                                              Could something like this be accomplished in Zig via comptime?

                                                                                              1. 1

                                                                                                In theory, you could do it with any static language if you write a verification-condition generator that integrates with Why3 or Boogie; they do the proving. A language with metaprogramming might need an initial pass that does compile-time evaluation. Metaprogramming and dynamic languages are difficult in general. Worst case, you can use subsets and/or annotations to aid the analyses.

                                                                                                1. 2

                                                                                                  That reminds me of the different approaches to handling declarative dependencies in Nix (in my case that’s mostly Haskell libraries with version bounds):

                                                                                                  • One approach is to have our Nix function (e.g. buildHaskellPackage) implement a constraint solver, which reads in version bounds from each version of each dependency, and picks a mutually-compatible set of dependencies to build.
                                                                                                  • A more practical approach is to just shell-out to an existing solver (cabal in Haskell’s case) and parse its output.

                                                                                                  Whether such build-time analysis is performed “within” the language via macros or externally as a separate step of the build process, the same solvers can be called and the end result is the same. (For checkers like this, there’s also nothing to parse: if a solution is found, we throw it away and carry on to the next step; if not, we exit with an error message.)
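
                                                                                                  A hedged sketch of the “shell out to an existing solver” approach in Python — the command and output format here are placeholders, not cabal’s actual interface:

```python
# Build step: delegate dependency solving to an external tool.
import subprocess

def solve(cmd):
    """Run an external solver; return its chosen versions, one per line."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # No consistent set of dependencies: fail the build loudly.
        raise RuntimeError("dependency solving failed:\n" + result.stderr)
    return result.stdout.splitlines()
```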

                                                                                                  I used to dislike the thought of performing I/O at compile time, but I’m seeing more and more compelling use cases: shelling out to competent solvers is one; “type providers” like F#’s are another (where types can be generated from some external source of truth, like a database schema, ensuring out-of-date code fails to build). One I’ve used recently was baking data into a binary, where a macro read it from a file (aborting on any error), parsed it, built a data structure with efficient lookups, and wrote that into the generated AST to be compiled. This reduced the overhead at runtime (this command was called many times) and removed the need for handling parse errors, permission denied, etc.
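
                                                                                                  A rough Python analogue of that baking step (the file names and format are invented for illustration): a generator script run as a build step, so the runtime code only imports a precomputed table and never has to handle parse or I/O errors.

```python
# Build-time "baking": parse once, emit an importable module.

def bake(src_path="lookup.csv", out_path="baked.py"):
    """Read key,value pairs and write them out as a Python module."""
    table = {}
    with open(src_path) as f:        # any I/O or parse error aborts the *build*
        for line in f:
            key, value = line.strip().split(",")
            table[key] = value
    with open(out_path, "w") as f:   # runtime just does: from baked import TABLE
        f.write("TABLE = %r\n" % table)
```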

                                                                                                  1. 2

                                                                                                    Yeah, integration with external tools can help in all kinds of ways. The simplest is static analysis to find code-level issues the compiler can’t find. I really like your idea of baking the data into a binary. It’s like the old idea of precomputing what you can, mixed with synthesis of efficient data structures. That’s pretty awesome.

                                                                                                    Actually, I’ve been collecting, and occasionally posting, stuff like that for formal methods. Two examples were letting someone specify a data structure in a functional way, or modify/improve loops; an external tool then does a pass to produce an equivalent, high-performance, imperative implementation of either and dumps it out as code. Loop and data structure examples. Imperative/HOL’s technique, if generalizable, could probably be applied to languages such as Haskell and Rust.

                                                                                              1. 4

                                                                                                The only thing I wish is that the syntax were more C-like and less Rust-like. I also wish it didn’t depend on the Rust runtime. Other than that it’s nice, and I’ve been thinking about doing something similar for a while, although syntax-wise more in the C-with-ML direction than the Rust direction.

                                                                                                1. 1

                                                                                                  Does it depend on the Rust runtime? The readme states explicitly that it just depends on having a C compiler for the target platform.

                                                                                                  1. 8

                                                                                                    Author should probably use sized types or static asserts for a structure that is supposed to have a concrete size.
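
                                                                                                    In C that check would be a `_Static_assert` on `sizeof`; here’s a rough Python/ctypes analogue of the same idea (the field layout below is invented purely for illustration):

```python
import ctypes

class Header(ctypes.Structure):
    _pack_ = 1  # disable padding so the size is fully determined by the fields
    _fields_ = [("magic",  ctypes.c_uint32),
                ("length", ctypes.c_uint16),
                ("flags",  ctypes.c_uint16)]

# Fails at import time if the layout ever drifts from the intended 8 bytes.
assert ctypes.sizeof(Header) == 8, "Header must be exactly 8 bytes"
```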

                                                                                                      1. 9

                                                                                                        Starting Forth and Thinking Forth.

                                                                                                        1. 3

                                                                                                          Thinking Forth was one of only two books that fundamentally changed how I approach programming, I highly recommend it (and copies can be found on the Internet). The other book was Writing Solid Code (even if it was written by Microsoft).