1.  

    I would consider this if there’s a Nix binary cache for musl-compiled packages… Does such a thing exist?

    1.  

      Similar to the way people tried to use Objective-C as a pure OOPL, which is significantly slower than even a half-decent Smalltalk, except less expressive, less interactive and more crashy

      An Objective-C program written in a purely Smalltalk style will be slower than the equivalent written in something like Animorphic Smalltalk (though still noticeably faster than something like Squeak), but that isn’t what happens in practice. Even programs that are mostly high-level OO code written in Objective-C use classes that wrap carefully optimised C/C++ libraries. A typical Mac application spends the vast majority of its time in things like the font and graphics rendering engines, which are C.

      I’ve written about this before, but I still believe that the thing that killed Smalltalk was the lack of a good way of interworking with non-Smalltalk code. A lot of core bits of Smalltalk were built around the image model, but this did not work with stateful non-Smalltalk libraries. Smalltalk never had a good abstract machine for reconciling the Smalltalk image view of the world with the C process view of the world and supporting both together. With Étoilé, we tried to build an object persistence model that gave the advantages of the Smalltalk image model but played nicely with C. If Smalltalk had gone in that direction in the early ’90s, it might still be a big player today.

      1.  

        I am an “advanced” Linux user and I used to run highly customized environments (Arch, etc). But I too have come to prefer bog standard machines that “just work” like Ubuntu and Pop OS.

        Yes, I’ve had the same experience. I used to run Arch everywhere, but I stopped because I just didn’t have time. Take printing, for example: I could print just as well on Arch as on e.g. Ubuntu, but on Arch I would first have to spend two hours reading the wiki. The problem was that every little thing was like that.

        That isn’t a knock on Arch, to be clear; I still have tremendous respect for Arch and I learned a ton from running it. It’s just not for me anymore, and that’s okay! Arch has no aims to be popular, after all ;)

        1.  

          it pioneered the idea of a language virtual machine

          It did this totally by accident. The Xerox Alto ran bytecode. The instruction fetch unit grabbed a byte and then ran the set of microcode instructions in the look-up table associated with that byte. When you implemented a language for the Alto, you provided a microcode table and wrote a compiler that targeted the bytecode that you’d created. Smalltalk and Algol both did this. Implementations of Smalltalk on other systems initially just wrote emulators for the Alto’s bytecode (and Squeak / Pharo are largely still doing this).

          The Alto’s microcode engine was designed to do hardware-software co-design on ISAs. The idea was to be able to prototype ISAs and then later build CPUs that were optimised for the set of micro-ops that compilers wanted to generate. This didn’t happen and the Smalltalk VM as a portable construct came about by accident instead.
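
          The dispatch scheme above is easy to sketch in software: fetch a byte, then jump through a lookup table to the handler for that byte. Here is a minimal Python sketch; the opcodes and handlers are invented for illustration and are not the Alto’s actual instruction set.

          ```python
          # Table-driven bytecode dispatch, loosely analogous to the Alto's
          # fetch-a-byte-then-run-its-microcode scheme. Opcodes are invented.

          def op_push(vm, operand):          # push a literal onto the stack
              vm["stack"].append(operand)

          def op_add(vm, _):                 # pop two values, push their sum
              b, a = vm["stack"].pop(), vm["stack"].pop()
              vm["stack"].append(a + b)

          # The "microcode table": one handler per bytecode value.
          HANDLERS = {0x01: op_push, 0x02: op_add}

          def run(program):
              vm = {"stack": []}
              for opcode, operand in program:    # instruction fetch
                  HANDLERS[opcode](vm, operand)  # dispatch via the lookup table
              return vm["stack"]

          # (2 + 3) compiled to this bytecode: push 2, push 3, add
          print(run([(0x01, 2), (0x01, 3), (0x02, None)]))  # [5]
          ```

          A real bytecode engine would also keep an explicit program counter that handlers can modify for jumps; a simple loop is enough here.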

          1.  

            Not the main topic but I just discovered libtls-bearssl and it looks super cool! It’s not easy to distribute the original libtls since it doesn’t really work with OpenSSL, and using the raw OpenSSL API is pure madness.

            1.  

              Oh well, I hope they don’t take the andrewchambers part next.

              1.  

                You have made significant progress. I am sure it will also help the Janet community. Thank you!

                1.  

                  My colleagues and I do this with our ThinkPads as well. Lenovo provides a lot of firmware to the Linux Vendor Firmware Service: https://fwupd.org/lvfs/vendors/

                  1.  

                    Facebook also recently introduced Hermes for React Native: https://reactnative.dev/docs/hermes

                    1.  

                      This isn’t a comment on the skills of the developers; it’s a critique of the quality and appropriateness of the tooling.

                        1.  

                          Interesting. Thank you :)

                          1.  

                            I don’t believe a word of it. They said the same thing two years ago, and still nothing.

                            1.  

                              I did the same thing on my T480s (also running Linux/UEFI) yesterday without issues, so it’s most likely a more complicated problem than “only Windows is supported”.

                              1.  

                                This often takes more time than just building the stuff you need manually and linking to system libraries.

                                In the medium term I want to make it easy for someone to get access to a remote build on an extremely powerful build machine; Google currently offers 96 cores cheaply at spot prices. These could potentially help in such situations.

                                For me, the most expensive Hermes package (gcc) builds in about 4 minutes on my desktop. It is definitely an annoyance at times, and one I want to solve.

                                I also want to set up a way to export Hermes packages as AppImages that can work at any path.

                                1.  

                                  Thanks! Yeah, that seems like a fair comparison. The idea for that stemmed from dissatisfaction with how typical Linux distributions split up source packages into several binary packages (if they even do that at all). With this approach, you select the contents based on whatever criteria you want. Anything that doesn’t get selected doesn’t even get built. Due to the use of static linking, you don’t really have to worry about runtime dependencies. This gives you a lot of control depending on your use case. For example, on my VPS, I use something like

                                  fs = {
                                  	-- I need development files from these libraries to rebuild the kernel
                                  	{'linux-headers', 'musl', 'ncurses', 'elftoolchain', 'libressl', 'zlib'},
                                  	-- I want the st terminfo file, but I don't need st itself
                                  	{'st', include={'^share/terminfo/'}},
                                  	{
                                  		sets.core, sets.extra,
                                  		'acme-client', 'dnssec-rr', 'make', 'nginx', 'nsd', 'pounce',
                                  		exclude={'^include/', 'lib/.*%.a$'},
                                  	},
                                  }
                                  

                                  On my desktop, I use fs = {exclude={}}, which builds every package, excluding nothing.

                                  I’m not using anything like crunchgen, so everything carries a copy of everything it links to. However, due to the use of lightweight packages, most binaries are really small anyway. Only a few packages such as mpv or mupdf which link in a lot of libraries have huge binaries (and by huge I still mean < 10M).

                                  Yes, I’m a big fan of NetSurf. It’s quite a capable browser considering their resources. Unfortunately, more and more sites require the monstrosity that is the modern web browser, so I installed Firefox via pkgsrc for those.

                                  1.  

                                    Ironically, I installed BIOS and Intel ME updates from Lenovo this morning using fwupdmgr update, something I’ve done many times before on my T480s.

                                    Except this time around, it wiped everything except the preinstalled ‘Windows Boot Manager’ entry from my UEFI boot order list. That left me unable to boot after the firmware update completed, until I fished out a USB drive with an Arch ISO so I could re-run grub-install and restore the entry.

                                    To me, this means they simply didn’t test the update with Linux/UEFI systems. I’ll give them the benefit of the doubt and assume they did check BIOS boot, given it’s still more common.

                                    I hope they sort out this sort of issue as a part of this ‘certification’ process!

                                    1.  

                                      I was wondering about this as well. Both Nix and Hermes advertise per-user installation alongside a system package manager. This especially comes in handy if you’re on a system where you don’t have root access, but then you can’t create a store at the standard location and thus have to build everything from source. This often takes more time than just building the stuff you need manually and linking to system libraries.

                                      I suppose absolute paths (usually into /usr/lib, /usr/share and so on) are very common. I believe AppImages enforce binary-relative paths, which might work here as well, but would mean lots of extra work with packaging. Detecting absolute paths is easy, but patching them out is not.
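
                                      As a small illustration of the “detecting is easy” half: scanning a binary for printable, NUL-delimited strings that start with known absolute prefixes takes only a few lines. This is a rough Python sketch; the prefix list is just an example, not exhaustive.

                                      ```python
                                      import re

                                      def find_absolute_paths(data: bytes):
                                          """Report strings in raw binary data that look like
                                          absolute paths. Prefix list is illustrative only."""
                                          prefixes = (b"/usr/", b"/etc/", b"/lib/", b"/share/")
                                          # printable runs of at least 4 bytes, like strings(1) extracts
                                          candidates = re.findall(rb"[\x20-\x7e]{4,}", data)
                                          return [s.decode() for s in candidates if s.startswith(prefixes)]

                                      blob = b"\x7fELF...\x00/usr/share/terminfo\x00not/a/path\x00"
                                      print(find_absolute_paths(blob))  # ['/usr/share/terminfo']
                                      ```

                                      Patching is the hard part: a longer replacement shifts every byte after it, so in-place edits only work when the new path is no longer than the old one, which is why some packaging systems build with deliberately long placeholder prefixes and pad with NULs.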

                                      1.  

                                        LibreSSL was awesome at the beginning, when it was 100% compatible, but it sucks that the library name is the same, so you can’t use both at once.

                                        I recently ran into this: a big C++ project with two dependencies, one of which has a hard dependency on OpenSSL. Good luck. We were quite happy with Void Linux and LibreSSL, but now it’s all on CentOS (for this project), mostly because of the SSL thing.

                                        No, I still have nothing against LibreSSL as a hobbyist FLOSS project. But if I dread working on something because I’ve spent days needlessly fixing stuff, and I can’t simply change it because I’m just a cog in the machine who doesn’t get to decide these things… pass. Actually, I probably also wouldn’t waste my free time on that.

                                        1.  

                                          then I hope you’re not using Java.

                                          It all depends on the language you’re using. I’ve used some where 80 is completely fine. And sometimes you kinda need 100 to not get an unreadable mess.