Threads for ciprian_craciun

  1. 1

    That’s pretty cool. It being FUSE-based sounds like it might also work on other file systems and operating systems. Or would there be anything preventing that?

    1. 3

      Theoretically it could work anywhere FUSE is supported.

      However, there is a limitation that muxfs introduces which in my opinion makes it less usable: it requires stable inode numbers, because it stores the checksums tied to those inode numbers. Thus you won’t be able to use it with any backing file-system that doesn’t provide such stable inodes, which it seems includes NFS and other network-based file-systems, as well as other FUSE-based ones.

      (Also, based on the discussions on HN about the same article, it seems that the author relies quite a bit on the particularities of the FUSE implementation on OpenBSD, for example the fact that FUSE is single-threaded there. Thus porting might have to contend with a few issues such as these.)

    1. 5

      Given that the article touches on many non-mainstream browsers, I think special consideration should have also been given to console browsers like lynx, w3m, and others. I know almost nobody uses one of these to browse the internet these days, but they might be used by some automated tools to ingest your content for archival or quick preview.

      From my own experience it’s quite hard to get a site to look “good” in all of these, as each has its own quirks. Each renders headings, lists, and other elements in quite different ways. (In my view w3m comes closer to a “readable” output, meanwhile lynx plays a strange game with colors and indentation…)

      For example I’ve found that using <hr/> is almost a requirement to properly separate various sections, especially the body of an article from the rest of the navigation header / footer. (In fact I’ve used two consecutive <hr/>s for this purpose, because the text might include a proper <hr/> on its own.)


      On a related topic, a note regarding how the page “looks” without any CSS / JS might also be useful. (One can simulate this in the browser by choosing the View -> Page Style -> No Style option.)

      As with console browsers, I’ve observed that sometimes including some <hr/>s makes things much more readable. (Obviously these <hr/>s can be given a class and hidden with CSS in a “proper” browser.)

      1. 4

        I know almost nobody uses one of these to browse the internet these days

        I find them essential when on a broken/new machine which doesn’t have X11 set up correctly yet. Or on extremely low-power machines where Firefox is too much of a resource hog. Especially mostly-textual websites should definitely be viewable using just these browsers, as they may contain just the information needed to get a proper browser working.

        1. 4

          I was actually recently showing other members of the team that they will write better markup and CSS if they always test with a TUI browser and/or with styles disabled in Fx, after doing it myself for a few years now. It will often lead to better SEO too, since non-Google crawlers will not be running that JS you wrote.

          Netsurf is still a browser to consider too.

          1. 4

            Er, sort of. There are lots of great reasons to test in a textual browser, but “accessibility” is lower on that list than most people realize. It’s easy for sighted users to visually skip over blocks of content in a TUI or GUI, but the content needs to be semantic for assistive technologies to do the same.

            I consider textual browsers a “sniff test” for accessibility. They’re neither necessary nor sufficient, but they’re a quick and simple test that can expose some issues.

            I do absolutely advocate for testing with CSS disabled; CSS should be a progressive enhancement.

        1. 7

          I’ve mostly re-written this article since the last time it was submitted (the canonical URL changed but a redirect is in place).

          I’ve shifted much of its focus to accessibility. Accessibility guidance tends to be generic rather than specific, and any information more specific or detailed than WCAG is scattered across various places.

          Feedback welcome. I’m always adding more.

          1. 4

            I’ve quickly skimmed through your article, stopping mainly at the sections that interest me, and I would have liked it to be split into a series of more focused articles / pages. Right now it’s hard to see where one section ends and another begins.

            All in all, I’ve found quite a bit of good advice in there. Thanks for writing it!

            1. 2

              You may want to add consistent hashing

              1. 2

                I thought really hard about the topic of consistent hashing, and I’ve decided not to include it (at the moment of writing) because it’s not actually a hashing algorithm (or even a class of such algorithms); instead it’s a particular usage of other hashing algorithms (i.e. more of a use-case).

                In fact, just like consistent hashing is closely related to hashing, so are other topics like content addressing or various load-balancing schemes that rely on hashing (as opposed to load).

                I’ll wait for some more feedback on the topic, and perhaps in the end I’ll add a hint to these topics in a separate section.

                Thanks for the feedback.

              1. 2

                For what the OP calls “shuffling hashes”, there are at least two use cases with different goals. If you want multiple processes to get the same hash over time and space (a distributed system and/or hashes stored persistently) you want a hash with consistent output, like highway hash. In contrast, if your hash is an implementation detail of a single-process in-memory hash map, say, you don’t care if its representation changes over time. https://github.com/tkaitchuck/aHash/blob/master/compare/readme.md has a good discussion of this.

                https://en.wikipedia.org/wiki/Metaphone is a family of soundex alternatives. https://en.wikipedia.org/wiki/Category:Phonetic_algorithms lists a few more.

                1. 2

                  If you want multiple processes to get the same hash over time and space (a distributed system and/or hashes stored persistently) you want a hash with consistent output, like highway hash. In contrast, if your hash is an implementation detail of a single-process in-memory hash map, say, you don’t care if its representation changes over time.

                  Indeed this is correct, and I’ll have to think about how to express this in the context of the article, without getting bogged down in implementation details… (Because all algorithms linked there, including the keyed ones, are deterministic. It is just a matter of which / how you use them.)
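
                  (For example, with off-the-shelf tools: an unkeyed hash is stable everywhere, while a keyed one is still deterministic given the key, yet effectively “unstable” across runs if the key is a random per-process seed. A small illustration, assuming sha256sum and openssl are available:)

                      # a plain (unkeyed) hash: the same digest on every run, machine, and version of the tool
                      printf 'hello' | sha256sum

                      # a keyed hash (HMAC-SHA256 here): deterministic for a given key, but if the key is a
                      # random per-process seed, the output is effectively unstable across runs
                      printf 'hello' | openssl dgst -sha256 -hmac 'seed-1'
                      printf 'hello' | openssl dgst -sha256 -hmac 'seed-2'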

                  Thanks for the feedback.

                1. 7

                  Whine: would be cool if stuff could have names that are descriptive of what they do, rather than have an entire aesthetic. pick is great, cuz it’s about picking stuff!

                  But this looks very nice. It feels like it really balances nice aesthetic results with not being filled with emoji everywhere. I’m going to see if I can integrate this into my workflows

                  1. 4

                    It feels like it really balances nice aesthetic results with not being filled with emoji everywhere.

                    I second that! I was afraid that, in line with the recent trend of “emojification” that I’ve seen in many tools, this would have been full of those, but it seems I was wrong.

                    The only complaint I have about the aesthetic is that it’s a bit too padded… I don’t know how it will integrate visually with the output of other tools and echoes when integrated into scripts.

                  1. 1

                    I think I’m going to try this. I’m helping out a scientist with some workflow automation.

                    1. 1

                      If you have any questions, just let me know; I’m open to helping with integrating / using it (free of charge, especially given it’s about science).

                      BTW, I’ve already used z-run with my own https://github.com/cipriancraciun/covid19-datasets “data science project”.

                      1. 1

                        Will do. Looking forward to giving it a shot.

                    1. 1

                      I just wanted to make a small comparison with just: although the readme says it’s not a build system, I consider it closer to make than a general-purpose “command runner”.

                      So while both z-run and just provide a way to write small code snippets in the same file and then easily execute them, I would say z-run sets itself apart from just by being more generic:

                      • just has recipes that can depend upon each other, and multiple recipes can be invoked in the same CLI invocation; on the other hand z-run doesn’t have support for this, it basically provides a way to just call another “scriptlet” (as I call them), and it’s the job of the developer to properly chain dependencies (and detect whether they need to be executed or not); (just see how positional arguments are handled in just and you can see they are a special case;)

                      • just defines a kind of template language that preprocesses the recipe body, especially for argument replacement; z-run doesn’t touch the body of the scriptlet; (on the other hand z-run does support templating, one example being the built-in Go text/template based one, but it has to be explicitly invoked by the user;)

                      • z-run has the built-in possibility of running scriptlets via ssh on a remote host (allowing those scriptlets to call other scriptlets, but also remotely;)

                      • z-run has built-in support for, and is optimized for, generating the scriptlet sources dynamically (especially for having a very explicit fzf-based menu of potential options);

                      • z-run‘s UI (if one can call it that) leverages an fzf-based workflow: it presents you with all the options (possibly organized into menus), and you don’t need auto-completion; (here is where dynamic source generation comes into play: one writes generic scriptlets that accept various arguments, just like any other script, but then one takes all the ways in which they can be called, which is usually a small list of options, and generates scriptlets that just call the generic one;)

                      1. 1

                        Yet another program that re-implements shell scripts.

                        If I were to use one of these kinds of tools I’d probably use Just, but I’ve never seen a reason to.

                        1. 3

                          On the contrary: z-run, just like just (which I’ve looked at), doesn’t “re-implement shell scripts”; instead they build upon shell scripts and cover some corner use cases that shell scripting doesn’t solve easily. For example:

                          • (in the case of just) dependencies between shell scripts;

                          • (both z-run and just) easily mixing multiple scripting languages inside the same source file; (you can get this with here-documents in plain shell, but it doesn’t work for complex pipelines, nor does it play nicely when you also need stdin;)

                          • (both z-run and just) modularity – it’s quite hard to implement modular shell scripts; functions and sourcing are two solutions, but given the shell’s dynamic scoping when it comes to (environment) variables (unless one uses local) you quickly get into trouble; (also some “sanity” flags like set -e don’t apply to functions automatically;) thus my own pattern is having one large script file with case "${command}" in ( name-that-otherwise-would-be-a-function-name ) ... ;; esac, and instead of calling functions I call "${0}" some-sub-command ..., which makes sure the “function” doesn’t taint the environment of the calling script; (see the sketch after this list;)

                          • (in case of z-run) remote execution via SSH – just the simple ssh user@host rm '/some path that might contain a space' breaks due to the lack of extra quoting;

                          • (in the case of z-run and make) dynamic generation of scripts (both as code and as source); in shell we might have source <( some command ), but at least with z-run it’s easier, and the output of some command is cached;
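
                          A minimal sketch of the modularity pattern from above (hypothetical names throughout; the remote-rm case also illustrates the SSH quoting issue from the bullet above):

                              #!/bin/bash
                              set -e -u -o pipefail

                              command="${1:-help}"
                              shift || true

                              case "${command}" in

                                  ( backup )
                                      # instead of calling functions, re-invoke the script itself; the
                                      # "function" runs in its own process and can't taint our environment
                                      "${0}" archive "$@"
                                      "${0}" upload "$@"
                                  ;;

                                  ( archive ) tar -czf /tmp/backup.tar.gz "$@" ;;
                                  ( upload )  echo "uploading /tmp/backup.tar.gz ..." ;;

                                  # plain `ssh host rm '/some path'` re-splits the arguments remotely;
                                  # printf '%q' adds the extra layer of quoting by hand
                                  ( remote-rm ) ssh user@host "rm -- $(printf '%q' "${1}")" ;;

                                  ( * ) echo "unknown sub-command: ${command}" >&2 ; exit 1 ;;
                              esac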

                          That being said, I’m a long-time Linux user, writing a lot of shell scripts (bash) both personally and professionally; thus z-run is not something I came up with in my first year of using Linux. For a long time I actually used shell scripts like these ones, where I tried to modularize them; however that is much more tedious than using z-run, at least for development and operations.

                        1. 1

                          From what I’ve understood the “lockdown mode” is in practical terms nothing more than a “more sensible (security-wise) mode”, because it just seems to:

                          • disable site-previews – I never understood the actual need for these; (*)
                          • disable opening of attachments except images – something many other software should implement (especially email clients);
                          • disable JIT for JavaScript – perhaps no JS would be better, but that would break most of the internet nowadays; (no more random blogs that require JS…)
                          • disable some behind-the-scenes actions like MDM or connecting via cable when not unlocked; (*)
                          • no new contacts for Apple’s own software;

                          In fact, I don’t understand why the items I’ve marked with (*) aren’t the default…

                          Going further, perhaps there could be a few security modes:

                          • “trust me I’m an expert and certainly I won’t be pwned” – i.e. the current standard mode;
                          • the “new standard mode” with some improved security, focusing on people that don’t have an IT background, which should include at least the items marked with (*), perhaps coupled with a built-in “safe DNS” (that filters out some malware);
                          • the “secure mode” – what they call the new lockdown mode;
                          • the “actual lockdown mode” – that should limit even more things, like for example disable access to camera, microphone, GPS, bluetooth, allow internet connection only through a designated VPN (and thus WiFi and data can only be used to service that VPN), disable applications installation, etc.; (the camera, microphone, etc. could be enabled on demand by explicit action in an iOS generated dialogue, and that only for limited time;)
                          1. 3

                            This article describes exactly the kind of questions I asked myself a few weeks ago when I tried to design a backup system that would survive even a catastrophic event.

                            Just like the author of the article, I couldn’t find a proper answer that solves all of these issues:

                            • it is secure in the cryptographic sense – thus protected with a strong “key” (be it asymmetric or symmetric);
                            • it can be easily recovered with nothing more than the “secured” backup files (on physical media) and something I “know” (which should be the source for the “key”);
                            • (plus, to make it more realistic, add to all these also the requirement to be recovered by non-technical relatives in case I’m not around anymore; and just for fun, let’s imagine I don’t want to use physical media, but instead some cloud service;)
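
                            (For the record, the baseline I keep coming back to is plain symmetric encryption with a memorized passphrase, for example with gpg; on paper this covers the first two points, assuming the passphrase is actually strong, but it does nothing for the third. A minimal sketch, with hypothetical paths:)

                                # create the "secured" backup:  recovery needs nothing more than the
                                # resulting file plus the passphrase (i.e. something I "know")
                                tar -czf - ~/documents ~/keys \
                                    | gpg --symmetric --cipher-algo AES256 --output ./backup.tar.gz.gpg

                                # recovery, from only the file and the passphrase:
                                gpg --decrypt ./backup.tar.gz.gpg | tar -xzf -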

                            However, I think the author actually meant to touch on a different subject, as seen from his conclusion:

                            This is where we reach the limits of the “Code Is Law” movement.

                            In the boring analogue world – I am pretty sure that I’d be able to convince a human that I am who I say I am. And, thus, get access to my accounts. I may have to go to court to force a company to give me access back, but it is possible.

                            1. 1

                              I think the article is trying to prevent data loss that you simply can’t plan for. How in the world do you reliably plan for the fact that a nuke hit your home? Because a lightning strike won’t destroy your house like that, not if you have a normal lightning conductor. My yubikey won’t get destroyed by water or a car rolling over it. It burning down with the house would mean I didn’t have it with me or didn’t place a backup somewhere else. If you need your data somewhere else: encrypt it and store it at services like Backblaze (key in your head/bank/..), far off from your home.

                              But ultimately this looks one step below “where are your backups and IDs if the death star showed up tomorrow”.

                              Yeah, I guess I jinxed it with this comment. It’s probably more likely that Google will (again) lock you out of your account for no apparent reason.

                            1. 7

                              My only concern with using anything except letters / numbers and underscore / hyphen in tool names is how badly it will interact with other tooling that makes assumptions about what a “proper” executable name should be. For example I would expect some build tools (although perhaps not make or ninja, but others) to assume that a comma is a separator and treat that name as a list, especially when combined with string splitting and formatting.

                              My own approach is to prefix all my tools with x- (especially those that should work under Xorg) or z- (those that are more elaborate), or to suffix them with -w (from “wrapper”) for those that just provide some simplifications over existing tools. Meanwhile for bash aliases or functions I use _ as a prefix (after I clear all other aliases).

                              Perhaps either the FreeDesktop organization (the one in charge of the XDG standards), or any other organization recognized in the Linux / BSD ecosystem, should just come up with some guidelines about “private namespaces” with regard to tool naming, just like we used to have X-Whatever in the HTTP world.

                              1. 1

                                I have a strong feeling that with every new web “improvement” we are gradually re-implementing what was once possible with Flash (or Java applets)… And I hated sites that used Flash (especially since Flash always had issues on Linux), but at least the mess was contained in one place and the rest of the web document was usually still readable…

                                Thus I’m afraid that this new “feature” will become yet another one that breaks old browsers, or pushes even more web sites (as opposed to web apps) into heavily relying on JavaScript just for some eye-candy.

                                (And I’m not even touching on the battery usage issue of all these animations.)


                                I appreciate – and I’m impressed by – what is possible today with plain HTML+CSS (thus without JavaScript), but perhaps we need a few years to fix the rough edges and make sure that these improvements actually permeate all the major browser engines (especially Firefox and WebKit).

                                Although I think we are once again in a “best viewed with monopoly-browser-of-the-day” era, where one either gives up and only tests their sites with the latest Chrome (the rest be damned), or has a hard time figuring out which parts of CSS are actually portable to other engines / versions…

                                (For example I don’t think many (1 in a thousand, or fewer?) web-developers even think about how their web-site looks in browsers such as NetSurf or Lynx / w3m…)

                                1. 4

                                  Fully statically linked Linux binaries are possible. PyOxidizer supports it. But you can’t load compiled extension modules (.so files) from static binaries. Since I was initially going for a drop-in replacement for “python”, I didn’t want to sacrifice ecosystem compatibility.

                                  The glibc based builds I believe conform to Python’s manylinux2014 platform level and are about as portable as you can get before going fully static. https://pyoxidizer.readthedocs.io/en/latest/pyoxy_installing.html#linux

                                  1. 1

                                    You could perhaps provide two kinds of builds, at least on Linux: one dynamic like the current one, and one static for those that need maximum portability but are OK with the limitation of running only pure Python code (i.e. no compiled extension modules).

                                    All in all, great work!

                                  1. 4

                                    I’m a little confused by what it does. It gives a Rust wrapper library for embedding CPython and also gives a single static binary for distributing CPython (wrapped in its Rust library). So the primary benefits are easier interop with Rust programs and easier distribution because it’s a single binary to get “CPython”.

                                    Do I have that right?

                                    1. 3

                                      This is approximately my understanding. The related PyOxidizer project seems to be like PyInstaller except that it’s built on Rust infrastructure and has some other tradeoffs (in-memory loading of packages vs. unpacking to a temp directory?). I think I could use another blog post, etc., to better explain when/why I’d reach for this toolset.

                                      1. 2

                                        In my understanding PyOxidizer is an “umbrella” project featuring much of what you’ve described, plus probably much more.

                                        However, strictly speaking of pyoxy, from what the article describes it seems to serve the following use-cases:

                                        • having a self-contained Python run-time (CPython 3.9 at the moment it seems); self-contained in the sense that you don’t need anything else except that executable (and your code) to run it; (at the moment not being statically linked, you also need glibc on Linux, but this is not a major issue for a first release;)
                                        • that self-contained executable can be relocated anywhere on your file-system, thus deploying a Python-based application in an uncontrolled environment becomes easier;
                                        • being a single file, it’s also lighter on the file-system, especially at startup, as there aren’t hundreds of small files to be found and read off disk;

                                        Basically, where before I wasn’t too comfortable writing operations scripts in Python (as opposed to bash), because I never knew which turn the Python on my distribution would take, now I can have pyoxy stored alongside my scripts, and be sure they’ll keep on running as long as I need them, without needlessly breaking due to Python upgrades.

                                        Besides providing the Python runtime (with pyoxy run-python), it also allows the user to run pyoxy run-yaml, where the yaml refers to the fact that the user can provide a custom YAML file instructing the interpreter how it should be set up. For example I usually use #!/usr/bin/env -S python2.7 -u -O -O -B -E -S -s -R in my scripts, and this always leaves a mess in ps and htop; with the new feature I would use something like #!/usr/bin/env pyoxy run-yaml (see the article), and the process tree would be much cleaner, and also I would have more control over the interpreter setup.

                                      1. 2

                                        As a pyenv fan, a single-binary, multi-platform python distribution seems awesome. Is it statically linked?

                                        Speaking of pyenv, it would be cool if the python “payload” could be fed separately to the pyoxy binary. Then, if I wanted multiple python versions (historically, a pain in the ass to get on Linux and Mac), I could have one pyoxy binary and multiple python “bundles”.

                                        I guess I could also just get pyoxys with different python versions embedded, but that feels less “neat” for some reason?

                                        Anyway, cool stuff.

                                        1. 3

                                          It seems this use-case you are describing (having pyoxy separate from the user’s bundled Python code) is on the roadmap.

                                          However, even as it is right now, I think it is able to run a zipapp (i.e. a Zip file that contains a __main__.py file in the root of the archive): pyoxy run-python ./app.zip
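
                                          For example (a hypothetical layout; the zipapp is built with nothing more than the stdlib, and the pyoxy invocation is the one from above):

                                              mkdir -p ./app
                                              echo 'print("hello from a zipapp")' > ./app/__main__.py

                                              # `python3 -m zipapp` wraps the directory into a single .zip with __main__.py at its root
                                              python3 -m zipapp ./app --output ./app.zip

                                              pyoxy run-python ./app.zip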

                                          Regarding the static linking, the Linux binary is not statically linked, however it needs nothing more than glibc, thus you can easily copy it between any “recent” Linux distributions without issues:

                                          >> ldd /tmp/pyoxy
                                                  linux-vdso.so.1 (0x00007ffc54481000)
                                                  libdl.so.2 => /lib64/libdl.so.2 (0x00007f68e1177000)
                                                  libm.so.6 => /lib64/libm.so.6 (0x00007f68e108f000)
                                                  libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f68e108a000)
                                                  librt.so.1 => /lib64/librt.so.1 (0x00007f68e1083000)
                                                  libutil.so.1 => /lib64/libutil.so.1 (0x00007f68e107e000)
                                                  libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f68e105d000)
                                                  libc.so.6 => /lib64/libc.so.6 (0x00007f68e0e2e000)
                                                  /lib64/ld-linux-x86-64.so.2 (0x00007f68e721f000)
                                          

                                          I think it is possible to have it statically linked (at least on Linux), but given the above I think it’s enough for a first release. :)

                                        1. 15

                                          In my experience the biggest reason I would not use SQLite on the server is its poor concurrency. Even if you have a single-process multi-threaded server and take advantage of SQLite’s unlock notification functionality, PostgreSQL will absolutely murder it on any sort of concurrent write-heavy’ish workload. Also, SQLite uses a really counter-intuitive locking sequence (I don’t remember the details off the top of my head but can dig it up) so you often have to use BEGIN IMMEDIATE to avoid being drowned in deadlocks.
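
                                          (The rough idea, illustrated with the sqlite3 CLI and a hypothetical table: BEGIN IMMEDIATE takes the write lock up-front, instead of letting a read transaction be upgraded half-way through:)

                                              sqlite3 ./app.db "
                                                  PRAGMA busy_timeout = 5000;  -- wait instead of failing immediately with SQLITE_BUSY
                                                  BEGIN IMMEDIATE;             -- grab the write lock now, not at the first UPDATE
                                                  UPDATE counters SET value = value + 1 WHERE name = 'hits';
                                                  COMMIT;
                                              "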

                                          1. 18

                                            If you’ve been reading the recent wave of SQLite-on-the-server posts from a couple companies, the thing they have in common is low write loads.

                                            Which is surprisingly common in more service-y setups — a single giant codebase, sure, SQLite isn’t the right fit, but once you’re doing some more focused services I think it would be rare not to have at least a few that are write-light or even effectively read-only. I’ve been working through this myself lately and starting to come around to the idea of using SQLite for those cases just because of the reduced number of moving parts to worry about.

                                            1. 4

                                              I think most (but not all) write-intensive SQL use-cases are for metrics or analytical tasks. (Because I’m assuming if one needs some persistent, but not critically persistent, state such as sessions, one would use some better suited system like Redis or any other generic KV store.)

                                              In such scenarios I think nothing can beat a purpose-built system like ClickHouse in terms of raw IO, concurrency, and throughput.

                                              Thus, setting these aside, I don’t think write concurrency plays a huge role until the application becomes very (as in viral) successful.

                                              1. 5

                                                I don’t really think this is true. CRUD in general is often write heavy. It just depends on the details of the domain. Keeping state of an MMORPG? Very much write and read heavy.

                                                1. 1

                                                  Keeping state of an MMORPG? Very much write and read heavy.

                                                  Even with classical SQL databases like PostgreSQL or MySQL, write heavy applications tend to be programmed in such a manner that lock contention is reduced as much as possible; there might be multiple writers, but the code is designed so that they don’t touch the same rows.

                                                  Thus at that point one doesn’t actually use the DB for its SQL capabilities, but instead more like a document or KV store; therefore perhaps a proper KV store might actually be the best solution…

                                                  1. 2

                                                    Why isn’t a SQL database a “proper” KV store? How exactly do you think SQL databases store rows on disk? It’s a KV store.

                                                    If 99% of your write workload is non-conflicting, why does that imply you should absolutely fuck yourself for the remaining 1% by using a “proper” KV store with inferior transaction support and inferior query ergonomics? Or worse, eventual consistency?

                                                    1. 1

                                                      Why isn’t a SQL database a “proper” KV store? How exactly do you think SQL databases store rows on disk? It’s a KV store.

                                                      Indeed a relational database can be seen as a KV store, where the key is the primary key and the value is the whole row, but in certain cases a plain, simple KV store (like LMDB, which does include transactions) is perhaps more than enough (and simpler)…

                                                      1. 3

                                                        Why is it simpler? If I literally never need to look up anything in my entire application by anything other than the primary key, maybe. But the instant I want to search for a value by some secondary key, I’m stuck manually implementing secondary indexes, and maintaining those indexes correctly as values are added, removed, and changed. In SQLite I type CREATE INDEX and I’m done. That is far simpler.
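
                                                        (To illustrate with a hypothetical schema: the secondary index is one statement, and the engine keeps it consistent from then on.)

                                                            sqlite3 ./app.db "
                                                                CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL, name TEXT);
                                                                CREATE INDEX users_by_email ON users (email);
                                                                -- lookups by the secondary key now just work, with no manual index maintenance:
                                                                SELECT id, name FROM users WHERE email = 'someone@example.org';
                                                            "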

                                                    2. 2

                                                      Thus at that point one doesn’t actually use the DB for its SQL capabilities, but instead more like a document or KV store; therefore perhaps a proper KV store might actually be the best solution…

                                                      This is a strange statement. Avoiding lock contention is a fact of life with all databases. How does it somehow make it unnatural for SQL?

                                                2. 1

                                                  How write-heavy are we talking? And are you taking into account SQLite’s WAL journalling mode?

                                                1. 5

                                                   I’ve read through the whole article, and all in all I think such an OS might work for “consumer devices” and even for single-purpose servers. However I doubt it will work nicely for Linux power-users. I don’t think the article describes anything too revolutionary that we haven’t seen in live-CDs or in appliances like VyOS for example; it just tries to put the existing pieces together in a single solution.

                                                  However, I do see some major problems with this proposal:

                                                   • for one, it shoves in even more systemd lock-in – I like systemd for service and process management, however I think it has started (many years ago) to overstep its purpose… (also it seems a lot of the systemd tooling is already meant to support all this, but I don’t want to say “conspiracy” just yet…) :)
                                                  • it goes crazy with GPT partitions – https://0pointer.net/blog/images/partitions.svg – the simplest system has 10 GPT partitions; want a “system extension”? here are 3 more GPT partitions! I don’t have much love for file-system-images, LVM, or BTRFS/ZFS sub-volumes, however going the GPT partitions way is perhaps not a solution;

                                                  But, setting systemd mania aside, I think something good can come out of this:

                                                   • Linux distributions might come to more closely resemble the BSD distributions – one coherent release with some interim patches; (to this day, I feel a great dread when I update a Linux distribution, let alone upgrade it to the next minor version;)
                                                   • we might get closer to repeatable builds – no more crazy bash scripts as part of the RPM / DEB install that perhaps work, perhaps not… (why do package install scripts still insist on source /etc/profile or source /etc/bash.bashrc?)
                                                  1. 2

                                                     I’m not sure why he even defined that partition layout as GPT. I can’t see anything in the article that actually requires it. LVM or just going btrfs for everything should work just as well. Even better, btrfs could handle the old/new systems as snapshots, saving lots of space. (And allowing more than one “previous” version.) The only problem I see is that there would have to be a good way to actually force-erase the data on factory reset.

                                                     Re. power users, there’s already Fedora Silverblue which behaves pretty similarly (with a somewhat different implementation) and there are power users who like it quite a bit. Not sure if it will appeal to everyone, but it’s not universally hated at least.

                                                    And yeah… As much as I like a lot of things that come out of systemd, I wish he sometimes stopped at “this is a cool interface I came up with, can we get an existing package to implement it?”

                                                    1. 2

                                                      I’m not sure why he even defined that partition layout as gpt. I can’t see anything on the article that actually requires it.

                                                      There is a practical aspect of using GPT as opposed to LVM or similar, namely it is standard and simple to use / implement.

                                                       You can find in that article various mentions of booting the OS either on bare-metal (which at the moment means either MBR, or actually GPT because UEFI doesn’t support LVM), in a VM (which is similar to bare-metal in this requirement), or in containers (via systemd-nspawn) where GPT seems to be the simplest approach.

                                                      Even better, btrfs could handle the old/new systems as snapshots, saving lots of space.

                                                      Given that the OS image is read-only, the best candidate seems to be SquashFS or EROFS, which both (?) support compression, something that none (?) of the other mainstream file-systems support.
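
                                                       (For example, squashfs-tools can build such a compressed, read-only image in one go; a sketch, assuming a reasonably recent squashfs-tools with zstd support:)

                                                           # pack a prepared OS tree into a compressed, read-only image ...
                                                           mksquashfs ./os-tree ./usr.squashfs -comp zstd -noappend

                                                           # ... which can then be loop-mounted read-only:
                                                           mount -o loop,ro ./usr.squashfs /mnt/usr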

                                                       As for BTRFS: I got bitten once by ReiserFS, and I’ve seen others bitten by XFS, thus I’m 100% in the “no magic in the file-systems” camp (i.e. Ext4 for the moment)…

                                                      1. 1

                                                        actually GPT because UEFI doesn’t support LVM

                                                        I’m not sure that matters. With uefi you need the EFI partition anyway, then the kernel takes over and knows what to do with lvm/btrfs. (Why would uefi care?) He already wants btrfs for homes, so we’re not avoiding it.

                                                        Same for VMs. Nspawn itself / machined was written with btrfs in mind so much that some functionality was only later added to support other file systems https://github.com/systemd/systemd/commit/9a50e3caab82f8406ecfac6048ac8e2ce98b0ab8

                                                        the best candidate seems to be SquashFS or EROFS, which both (?) support compression

                                                        Btrfs supports compression. But I mentioned snapshots for the a/b systems, because there’s going to be lots of overlap - if we use images for updates, that means 99% won’t change during the update and it can be deduplicated, giving you ~50% compression before you even start compressing.

                                                    2. 2

                                                      (also it seems a lot of the systemd tooling is already meant to support all this, but I don’t want to say “conspiracy” just yet…) :)

                                                       Also coincidentally, it seems there are already two other projects of Lennart’s that cater to the ideas described in the article.

                                                      1. 2

                                                        Linux distributions might become to resemble closer the BSD distributions

                                                         My thought while reading this was “sounds a lot like how illumos distributions are used in data centers”

                                                      1. 19

                                                        SQLite is cool, but giving up the great type and integrity features of Postgres just to avoid running one more process seems like a bad trade-off for most of my applications.

                                                        1. 13

                                                          One thing I have learned recently is that SQLite has CREATE TABLE ... STRICT for type checking, because I felt the same pain moving from Postgres for a small CLI application. Could you elaborate on what integrity means here?

                                                          More on STRICT here: https://www.sqlite.org/stricttables.html
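
                                                             A minimal example with the sqlite3 CLI (hypothetical table):

                                                                 sqlite3 ./app.db "
                                                                     CREATE TABLE events (
                                                                         id   INTEGER PRIMARY KEY,
                                                                         kind TEXT    NOT NULL,
                                                                         at   INTEGER NOT NULL
                                                                     ) STRICT;
                                                                 "

                                                                 # without STRICT this would be silently stored as TEXT; with STRICT it is rejected:
                                                                 sqlite3 ./app.db "INSERT INTO events (kind, at) VALUES ('login', 'not-a-number');"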

                                                          1. 6

                                                               In PostgreSQL one can not only have foreign keys and basic check constraints (all present in one form or another in SQLite), but one can even define one’s own types (called “domains”) with complex structures and checks. See https://www.postgresql.org/docs/current/sql-createdomain.html
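
                                                               A small sketch (hypothetical database and domain; the CHECK then applies everywhere the type is used):

                                                                   psql mydb -c "
                                                                       CREATE DOMAIN email_address AS text
                                                                           CHECK (position('@' in VALUE) > 1);
                                                                       CREATE TABLE users (
                                                                           id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
                                                                           email email_address NOT NULL
                                                                       );
                                                                   "

                                                                   # rejected by the domain's CHECK, wherever email_address is used:
                                                                   psql mydb -c "INSERT INTO users (email) VALUES ('not-an-email');"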

                                                            1. 2

                                                              I haven’t tried this for a very long time but I seem to recall that SQLite provides arbitrary triggers that can run to validate inputs. People were using this to enforce types before STRICT came along and it should allow enforcing any criteria that you can express in a Turing-complete language with access to the input data.
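
                                                               Something along these lines, I believe (a sketch with a hypothetical table; before STRICT this was the usual workaround):

                                                                   sqlite3 ./app.db "
                                                                       CREATE TABLE prices (item TEXT NOT NULL, amount);
                                                                       -- reject any row where amount is not an integer:
                                                                       CREATE TRIGGER prices_amount_must_be_integer
                                                                       BEFORE INSERT ON prices
                                                                       WHEN typeof(NEW.amount) <> 'integer'
                                                                       BEGIN
                                                                           SELECT RAISE(ABORT, 'prices.amount must be an integer');
                                                                       END;
                                                                   "

                                                                   # fails with the message above:
                                                                   sqlite3 ./app.db "INSERT INTO prices (item, amount) VALUES ('apple', 'cheap');"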

                                                              1. 9

                                                                   Triggers might be functionally equivalent, but with PostgreSQL custom types (i.e. domains) not only is it easier and more practical to use, but it can also be safer, because the constraints are applied everywhere that type is used, and the developer isn’t required to make sure he has updated the constraints everywhere. (Kind of like garbage collection vs manual memory management; they both work, both have their issues, but the former might lead to fewer memory allocation issues.)

                                                            2. 1

                                                              Oooh, that’s new. I’ll have to use that.

                                                          1. 5

                                                            The page’s author has confirmed in an email reply that it was indeed a joke.

                                                            And apparently I was fooled (fool?) enough to write him an email, letting him know that he made a good argument about SPA’s, although “not the one he intended”. :) – The joke was on me.


                                                             But seriously now, the main reason I was tricked by such a page is that I mainly browse the internet with Firefox and uBlock Origin disabling all JavaScript, and thus this is the manner in which many “modern” sites make their first impression on me: most of the time “JavaScript is required to view this site”, sometimes a looping spinner of death, and in some circumstances even a blank page.

                                                             Usually I just give up, but given I was interested in this topic I opened it in a “pure untainted” Firefox profile, and to my surprise, with the same outcome.

                                                             Perhaps the author wrote that page in relation to the recently released Changelog JS Party podcast episode “Were SPAs a big mistake?”.