1. 18

    I actually wound up switching away from i3 (well, sway, but they’re basically the same) because I kept getting things into weird situations where I didn’t understand how the tiling works. Containers with only one child, that sort of thing.

    river, my current wm, has an interesting model: the layout management is done in an entirely separate process that communicates over an IPC mechanism. river sends it a list of windows, and the layout daemon responds with where to put them.
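    A rough sketch of what that looks like in practice (from memory, so the exact commands may differ between river releases): the stock layout generator rivertile runs as its own process, and you select it and tweak it over IPC with riverctl:

    $ riverctl default-layout rivertile
    $ riverctl send-layout-cmd rivertile "main-ratio 0.6"

    river asks the rivertile process for window geometries; the second command is river relaying a parameter change to that separate process.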

    Also, since you brought it up: sway is almost entirely compatible with i3. The biggest missing feature is layout save/restore. But it can do one thing i3 can’t do, and that’s rearranging windows by dragging them.

    1. 26

      That’s pretty much why I wrote river. I was using sway beforehand as well but grew increasingly frustrated with how much implicit state i3-style window management required me to keep in my head and how unpredictable that state makes managing windows if your mental model/memory of the tree isn’t accurate.

      1. 19

        link to the project: https://github.com/ifreund/river

        Looks interesting!

      2. 6

        I’m in the same boat (pre-switch). I use sway but, after many years, still don’t really understand how I sometimes end up with single-child (sometimes multi-generational) containers.

        My personal ideal was spectrwm, which simply had a single primary window and then, to the right, an infinitely subdividing tower of smaller windows which could be swapped in. I briefly toyed with the idea of writing a wayland spectrwm clone.

        1. 7

          That sounds exactly like the default layout of dwm, awesomewm, xmonad, and river. If you’re looking for that kind of dynamic tiling on wayland feel free to give river a try!

          1. 4

            I will! I had some trouble compiling it last time I tried. But I will return to it.

            1. 4

              Feel free to stop by #river on irc.libera.chat if you run into issues compiling again!

          2. 1

            Your reasons for liking spectrwm’s (and xmonad’s, etc.) model are exactly the reason I use tiling window managers like i3, exwm and StumpWM: I don’t like that dynamic at all ;-)

            No accounting for different tastes.

            Is there a name for those two different tiling models?

            1. 1

              automatic vs manual?

              1. 1

                I’ve seen the terms static (for when the containers have to be created by the user) vs dynamic used.

                ArchLinux seems to call them dynamic vs manual. See the management style column https://wiki.archlinux.org/title/Comparison_of_tiling_window_managers

            2. 1

              I was also quite lost with the way tiling works at the beginning. There aren’t many resources on this subject; it seems people just get used to it and learn to avoid creating these useless containers. I was lucky: that was my case.

            1. 25

              Nope. I would say that client TLS certificates are the most underused browser feature.

              1. 5

                Mercifully so! Client certs are a UX disaster.

                1. 7

                  I don’t find that to be true, but even if it were that’s a reason to invest in the UX, not abandon the tech.

                2. 4

                  Huh – I was thinking control-Q.

                  1. 4

                    Sadly not, as it’s right next to control-W.

                    1. 2

                      I always need to download a Firefox add-on to fix that screwup.

                      Why is this still a thing?

                      1. 1

                        I’ll hazard a guess that it comes from Mac OS where ⌘W is Close Window (or close tab if there are tabs, for the past decade or two) because it’s next to ⌘Q, Quit an application.

                        1. 1

                          There’s a setting you can toggle in about:config to disable this behavior: https://bugzilla.mozilla.org/show_bug.cgi?id=52821#c315
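                          If I remember right, the pref is something like this (the name was added in relatively recent Firefox versions and may differ in older ones):

                          browser.quitShortcut.disabled = true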

                          1. 1

                            Wow, that’s quite the discussion and it started 21 years ago!

                  1. 5

                    I admit to having mostly skimmed the presentation, on account of not having had my coffee yet, but this reminded me of Dylan a little, and it’s probably no coincidence considering Dylan’s history. Anyone remember that?

                    https://en.wikipedia.org/wiki/Dylan_(programming_language)

                    E.g. a factorial implementation lifted right off that page, because I never was fluent in Dylan, and the last time I even tried to be was like fifteen years ago:

                    define function factorial (n :: <integer>) => (n! :: <integer>)
                      case
                        n < 0     => error("Can't take factorial of negative integer: %d\n", n);
                        n = 0     => 1;
                        otherwise => n * factorial(n - 1);
                      end
                    end;
                    

                    Granted, Dylan is (was?) explicitly typed and integrated a bunch of other paradigms and so on; it was very much a child of the 1990s. But in some ways it was a Lisp without all the parentheses. It’s an interesting precedent.

                    1. 1

                      I mainly remember Dylan because one of (the main? the only?) people working on it was in comp.lang.lisp when Usenet still was a thing.

                      Dylan never really attracted me because I actually like SEXPs, and languages with all that semicolon & braces line noise annoy me, although Python is an okay compromise.

                      1. 2

                        @brucem was a developer on it (still is?). He’s probably who you were thinking of.

                        1. 1

                          Yes, that was him, thanks.

                        2. 2

                          Oh, yeah. Dylan was a nice and fun language but I never really learned it, because I never really felt like I needed a programming language with all the good parts of Lisp, minus the Lisp syntax. The Lisp syntax is one of the good parts of Lisp in my book :-D.

                      1. 2

                        One of the best features of cron is its automatic email

                        I don’t run e-mail on any of my machines (attack surface) and Cron is the only program that would need it, so for me this hasn’t been Cron’s best feature in decades. I’d rather Cron logged to the system facilities like any other program.
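                        For example, a crontab entry along these lines (the script path and tag are made up) pipes a job’s output to syslog via logger(1) instead of mail:

                        */15 * * * * /usr/local/bin/backup.sh 2>&1 | logger -t backup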

                        1. 1

                          You mean you don’t send e-mail out from any of your machines?

                          1. 1

                            My e-mail client connects to an external IMAP server and besides cron that wants to send e-mail there’s nothing else on my machines that needs it.

                        1. 7

                          I use Gnus, which is a news/mail reader in Emacs. It does threading and quoting correctly and displays HTML messages pretty well.

                          In the past I went through a period of Mutt usage, and found the tutorial by Steve Losh at https://stevelosh.com/blog/2012/10/the-homely-mutt/ to be very helpful.

                          If you prefer Vim, I think Mutt is a very good option. It can be configured to render HTML via lynx or w3m IIRC (I think it’s mentioned in the above tutorial). ISTR Losh’s preferred mutt keys are also quite vimmish.

                          1. 2

                            I also use Gnus. I found it very strange to begin with, but got used to it. The splitting and scoring features are incredibly powerful, especially for high-traffic mailing lists.

                            1. 1

                              Gnus here as well, but I blew away all the keybindings[1] and have a few of my own (that are intuitive to me having a history of mutt, pine, etc. and also because I use Evil mode).

                              I use Gnus over other Emacs clients because it can do IMAP and so I do not need to depend on something like fetchmail (or whatever is popular these days), meaning it’s easy to bring up on different machines and platforms. UI-wise I’d prefer mu4e.

                              [1] Gnus is a bit vi-like in its keybindings: there are a lot of them, and accidentally touching the wrong one might delete or kill something you did not want to lose. (Or at least, I could never figure out how to recover.)

                            1. 23

                              What I also find frustrating on macOS is the fact you need to download Xcode packages to get basic stuff such as Git. Even though I don’t use it, Xcode is bloating my drive on this machine.

                              We iOS developers are also not pleased with the size on disk of an Xcode installation. But you only need the total package if you are using Xcode itself.

                              A lighter option is to delete Xcode.app and its related components like ~/Library/Developer, then get its command line tools separately with xcode-select --install. Git is included; iOS simulators are not.
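                              Something like this, if you only need Git and friends (double-check the paths on your own machine before deleting anything):

                              $ sudo rm -rf /Applications/Xcode.app
                              $ rm -rf ~/Library/Developer
                              $ xcode-select --install   # standalone Command Line Tools, including git
                              $ git --version            # should now come from the CLT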

                              1. 7

                                I’m always surprised when I see people complain about how much space programs occupy on disk. It has been perhaps a decade since I even knew (off the top of my head) how big my hard drive was, let alone how much space any particular program required. Does it matter for some reason that I don’t understand?

                                1. 20

                                  Perhaps you don’t, but some of us do fill up our drives if we don’t stay on top of usage. And yes, Xcode is one of the worst offenders, especially if you need to keep more than one version around. (Current versions occupy 18-19GB when installed. It’s common to have at least the latest release and the latest beta around; I personally need to keep a larger back catalogue.)

                                  Other common storage hogs are VM images and videos.

                                  1. 4
                                    $ df -h / /data
                                    Filesystem      Size  Used Avail Use% Mounted on
                                    /dev/nvme0n1p6  134G  121G  6.0G  96% /
                                    /dev/sda1       110G   95G  9.9G  91% /data
                                    

                                    I don’t know how large Xcode is; a quick internet search reveals it’s about 13GB, and someone else mentioned almost 20GB in another comment here. Neither would fit on my machine unless I delete some other stuff. I’d rather not do that just to install git.

                                    The MacBook Pro comes with 256GB by default, so my 244GB spread out over two SSDs isn’t that unusually small. You can upgrade it to 512GB, 1TB, or 2TB, which will set you back $200, $400, or $800, so it’s not cheap. You can literally buy an entire laptop for that $400, and quite a nice laptop for that $800.

                                    1. 6

                                      $800 for 2TB is ridiculous. If I had to use a laptop with soldered storage chips as my main machine, I’d rather deal with an external USB-NVMe adapter.

                                      1. 2

                                        I was about to complain about this, but actually checked first (for a comment on the internet!) and holy heck, prices have come down since I last had to buy an SSD.

                                      2. 1

                                        I guess disk usage can be a problem when you have to overpay for storage. On the desktop I built at home my Samsung 970 EVO Plus (2TB NVMe) cost me $250 and the 512GB NVMe for OS partition was $60. My two 2TB HDDs went into a small Synology NAS for bulk/slow storage.

                                      3. 4

                                        It matters because a lot of people’s main machines are laptops, and even at 256 GB (base storage of a macbook pro) and not storing media or anything, you can easily fill that up.

                                        When I started working I didn’t have that much disposable income, I bought an Air with 128GB, and later “upgraded” with an SD-card-slot 128GB thing. Having stuff like Xcode (but to be honest even stuff like a debug build of certain kinds of Rust programs) would take up so much space. Docker images and stuff are also an issue, but at least I understand that. Lots of dev tools are ginormous and it’s painful.

                                        “Just buy a bigger hard drive from the outset” is not really useful advice when you’re sitting there trying to do a thing and don’t want to spend, what, $1500 to resolve this problem

                                        1. 1

                                          I don’t know. When buying laptops for Unix and Windows (gaming), size hasn’t really been an issue since 2010 or so? These days you can buy at least 512GB without making much of a dent in the price. Is Apple that much more expensive?

                                          (I’ll probably buy a new one this year and would go with at least a 512GB SSD and 1TB HDD.)

                                          1. 3

                                            Apple under-specs their entry level machines to make the base prices look good, and then criminally overcharges for things like memory and storage upgrades.

                                            1. 1

                                              Not to be too dismissive but I literally just talked about what I experienced with my air (that I ended up using up until…2016 or so? But my replacement was still only 256GB that I used up until last year). And loads of people buy the minimum spec thing (I’m lucky enough now to be able to upgrade beyond my needs at this point tho)

                                              I’m not lying to prove a point. Also not justifying my choices, just saying that people with small SSDs aren’t theoretical

                                        2. 1

                                          Yup, it’s actually what is written on the homebrew website and what I used at first.

                                        1. 12

                                          Is this corroborated by vulnerability counts in the respective browsers? TFA links to reports that

                                          [Firefox] gets routinely exploited by Law Enforcement

                                          and

                                          If you are in any way at risk, you should be using Chrome, no matter how much Firefox has improved.

                                          Be _very_ wary of anyone who tells you that Firefox security is comparable to that of Chrome.

                                          I’m not inclined to doubt these per se, but they’re undefended and I’m not familiar with the people who made those claims.

                                          1. 4

                                            Be very wary of anyone who tells you that Firefox security is comparable to that of Chrome.

                                            Are we talking default installs here? A lot of people use Firefox because one can (and always could) install addons like NoScript, uBlock, etc. I wonder how comparable security is then.

                                            (Yes, these addons have become available for Chrome as well but I doubt they’re as integrated and pervasive and one is still at the mercy of Google allowing them.)

                                          1. 14

                                            Would have been nice to mention the type builtin, at least for bash, that helps newcomers distinguish between different kinds of commands:

                                            $ type cat
                                            cat is /usr/bin/cat
                                            $ type cd
                                            cd is a shell builtin
                                            $ type ls
                                            ls is aliased to `ls -Fh'
                                            
                                            1. 5

                                              Wow, I’ve been using Unix for most of my computing life (30 years?) and I didn’t know about type.

                                              1. 1

                                                It is great for finding duplicates in your PATH: type -a <command> shows you all the places where it exists.
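                                                For instance (hypothetical output; the duplicated command and paths are made up):

                                                $ type -a python
                                                python is /usr/local/bin/python
                                                python is /usr/bin/python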

                                              2. 2

                                                I use which as opposed to type and it seems to do the exact same thing.

                                                1. 9

                                                  You should use type instead. More than you ever wanted to know on why:

                                                  https://unix.stackexchange.com/questions/85249/why-not-use-which-what-to-use-then

                                                  1. 1

                                                    Interesting. As a long time DOS user, I expected type to behave like cat. I typically use which as if it is just returning the first result from whereis, e.g. xxd $(which foo) | vim -R -. I didn’t know about the csh aliases, because the last time I used csh was in the nineties when I thought that since I use C, surely csh is a better fit for me than something whose name starts with a B, which clearly must be related to BCPL.

                                                    1. 1

                                                      I did not know about type, and after knowing about it for 15 seconds I almost completely agree with you. The only reason you could want to use which is to avoid complicating the readlink $(which <someprogram>) invocation on Guix or NixOS systems. That is, which is still useful in scripts that intend to use the path; type has output of the form <someprogram> is <path to someprogram>.

                                                      Edit: OK, I followed a link from the article to some Stack Overflow thread that goes through the whole bonanza of these scripts, and I think whereis <someprogram> is probably better than readlink $(which <someprogram>).

                                                      1. 3

                                                        @ilmu type -p will return just the path.
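                                                        For example, in bash (the path will vary by system):

                                                        $ type -p cat
                                                        /usr/bin/cat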

                                                        1. 2

                                                          Two problems with whereis: 1) it’s not available everywhere, and 2) it can return more than one result, so you have to parse its output. So for that use case I’ll probably stick with which until someone points me at a simple program that does the same thing without the csh aliases.

                                                    2. 1

                                                      Interesting. In fish shell, type gives you the full definition of the function for builtins that are written in fish, and builtin -n lists all the builtins. There’s a surprising amount of fish code around the cd builtin.
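                                                      For instance (in fish; output truncated, and the exact text varies by fish version):

                                                      $ type cd
                                                      cd is a function with definition
                                                      function cd --description 'Change directory'
                                                          …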

                                                    1. 10

                                                      Print debugging can produce logs that I can view after the process in question is long dead and gone. Nothing wrong with richer debugging tools, but there’s no need for it to be an exclusive relationship.

                                                      1. 7

                                                        This is also true for the debugging tools he talks about (rr, pernosco). They create a recording that can be stepped through interactively.

                                                        1. 1

                                                          Print debugging also works in (pretty much) all programming languages while these discussions usually center on C(++), GDB and their ilk.

                                                        1. 1

                                                          I’ve had similar functionality more than 10 years ago on my jailbroken iPhone 3GS, which might have been the most solid mobile phone I’ve had so far. (And I say this as someone who’s not very enthusiastic about Apple in general.) I used that phone for years, but only because it was twice as useful when jailbroken (and IMHO more secure, because of the OpenSnitch-like firewall, and also because a PDF exploit got fixed in a day while Apple took its sweet time).

                                                          1. 2

                                                            Ironically, BTC’s adoption by larger organizations has mostly been to pay for ransomware attacks. Even though only 45% of attacks end up with a payout for the ransomer, organizations have been holding on to BTC just in case.

                                                            https://blog.emsisoft.com/en/33977/is-ransomware-driving-up-the-price-of-bitcoin/

                                                            1. 1

                                                              “Only 45%”? I’m surprised it is that high.

                                                              1. 2

                                                                An F(x)tec Pro1 running Sailfish OS somewhat gets you there these days.

                                                                I test-drove one for a month, but in the end I did not use the keyboard that much, and without it it’s just a big and clunky device. Virtual keyboards these days are 90% good enough for me.

                                                                But to each his own, and I can imagine someone longing for an N900 would really enjoy the device and OS.

                                                                1. 3

                                                                  As long as it is not something that I need to uninstall to get working sound, like PulseAudio, it’s fine I guess.

                                                                  1. 4

                                                                    Yesterday I came across Python’s Rich library again (probably here on Lobsters) and I’ve had my eye on it for a while. So I thought I’d try to use it from Common Lisp using the Py4CL bridge, which I had never used before either.

                                                                    Py4CL appeared to be a pleasure to use and I ended up with this:

                                                                    1. 3

                                                                      I’m not entirely convinced a new model is needed. We already have memory-mapped files in all the major operating systems. And file pages can already be as small as 4KiB, which is tiny compared to common file sizes these days. Perhaps it would make sense to have even smaller pages for something like Optane, but do we really need to rethink everything? What would we gain?

                                                                      1. 4

                                                                        What we’d gain is eliminating 50+ years of technical debt.

                                                                        I recommend the Twizzler presentation mentioned a few comments down. It explains some of the concepts much better than I can. These people have really dug into the technical implications far deeper than me.

                                                                        The thing is this: persistent memory blows apart the computing model that has prevailed for some 60+ years now. This is not the Von Neumann model or anything like that; it’s much simpler.

                                                                        There are, in all computers since about the late 1950s, a minimum of 2 types of storage:

                                                                        • primary storage, which the processor can access directly – it’s on the CPUs’ memory bus. Small, fast, volatile.
                                                                        • secondary storage, which is big, slow, and persistent. It is not on the memory bus and not in the memory map. It is held in blocks, and the processor must send a message to the disk controller, ask for a particular block, and wait for it to be loaded from 2y store and placed into 1y store.

                                                                        The processor can only work on data in 1y store, but everything must be fetched from 2y store, worked on, and put back.

                                                                        This is profoundly limiting. It’s slow. It doesn’t matter how fast the storage is, it’s slow.

                                                                        PMEM changes that. You have only RAM, but some of your RAM keeps its contents when the power is off.

                                                                        Files are legacy baggage. When all your data is in RAM all the time, you don’t need files. Files are what filesystems hold; filesystems are an abstraction method for indexing blocks of secondary storage. With no secondary storage, you don’t need filesystems any more.

                                                                        1. 7

                                                                          I feel like there are a bunch of things conflated here:

                                                                          Filesystems and file abstractions provide a global per-device namespace. That is not a great abstraction today, where you often want a truly global namespace (i.e. one shared between all of your devices) or something a lot more restrictive. I’d love to see more of the historical capability systems research resurrected here: for typical mobile-device UI abstractions, you really want a capability-based filesystem. Persistent memory doesn’t solve any of the problems of naming and access. It makes some of them more complicated: If you have a file on a server somewhere, it’s quite easy to expose remote read and write operations, it’s very hard to expose a remote mmap - trying to run a cache coherency protocol over the Internet does not lead to good programming models.

                                                                          Persistence is an attribute of files, but in a very complicated way. On *NIX, the canonical way of doing an atomic operation on a file is to copy the file, make your changes, and then move the new copy over the top of the old file. This isn’t great, and it would be really nice if you could have transactional updates over ranges of files (annoyingly, ZFS actually implements all of the machinery for this, it just doesn’t expose it at the ZPL). With persistent memory, atomicity is hard. On current implementations, atomic operations with respect to CPU cache coherency and atomic operations with respect to committing data to persistent storage are completely different things. Getting any kind of decent performance out of something that directly uses persistent memory and is resilient in the presence of failure is an open research problem.
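                                                                          A minimal shell sketch of that copy-modify-rename dance (the filenames are hypothetical; the final rename is what makes the replacement atomic, within a single filesystem):

                                                                          $ cp state.json state.json.tmp
                                                                          $ vi state.json.tmp              # make the changes on the copy
                                                                          $ mv state.json.tmp state.json   # rename(2) atomically swaps in the new file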

                                                                          Really using persistent memory in this way also requires memory safety. As one of the The Machine developers told me when we were discussing CHERI: with persistent memory, your memory-safety bugs last forever. You’ve now turned your filesystem abstractions into a concurrent GC problem.

                                                                          1. 1

                                                                            Excellent points; thank you.

                                                                            May I ask, are you the same David Chisnall of “C is not a low-level language” paper? That is probably my single most-commonly cited paper. My compliments on it.

                                                                            Your points are entirely valid, and that is why I have been emphasizing the “just for fun” angle of it. I do not have answers to some of these hard questions, but I think that at first, what is needed is some kind of proof of concept. Something that demonstrates the core point: that we can have a complex, rich, capable environment that is able to do real, interesting work, which in some ways exceeds the traditional *nix model for a programmer, which runs entirely in a hybrid DRAM/PMEM system, on existing hardware that can be built today.

                                                                            Once this point has been made by demonstration, then perhaps it will be possible to tackle much more sophisticated systems, which provide reliability, redundancy, resiliency, and all that nice stuff that enterprises will pay lots of money for.

                                                                            There is a common accusation, not entirely unjust, that the FOSS community is very good at imitating and incrementally improving existing implementations, but not so good at creating wholly new things. I am not here to fight that battle. What I was trying to come up with was a proposal to use some existing open technology – things that are already FOSS, already out there, and not new and untested and immature, but solid, time-proven tools that have survived despite decades in obscurity – and assemble them into something that can be used to explore new and largely uncharted territory.

                                                                            ISTM, based on really very little evidence at all, that HPE got carried away with the potential of something that came out of their labs. It takes decades to go from a new type of component to large-scale, highly-integrated mass production. Techies know that; marketing people do not. We may not have competitive memristor storage until the 2030s at the earliest, and HPE wanted to start building enterprise solutions out of it. Too much, too young.

                                                                            Linux didn’t spring fully-formed from Torvalds’ brow ready to defeat AIX, HP-UX and Solaris in battle. It needed decades to grow up.

                                                                            The Machine didn’t get decades.

                                                                            Smalltalk has already had decades.

                                                                            1. 1

                                                                              Reply notifications are working again, so I just saw this:

                                                                              May I ask, are you the same David Chisnall of “C is not a low-level language” paper? That is probably my single most-commonly cited paper. My compliments on it.

                                                                              That’s me, thanks! I’m currently working on a language that aims to address a lot of my criticisms of the C abstract machine.

                                                                              Something that demonstrates the core point: that we can have a complex, rich, capable environment that is able to do real, interesting work, which in some ways exceeds the traditional *nix model for a programmer, which runs entirely in a hybrid DRAM/PMEM system, on existing hardware that can be built today.

                                                                              I do agree with the ‘make it work, make it correct, make it fast’ model, but I suspect that you’ll find with a lot of these things that the step from ‘make it work’ to ‘make it correct’ is really hard. A lot of academic OS work fails to make it from research to production because they focus on making something that works for some common cases and miss the bits that are really important in deployment. For persistent memory systems, how you handle failure is probably the most important thing.

                                                                              With a file abstraction, there’s an explicit ‘write state for recovery’ step and a clear distinction in the abstract machine between volatile and non-volatile storage. I can quite easily do two-phase commit to a POSIX filesystem (unless my disk is lying about sync) and end up with something that leaves my program in a recoverable state if the power goes out at any point. I may lose uncommitted data, but I don’t lose committed data. Doing the same thing with a single-level store is much harder because caches are (as their name suggests) hidden. Data that’s written back to persistent memory is safe, data in caches isn’t. I have to ensure that, independent of the order that things are evicted from cache, my persistent storage is in a consistent state. This is made much harder on current systems by the fact that atomic with respect to other cores is done via the cache coherency protocol, whereas atomic with respect to main memory (persistent or otherwise) is done via cache evictions and so guaranteeing that you have a consistent view of your data structures with respect to both other cores and persistent storage is incredibly hard.

                                                                              The only systems that I’ve seen do this successfully segregated persistent and volatile memory and provided managed abstractions for interacting with it. I particularly like the FaRM project from some folks downstairs.

                                                                              There is a common accusation, not entirely unjust, that the FOSS community is very good at imitating and incrementally improving existing implementations, but not so good at creating wholly new things.

                                                                              I think there’s some truth to that accusation, though I’m biased from having tried to do something very different in an open source project. It’s difficult to get traction for anything different because you start from a position of unfamiliarity when trying to explain to people what the benefits are. Unless it’s solving a problem that they’ve hit repeatedly, it’s hard to get the message across. This is true everywhere, but in projects that depend on volunteers it is particularly problematic.

                                                                              ISTM, based on really very little evidence at all, that HPE got carried away with the potential of something that came out of their labs. It takes decades to go from a new type of component to large-scale, highly-integrated mass production. Techies know that; marketing people do not. We may not have competitive memristor storage until the 2030s at the earliest, and HPE wanted to start building enterprise solutions out of it. Too much, too young.

                                                                              That’s not an entirely unfair characterisation. The Machine didn’t depend on memristors though; it was intended to work with the kind of single-level store that you can build today and be ready to adopt memristor-based memory when it became available. It suffered a bit from the same thing that a lot of novel OS projects do: they wanted to build a Linux compat layer to make migration easy, but once they had a Linux compat layer it was just a slow way of running Linux software. One of my colleagues likes to point out that a POSIX compatibility layer tends to be the last piece of native software written for any interesting OS.

                                                                          2. 4

                                                                              I think files are more than just an abstraction over block storage; they’re an abstraction over any storage. They’re a crucial part of the UX as well. Consider directories… Directories are not necessary for file systems to operate (it could just all be flat files) but they exist purely for usability and organisation. I think even in the era of PMEM users will demand some way to organise information and it’ll probably end up looking like files and directories.

                                                                            1. 2

                                                                              Most mobile operating systems don’t expose files and directories and they are extremely popular.

                                                                              1. 3

                                                                                True, but those operating systems still expose filesystems to developers. Users don’t necessarily need to be end users. iOS and Android also do expose files and directories to end users now, although I know iOS didn’t for a long time.

                                                                                1. 3

                                                                                  iOS also provides Core Data, which would be a better interface in the PMEM world anyway.

                                                                                  1. 2

                                                                                    True, but those operating systems still expose filesystems to developers.

                                                                                      Not all of them do, no.

                                                                                    NewtonOS didn’t. PalmOS didn’t. The reason being that they didn’t have filesystems.

                                                                                    iOS is just UNIX. iOS and Android devices are tiny Unix machines in your pocket. They have all the complexity of a desktop workstation – millions of lines of code in a dozen languages, multiuser support, all that – it’s just hidden.

                                                                                    I’m proposing not just hiding it. I am proposing throwing the whole lot away and putting something genuinely simple in its place. Not hidden complexity: eliminating the complexity.

                                                                                  2. 2

                                                                                    They tried. Really hard. But in the end, even Apple had to give up and provide the Files app.

                                                                                    Files are an extremely useful abstraction, which is why they were invented in the first place. And why they get reinvented every time someone tries to get rid of them.

                                                                                    1. 4

                                                                                      Files (as a UX and data interchange abstraction) are not the same thing as a filesystem. You don’t need a filesystem to provide a document abstraction. Smalltalk-80 had none. (It didn’t have documents itself, but I was on a team that added documents and other applications to it.) And filesystems tend to lack stuff you want for documents, like metadata and smart links and robust support for updating them safely.

                                                                                      1. 1

                                                                                        I’m pretty sure the vast majority of iOS users don’t know Files exist.

                                                                                        I do, but I almost never use it.

                                                                                      2. 1

                                                                                        And extremely limiting.

                                                                                1. 2

                                                                                    I rent a dedicated server for approx. 50 euro a month (i7-6700K, 32GB RAM):

                                                                                    Mastodon, Pleroma, tt-rss, bookmarks (linkding), Matrix + bridges, website, PeerTube, calendar/contacts (Radicale)

                                                                                    VMs: e-mail (OpenBSD), some Matrix bridges

                                                                                  1. 1

                                                                                    Mind sharing where you’re renting your server from? Looks like you got a good deal.

                                                                                    1. 3

                                                                                      Until recently I had a server at prgmr.com which I think offers really good prices.

                                                                                        Nothing against it, I just got a better deal from someone I know ;-) https://openbsd.amsterdam/

                                                                                      I just host contacts and calendar for the whole family and run my 24/7 programs there.

                                                                                      1. 2

                                                                                          It looks like Hetzner’s dedicated offerings (same CPU spec).

                                                                                        1. 2

                                                                                          Hetzner. They have regular auctions for servers. I moved everything off of 4 Vultr VMs over to a single Hetzner dedicated. I even paid for the USB stick so I could install Void Linux on it. (There is network KVM access but you have to reserve it in 2 hour blocks. I only needed it for the install).

                                                                                      1. 9

                                                                                        Many times in my career I had the freedom of choice in regards to language and technology to use in a project.

                                                                                          Even though I know Lisp (to some extent), whenever I tried to prototype what I needed in Common Lisp or whatever, the lack of libraries, documentation and community was basically killing my attempt.

                                                                                          Sure, if I lived forever like a Strugatsky brothers character, I could write my own libraries, package manager, wikis and whatnot. But I don’t.

                                                                                        Whenever I read an article like this I get excited again about the Lisp prospect, but then I remember the past!

                                                                                        1. 10

                                                                                            I’ll just be the voice that says “my experience is different”. I have prototyped and run in production© programs in Common Lisp. They involve(d) connecting to a SOAP service, fetching an API, sending data to an FTP server, reading an SQLite DB and showing data on a website (using Sentry in production), selling products with Stripe, and sending emails with Sendgrid. I only wrote two straightforward libraries (string manipulation, a 10-line interface to Sendgrid’s API), and I learned a lot by writing documentation. I can’t write Python nor use its buggy ecosystem anymore! (Just last week, a pip 20.0 bug made it impossible to roll back to 19.3 or upgrade to 20.0.1 with pip itself; it fuck** my venv.)

                                                                                          edit: helpful links:

                                                                                          1. 6

                                                                                              Same here, I always revert to CL when prototyping or doing exploratory programming that ends up as programs that stay around. Also at work, for talking to Jira for example.

                                                                                            It really depends on your environment and needs, but libraries, documentation and community have rarely been an issue and I don’t think Clojure is going to help the OP in this regard if it is a problem for CL.

                                                                                              I’m always curious why CL is so divisive in this respect (leaving out the community). Why are libraries such an issue for some? Even if something is missing, it is easy to make something solid yourself.

                                                                                            Hell, I even made two mobile apps recently.

                                                                                            1. 3

                                                                                                Same. I did trading bots, distributed embedded systems, and signal processing in CL. I see the “insufficient ecosystem” complaint once in a while on the Internet and am always puzzled about what it is that people do to run into this problem.

                                                                                              If I had to single out a clear weakness it’d be the native GUI, but outside of that…

                                                                                              1. 1

                                                                                                So, where do you find your libraries then?

                                                                                                1. 1

                                                                                                    Most of them are in Quicklisp (a popular repo system for CL) and are one line away from installation. There’s tons of stuff really.
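                                                                                                    For example, from a shell (assuming SBCL with Quicklisp already set up; dexador is just an example library name):

                                                                                                    $ sbcl --eval '(ql:quickload "dexador")' --quit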

                                                                                                  1. 1

                                                                                                    Hi, start with this list, sufficiently curated to be useful: https://github.com/CodyReichert/awesome-cl

                                                                                                    1. 1

                                                                                                      Can’t speak for the original poster, but I’ve been surprised how much I’ve found with just a Google search with “common lisp” appended. To be more concrete, when I went looking for them, I found a nice ORM with an SQL generator, a Twitter API client, Vulkan bindings, etc. I can believe that there are some gaps though. It feels a bit like there was a lull in activity in the late 2000s/early 2010s, and then perhaps a more recent pickup? You definitely find code that’s five years old, and code that was updated in the last week, and not much in between. I think discoverability may have taken a hit when Quickdocs shut down last year – Quickref doesn’t have a built-in text search, although honestly it still pops up in those Google searches.

                                                                                                      The libraries are mostly by individual coders or small groups, though, rather than corporation-supported frameworks. I would say it’s a bit more discoverable and usable than, say, the Haskell ecosystem, though of course nothing like Perl/Python/Node levels of availability. I like Quicklisp as a package management system, though CLPM is an intriguing new development.

                                                                                                2. 8

                                                                                                  In this case you could probably give Clojure a try. While I disagree with some points from this article (static typing and TCO among them) Clojure definitely has the libraries (or rather, wrappers), package manager, wiki and whatnot.

                                                                                                  1. 4

                                                                                                    I tried doing some Clojure/ClojureScript stuff, and I found that Clojure kind of lacked “big picture” documentation around its core APIs. There are docs for functions, but they’re often very minimal, so when you’re looking at a new API it’s hard to figure out the core concepts.

                                                                                                    I think a lot of the concepts have high-level guides somewhere, but it’s kind of all over the place. I think Clojure targets people more patient/“better” than me. But I think, for example, that Clojure’s edn package docs could look more like Python’s pickle package docs and everyone would benefit (I’m counting the edn github post as part of the package docs).

                                                                                                    Though, to your point, you can always “downgrade to Java” super smoothly when you want third-party stuff. And the stack, you know, works (the biggest problem I’ve had as a beginner was around tooling but hey, I use Python and JS for a living, you have to make things pretty busted to scare me)

                                                                                                1. 10

                                                                                                  Great writeup, lots of practical advice. I’ve dabbled in similar projects in the past and it worked well when I had efficient software for the things I wanted to do. The sharp edge I kept hitting over and over again is modern websites with lots of JavaScript. On the one hand, advanced web applications are the saviour of the Linux desktop user—an opportunity to interoperate in the same services as your closed-source peers—but on a 10–15 year-old CPU and with a few GB of RAM it just sucks. If you enjoy playing with NoScript all day maybe it’s okay, but I’m not willing to go that far. Until my usage patterns change I’m stuck on the treadmill even for my non-work computing.

                                                                                                  1. 6

                                                                                                    Isn’t this overstating the impact of NoScript a bit? You generally turn it on for a website you want to use that doesn’t work with JavaScript disabled and that’s it. It works from then on.

                                                                                                    1. 2

                                                                                                      IME NoScript’s impact varies greatly depending on how much the thing you’re using spread its code out over different domains & CDNs.

                                                                                                      1. 1

                                                                                                        For many “modern”, “user-friendly” websites, the JavaScript bits that are actually useful and have a functional impact on the website are only an (often very tiny) subset of all the JS code that runs when you open it. Lots of the JS that runs is advertising and/or tracking code that you can disable without losing functionality. Once all that crap is turned off, 10 year-old CPUs can sometimes handle a page just fine, but you do have to fiddle with it a little and… yeah.

                                                                                                        Oops, I don’t think that makes a difference – I think I was misremembering how NoScript works. I haven’t used it in a while.

                                                                                                        1. 1

                                                                                                          Do you have an example of a page that you think doesn’t work well on a 10+ year-old CPU? I have a CPU that was not the latest back in 2007, and there are a few tasks where it starts to show (such as multiparty videoconferencing in 4K or unaccelerated WebGL VR), but I haven’t had issues with run-of-the-mill webapps yet.

                                                                                                      2. 1

                                                                                                        I’m curious what you consider “a few GB” – I’ve never had more than 4 and it always seems like more than enough for the web?

                                                                                                      1. 1

                                                                                                        IMO the online resources for CL have improved greatly over the last couple of years. I mean, common-lisp.net is looking nice, what a revolution!

                                                                                                        And the resources and tooling in general have improved. People have less and less ground to bash CL :)

                                                                                                        • the stack is hard to set up? Portacle.
                                                                                                        • lack of an editor besides Emacs? There are good plugins for Atom (SLIMA), VSCode, Sublime Text, Jupyter Notebooks, Vim (of course), and there are more choices.
                                                                                                        • lack of GUI libraries? The Cookbook shows decent ones, though admittedly there is no Qt5 wrapper (but I hear it’s coming…)
                                                                                                        • lack of libraries? Awesome-cl at least helps the discovery, and py4cl is good.
                                                                                                        • CL is not typed? (I hear this, and “CL is not a compiled language”, once in a while.) The Cookbook shows gradual typing.
                                                                                                        • Quicklisp’s monthly releases model is too slow? Ultralisp.
                                                                                                        • lack of a flagship software? I see three, ladies and gents: pgloader, ScoreCloud and Nyxt! (aside from more technical and older ones, of course)

                                                                                                        We saw 2 new CL books published in 2020, implementations are active, Google (I mean, Google, woaaa) still uses and hacks on SBCL, we saw four job announcements in a month on reddit… something’s happening, smart guys should jump in!

                                                                                                        1. 2

                                                                                                          There is a Qt5 wrapper but it is specific to ECL: https://gitlab.com/eql/EQL5