1. 6

    This reminds me of https://motherfuckingwebsite.com/ and http://bettermotherfuckingwebsite.com/. I tend to use something similar to the 2nd website when mocking out websites in projects. A similar option is classless CSS, where there’s styling, but it’s limited to standard HTML elements rather than new elements built out of divs and classes. There’s value to both, but sometimes it’s nice to go back to a more minimal approach.

    I really wish there were more popular brutalist websites.

    1. 7

      brutalist websites

      I don’t think I’ve ever seen a website try to load so many images simultaneously.

      1. 4

        100%. Perfect use-case for some lazy-load scripting.

        1. 1

          Most of the sites seem pretty complex to me.

          This is a site where people can submit their work, and it’s pretty clear that “brutalist” means different things to different people (just like the architectural style).

          1. 1

            Hah, I forgot about that - good callout - I’ve had that page bookmarked for a while and just opened it for a second to make sure it was what I was thinking of.

            Brutalism does seem to mean different things for different people, but at least for me, I like when sites don’t overuse css/js but are still completely functional (See GoatCounter for an idea of what I’m thinking of). The brutalist websites page is definitely a good example where an aggressive focus on minimalism hurts the usability of the site. Everything good still needs to be in moderation.

            1. 1

              931 MB. Impressive.

              1. 1

                Thank you so much for measuring this. I really wanted to but also really didn’t want to.

          1. 21

            This hits on a bugbear of mine where people say “just do X and then just do Y, simple enough”; emphasis on the word “just”. It’s almost never just a matter of “just” and a lot of seemingly trivial tasks end up down rabbit holes where a bulk of the venturing could have been avoided by a slightly lengthier planning process, or “thinking more about it” process if one is averse to the concept of planning.

            Reminds me of the (unattributed?) saying:

            Weeks of coding can save you hours of planning

            1. 17

              To be fair, sometimes, hours of coding can save you weeks of planning as well.

              1. 5

                Wholly agreed. My own experience has unfortunately gravitated more towards the “just this and just that” mentality.

              2. 4

                You’re not alone - I reference this piece often when people overuse “just”. https://alistapart.com/blog/post/the-most-dangerous-word-in-software-development/

                1. 6

                  At some point it occurred to me that whenever I used the word “just” in a code review comment, it was a sign that I wasn’t being as constructive and helpful as possible. Now when I catch myself using that word, I stop and consider whether my comment could come off as condescending or flippant even if that wasn’t my conscious intent. The answer is “yes” often enough to keep me vigilant about the word.

                2. 2

                  Ha ha ha - had not heard that one. As a marketer, I always roll my eyes when people say (and they often do!) “can’t they just build that in, like, 2 hours?”

                1. 2

                  A good friend of mine just released a live swing/jazz album that I helped record and mix, so probably listening to that on repeat for a while.

                  I also plan on playing a bunch of Forza Horizon 5. I sunk a ton of time into Forza Horizon 4 and just need a weekend where I don’t really do anything of consequence.

                  1. 18

                    Katie McLaughlin gave a really good talk at Kiwi Pycon (and PyCascades where I saw it) where she dives into why all of the oddities in wat do what they do. It’s a bit longer, but definitely worth a watch. https://www.youtube.com/watch?v=AxJB2sWMS-k

                    That being said, I tend to view wat as a fun palate cleanser, more than a super serious talk. I also keep around a number of talks and articles which I tend to view in a similar light. I like revisiting them every once in a while when I’m having a bad day.

                    1. 1
                      • Traveling to NYC for work
                      • Working on my comically over-engineered chat/bot system to make progress on v2, a rewrite I hope to eventually run as a service
                      • Cleaning up go-irc/irc for a v4 release
                      • Deciding what to do about the base16 theme repos (and an owner who has been absent for quite a while)
                      1. 4

                        lib/pq: An early Postgres frontrunner in the Go ecosystem. It was good for its time and place, but has fallen behind, and is no longer actively maintained.

                        Latest release was 6 days ago: https://github.com/lib/pq/releases/tag/v1.10.3, so it seems that it is still maintained.

                        1. 6

                          The README explicitly says:

                          This package is currently in maintenance mode. For users that require new features or reliable resolution of reported bugs, we recommend using pgx which is under active development.

                          1. 8

                            “In maintenance mode” does not mean that it is not actively maintained.

                            1. 5

                              I would argue that the statement “Maintainers usually do not resolve reported issues” does mean it’s not actively maintained.

                              That being said, this is probably getting into the semantics of what “actively maintained” means. For me, it means there’s active development and that reported issues will be resolved, neither of which seem to be the case for lib/pq at the moment.

                        1. 2

                          Has anyone considered trying an NTFS root filesystem yet? It might be an… interesting alternative to partitioning for dual boots.

                          1. 3

                            I’m fairly certain it’s not possible due to different features between the filesystems - in particular no suid means sudo won’t work. I’m also not sure mapping to different users on Linux works properly, though I haven’t checked in a while.

                            1. 1

                              That can probably be worked around with creative use of extended attributes, if someone really wants to do it.

                              1. 1

                                I’m pretty sure NTFS has something for setuid, since Interix supported it.

                                1. 9

                                  NTFS is a lot like BeFS: the folks talking to the filesystem team didn’t provide a good set of requirements early on and so they ended up with something incredibly general. NTFS, like BeFS, is basically a key-value store, with two ways of storing values. Large values can (as with BeFS) be stored in disk blocks, small values are stored in a reserved region that looks a little bit like a FAT filesystem (BeFS stores them in the inode structure for the file).

                                  Everything is layered on top of this. Compression, encryption, and even simple things like directories, are built on top of the same low-level abstraction. This means that you can take a filesystem with encryption enabled and mount it with an old version of NT and it just won’t be able to read some things.

                                  This is also the big problem for anything claiming to ‘support NTFS’. It’s fairly easy to support reading and writing key-value pairs from an NTFS filesystem but full support means understanding what all of the keys mean and what needs updating for each operation. It’s fairly easy to define a key-value pair that means setuid, but if you’re dual booting and Windows is also using the filesystem then you may need to be careful to not accidentally lose that metadata.

                                  I also don’t know how the NTFS driver handles file ownership and permissions. In a typical *NIX filesystem, you have a small integer UID combined with a two-byte bitmap of permissions. You may also have ACLs, but they’re optional. In contrast, NTFS exposes owners as UUIDs (much larger than a uid that any *NIX program understands) and has only ACLs (which are not expressed with the same verbs as NFSv4 or POSIX ACLs), so you need some translation layer and need to be careful that this doesn’t introduce incompatibilities with the Windows system.

                                  You’re probably better off creating a loopback-mounted ext4 filesystem as a file in NTFS and just mounting the Windows home directory, if you want to dual boot and avoid repartitioning.
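
                                  A minimal sketch of that loopback approach (the mount points and device name here are hypothetical, and the mount steps themselves need root):

                                  ```shell
                                  # Hypothetical layout: the NTFS volume is mounted somewhere writable and
                                  # we keep the Linux root filesystem as a single image file on it.
                                  img=${NTFS_MOUNT:-/tmp}/linux-root.img

                                  # Create a sparse 1 GiB file to hold the ext4 image.
                                  truncate -s 1G "$img"

                                  # Format it as ext4 (formatting a regular file needs no root privileges;
                                  # -F skips the "not a block special device" prompt).
                                  command -v mkfs.ext4 >/dev/null && mkfs.ext4 -F -q "$img"

                                  # Mounting does need root, so these steps are shown as comments only:
                                  #   sudo mount -o loop "$img" /mnt/linux-root
                                  #   sudo mount -t ntfs-3g /dev/sdXN /mnt/windows-home   # shared Windows data
                                  ```

                                  From there the Windows data is visible from Linux without either OS having to understand the other’s native filesystem semantics.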

                                  Note that WSL1 uses NTFS and provides Linux-compatible semantics via a filter driver. If someone wants to reverse engineer how those are stored (wslpath gives the place they live in the UNC filesystem hierarchy) then you could probably have a Linux root FS that uses the same representation as WSL and also uses the same place in the UNC namespace so that Windows tools know that they’re special.

                                  1. 1

                                    What is used by WSL 2?

                                    1. 5

                                      WSL2 is almost totally unrelated to WSL1; it’s a Linux VM running on Hyper-V (I really wish they’d given WSL2 a different name). Its root FS is an ext4 block device (which is backed by a file on the NTFS file system). Shared folders are exported as 9p-over-VMBus from the host.

                                      This is why the performance characteristics of WSL and WSL2 are almost exactly inverted. WSL1 has slow root FS access because it’s an NTFS filesystem with an extra filter driver adding POSIX semantics but the perf accessing the Windows FS is the same because it’s just another place in the NTFS filesystem namespace. WSL2 has fast access to the root FS because it’s a native Linux FS and the Linux VM layer is caching locally, but has much slower access to the host FS because it gets all of the overhead of NTFS, plus the overhead of serialising to an in-memory 9p transport, plus all of the overhead of the Linux VFS layer on top.

                                      Hopefully at some point WSL will move to doing VirtIO over VMBus instead of 9p. The 9p filesystem semantics are not quite POSIX and the NTFS semantics are not 9p or POSIX, so you have two layers of impedance mismatch. With VirtIO over VMBus, the host could use the WSL interface to the NTFS filesystem and directly forward operations over a protocol that uses POSIX semantics.

                                      There are some fun corner cases in the WSL filesystem view. For example, if you enable developer mode then ln -s in WSL will create an NTFS symbolic link. If you disable developer mode then unprivileged users aren’t allowed to create symbolic links (I have no idea why) and so WSL creates an NTFS junction. Nothing on the system other than WSL knows what to do with a junction that refers to a file (the rest of Windows will only ever create junctions that refer to directories) and so will report the symlink as a corrupted junction. This is actually a pretty good example of the split between being able to store key-value pairs and knowing what they mean in NTFS: both WSL and other Windows tools use the same key to identify a junction but WSL puts a value in that nothing else understands.

                                      1. 1

                                        Actual Linux filesystems. Because it’s just a Linux kernel, in Hyper-V, with dipping mustards.

                                2. 2

                                  Why not go the other way around and boot Windows off of btrfs? :D

                                  1. 1

                                    This is only a proof of concept at this stage - don’t use this for anything serious.

                                    But really, why not, you have backups… right? :P

                                1. 67

                                  I love rebase.

                                  There are a couple things rebase enables that are really powerful which are, unfortunately, not possible in fossil.

                                  The first is a clean history.

                                  My commit history as I create it has no value to anybody else. I “finally got this bit working”, I go “This is close but I’m going to try a totally different approach now,” and I leave my computer for the day. All of these are valuable to me, but have no place in the long lived history of my source code. Why?

                                  A simple misconception. Commit history is not supposed to be how I think, but how the software committed evolved.

                                  I commit then run tests. Should I be committing the failed results and then committing the successful ones? Should I be cluttering my history with commits like “fix tests” since I commit all over? Or should I be producing nice, small, specific commits for specific features or specific points in my software’s progression towards its current form?

                                  Bisect means nothing if I have many small commits where I repeatedly broke and unbroke a feature. Bisect means a lot when I have a specific commit that makes a set of changes, or when I have a specific commit that fixes a different bug. It means nothing when I have to try “Hey, does this pass our CI as it stands?” (welcome to big-corp coding).
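
                                  That bisect point is easy to demonstrate with a throwaway repo (all file and commit names here are made up) where every commit is a complete, self-contained change, so `git bisect run` can find the culprit mechanically:

                                  ```shell
                                  set -e
                                  repo=$(mktemp -d) && cd "$repo"
                                  git init -q
                                  git config user.email demo@example.com
                                  git config user.name demo

                                  # Four self-contained commits; the third quietly introduces a bug.
                                  echo 'feature A' >  app.txt; git add app.txt; git commit -qm 'add feature A'
                                  echo 'feature B' >> app.txt; git commit -qam 'add feature B'
                                  echo 'BUG'       >> app.txt; git commit -qam 'refactor feature A'
                                  echo 'feature C' >> app.txt; git commit -qam 'add feature C'

                                  # bad = HEAD, good = first commit; exit 0 means good, non-zero means bad.
                                  git bisect start HEAD HEAD~3
                                  git bisect run sh -c '! grep -q BUG app.txt'
                                  # bisect names 'refactor feature A' as the first bad commit -- which only
                                  # works because no commit in the range breaks and then un-breaks things.
                                  ```

                                  With a history full of “fix tests” / “unbreak again” commits, the good/bad signal flips back and forth and the same search tells you much less.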

                                  So point by point:

                                  1. Yes! Rebase is dangerous! Don’t blindly use this command, know what it is you want at the end.
                                  2. Cleaning history so it becomes about the software and not about your brain is a new and useful feature.

                                  2.1) Nope, history all still there, just because you don’t know where the work started doesn’t mean you don’t know where the software gained the work.

                                  2.2) You can merge this way in git too. You can diff two commits in git too. And then you can rebase because again, it’s not about my brain but about the software.

                                  1. Siloed development? “Hey, can you check this branch? It’s mine and I might clobber it later” is very different from “It’s my code and you can’t see it until it’s all done.” Master/trunk can’t be rebased. Everything else is fair game.
                                  2. So what? Do you really want my commit to show when the work was done at 2 PM instead of 10 AM?
                                  3. Who cares how a line of code came together, so long as the reason for it to exist (commit message) and a clean story for how it fit into the previously-existing project both exist?
                                  4. How your brain works is not so valuable it must be imprinted on your commit history.

                                  6.1) They were thinking “blargh.” Obviously. That’s why it’s an intermediate commit.

                                  6.2) Nothing wrong with small check-ins in a linear progression, rebased into complete commits that add things in small and appropriate ways.

                                  6.3) “blargh” “aargh” “fix the thing” “wtf is with java” “dude. stop” “I AM A ZYGON.” I’d rather have a nice commit shaped by a rebase into a useful object, because…

                                  6.4) Cherry picks also work better with single commits that are useful, instead of five commits all that need to go together to bring a single feature across branches. Also, notably, commits with terrible useless messages. See 6.3.

                                  6.5) You want to back out just the testing fix? Or the whole feature while reconsidering how it fits in the existing code. Again, rebased commits for that clean history make this easier.

                                  1. Sure. Rebasing for a merge, maybe a cherry-pick in fossil’s model is actually better. Won’t argue with the local SCM semantics for performing a nice linear merge.
                                  2. Dishonest only if you think SCM is about the developer’s brain, and not the software.

                                  Really, I worry the author has had too much enterprise coding experience, where all your work becomes a single commit fitting the Jira formatting rule, and you end up with a multi-dozen-line commit because that way the pre-CI checks can pass. I understand being in such a system and thinking rebase is to blame. Maybe your org should trust developers a little more, and spend more time saying “don’t say blargh” instead of “all one commit.”

                                  1. 27

                                    The best analogy of this I’ve come up with is your private work is like a lab book (meticulous, forensic) and the public/merged branches are the thesis (edited, diversions gone, to the point).

                                    1. 4

                                      As I started reading your comment I was convinced you had not read the article, but I see you did, as you addressed the points individually. The author does negate your first claim and shows the evidence. I think you mean that some things are not achievable the same way they are in git. Cleaner history is achievable in Fossil in a way that is a superset of git, as explained thoroughly in the article.

                                      I am a git user and have never used Fossil. Git works fine for me and has proven to be a reliable and snappy VCS from day one. I don’t have any interest in moving to Fossil, nor am I a member of the group of people who advocate for changes in git, especially not to git’s principles. It works for me; the parts of it I dislike or would have built in a different way are acceptable choices by people who offered an immensely useful tool to the world. That isn’t to say that valid criticism doesn’t exist or that there aren’t things that could be solved better. I think this article very strongly argues that rebase is just a hacky workflow whose results could be achieved by resorting to better-designed functionality. The author did this masterfully, but on the other hand there is nothing wrong with having a workflow in muscle memory and using it, even if said workflow relies on glitches or rough shortcuts.

                                      1. Do you not mean the opposite? The way I see it, it doesn’t make sense to call a tool ‘dishonest’; one could call it potentially confusing. But it does what it does - how is that possibly dishonest?

                                      Regardless of personal opinions, the article was so clear, explaining things so well and in such detail, that it was a joy to read. This is the mind of a great engineer at work, in a way we don’t see so often these days.

                                      1. 3

                                        rebase is just a hacky workflow whose results could be achievable by resourcing to better designed functionality

                                        That claim is argued with examples and suggestions which do not share the same assumptions. There may be a case for rebase being a hacky way to go about making changes to past/private commits, but it was not made in this post. Rather, the case was made that any manipulation of past commits is technically and socially wrong.

                                        I understand Fossil allows overlaying new information on past commits; however, there comes a time for messing with actual commits, and not much is lost when you change a parent commit.

                                      2. 3

                                        Maybe I’m missing something, but it seems like the author is specifically talking about git rebase and not git rebase --interactive (at least for the majority of the article). Many of their points are valid for the former, but this response seems to be speaking almost exclusively to the latter.

                                        That being said, I don’t think Fossil supports rewriting history in any form, so quite a few of your responses are critiques of Fossil, but not really the article. Similar to you, I’ll try to go through all the points and show what I think the original author was getting at. On a side note, I personally don’t think that rebasing all commits so the master branch is flat is very helpful, but some people seem to like it. In any case, that specific use is what I’ll be speaking to, because it seems to be what the article is talking about.

                                        1. Everyone seems to agree on this, no sense speaking more about it.
                                        2. Raw git rebase is more an alternative to merging in prod than it is cleaning up the commits. Commit cleanup is often useful, while blindly rebasing on prod rather than merging it in isn’t always the best option.
                                          1. Your argument is saying “some data was lost, but everything is still there”. I have to agree with the original author on this one - a rebase drops the parent commit where the branch first came from, so all the history is not still there (or is purposefully misrepresented). Also see my response to #4.
                                          2. I think the point they were making is that the claimed benefit from rebasing (“rebasing provides better feature branch diffs”) can be easily achieved by other means - in this case, merging the parent branch back in to the feature branch. On a related note, there are very subtle, but potentially fairly dangerous, differences when you look at the diff from the HEAD to the feature branch without merging in prod, so either rebasing or merging in prod are 2 ways to solve this. That is what the graphics and table show.
                                        3. While I tend to view personal branches as potentially rewritten at any time, have you ever tried to base your branch on someone else’s when they’re using a rebase-based workflow? It’s a nightmare. Trying to get your changes to re-apply on top of their rebased changes often causes conflicts which are very hard to recover from.
                                        4. The issue is not “when was work done”, but “what was the order the work was done in”. Using a rebase workflow, you could easily end up with commits later in the history which were much earlier chronologically. This is extremely confusing if you’re trying to track down what actually happened.
                                        5. I’m not sure what you’re getting at here - Fossil seems to allow amending commit messages to fix information or a mistake at a later date; you can’t do that in git without rebasing… and once something is in prod, that really shouldn’t happen. There have been many times when I’ve wanted to go back and add more information to a commit message (or fix a typo) after it was merged in.
                                        6. For these, I tend to agree with your response - this seems to be one of the only places where the Fossil article is speaking about an interactive rebase and I think they really miss the point.
                                        7. Not much to respond to here.
                                        8. From the original article, “Rebasing is an anti-pattern. It is dishonest. It deliberately omits historical information. It causes problems for collaboration. And it has no offsetting benefits.” I agree that rebasing is often an anti-pattern, but I’m purposefully excluding the modification of local commits to get a more useful history. Rewriting local history can definitely have benefits though, so I don’t think they’re completely right.
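
                                        For what it’s worth, the “merge the parent branch back in” idea from 2.2 can be sketched in a throwaway repo (branch and file names here are made up):

                                        ```shell
                                        set -e
                                        repo=$(mktemp -d) && cd "$repo"
                                        git init -q
                                        git config user.email demo@example.com
                                        git config user.name demo
                                        trunk=$(git symbolic-ref --short HEAD)   # 'main' or 'master', depending on git version

                                        echo 'base' > app.txt; git add app.txt; git commit -qm 'initial'

                                        # The feature branch does its work...
                                        git checkout -qb feature
                                        echo 'feature work' >> app.txt; git commit -qam 'add feature'

                                        # ...while the trunk moves on underneath it.
                                        git checkout -q "$trunk"
                                        echo 'other work' > other.txt; git add other.txt; git commit -qm 'other work'

                                        # Merging the trunk back into the feature branch (no rebase, no rewritten
                                        # history) makes the trunk..feature diff show only the feature's changes.
                                        git checkout -q feature
                                        git merge -q --no-edit "$trunk"
                                        git diff --stat "$trunk" feature   # only app.txt appears
                                        ```

                                        So the “better feature branch diffs” benefit really is available either way; the difference is whether the merge commit is recorded or the branch point is rewritten.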

                                        I often wish Git’s UI was clearer - the multiple uses of “rebase” seems similar to the many things “checkout” can do. To make the distinction clearer in my head, I personally view git rebase --interactive as a git rewrite-history command. While it may share some of the internals of rebase, it has quite a different goal from the plain “rebase” action.
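
                                        If it helps, you can encode that mental model directly with an alias (the name rewrite-history is made up, of course):

                                        ```shell
                                        # Use a throwaway HOME so this sketch doesn't touch your real config.
                                        export HOME=$(mktemp -d)

                                        # Give the history-rewriting use of rebase its own, clearer name.
                                        git config --global alias.rewrite-history 'rebase --interactive'

                                        # Now "git rewrite-history main" reads as what it actually does.
                                        git config --global --get alias.rewrite-history
                                        ```

                                        The alias still shares rebase’s machinery, but at least the intent is visible at the call site.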

                                        I hope this helps shed some light on their opinions, even if you may not agree with all of it.

                                        TL;DR: there should be a distinction made between rebasing to keep a flat merge history and rewriting feature-branch commits to make them more useful. The first can cause quite a bit of confusion and lead to a more misleading history, while the second can be a very valuable tool.

                                        1. 1

                                          Interactive rebase should give you only those abilities available from the commandline, just with a nicer interface.

                                          I’m OK with fossil commits being append-only. That doesn’t bother me, I love the OpenCrux database which offers an immutable base. A similar thing for commits is an excellent idea.

                                          But so is modifying the stream of commits to match when commits hit mainline. And so is merging or splitting commits. And so is ordering the work not by how a spread-out team might complete it, with multiple parallel useless checkins a day, but by what matters long term: in what order did this work introduce regressions to the codebase.

                                          1. 1

                                            In general I agree with you - I like modifying the stream of commits, but I really only like doing it before they hit main… and I don’t like forcing main to be a straight line without merges. I was primarily trying to point out that most of their arguments focus on rebasing commits to maintain a straight line on main, and not on rewriting history for the sake of a clearer commit log. I think there is very little value to the former (most times), and plenty of value for the latter.

                                            Again, I really dislike how “rebase” has often been taken to mean “rewriting history” in git, because in a DAG, a rebase is a specific operation. It’s unclear which people are talking about during this conversation and I think some of the wires may have been crossed.

                                        2. 2

                                          I generally agree that git rebase is fine as long as one knows exactly what one is doing and is using the tool to make deliberate changes and improve the state of the project, but

                                          So what? Do you really want my commit to show when the work was done at 2 PM instead of 10 AM?

                                          2PM vs. 10AM is unlikely to matter, but it often matters whether it was yesterday or Thursday 2 weeks ago, which is before we had the meeting about X,Y,Z. I don’t go about memorizing the commit timestamps in my repositories, but I still find them useful occasionally. I wish we’d all be more careful about avoiding argument from lack of imagination in our debates.

                                          1. 5

                                            but it often matters whether it was yesterday or Thursday 2 weeks ago, which is before we had the meeting about X,Y,Z.

                                            Not long term, which is where SCM exists.

                                            Long term those distinctions turn into a very thin slice of time, and people forget about the discussions that happened outside the commit history. Thus all that remains is a commit message in a line.

                                        1. 25

                                          How has AGPL failed? Quoting the introduction to Google’s own policies, which are the top hit for “agpl google” on DuckDuckGo:

                                          WARNING: Code licensed under the GNU Affero General Public License (AGPL) MUST NOT be used at Google. The license places restrictions on software used over a network which are extremely difficult for Google to comply with.

                                          This seems like a resounding success of AGPL.

                                          Proprietary distributed systems frequently incorporate AGPL software to provide services.

                                          Who, and which software packages? This isn’t just about naming and shaming, but ensuring that those software authors are informed and get the chance to exercise their legal rights. Similarly, please don’t talk about “the legal world” without specific references to legal opinions or cases.

                                          I feel like this goes hand-in-hand with the fact that you use “open source” fourteen times and “Free Software” zero times. (The submitted title doesn’t line up with the headline of the page as currently written.) This shows an interest in the continued commercialization of software and exploitation of the commons, rather than in protecting Free Software from that exploitation.

                                          1. 8

                                            How has AGPL failed?

                                            I said how in the article:

                                            The AGPL was intended, in part, to guarantee this freedom to users, but it has failed. Proprietary distributed systems frequently incorporate AGPL software to provide services. The organizations implementing such systems believe that as long as the individual process that provides the service complies with the AGPL, the rest of the distributed system does not need to comply; and it appears that the legal world agrees.

                                            The purpose of the AGPL is not to stop commercial users of the software, it’s to preserve the four freedoms. It doesn’t preserve those freedoms in practice when it’s been used, so it’s a failure.

                                            But really AGPL doesn’t have anything to do with this. No-one claims AGPL is a license like the one I describe in the article.

                                            Proprietary distributed systems frequently incorporate AGPL software to provide services.

                                            Who, and which software packages?

                                            mongodb is a high-profile example, used by Amazon.

                                            1. 10

                                              I’m fairly certain that the version of mongodb that Amazon based DocumentDB off of was Apache licensed, so I don’t think that applies here. From what I’m seeing, they also explicitly don’t offer hosted instances of the AGPL licensed versions of Mongo.

                                              1. 5

                                                But really AGPL doesn’t have anything to do with this. No-one claims AGPL is a license like the one I describe in the article.

                                                Then your article headline is misleading, is it not?

                                                1. 2

                                                  This is true. MongoDB changed their license, in response to Amazon using forks of their own software to compete with them.

                                                  1. 1

                                                    That’s fair, you’re probably right. There are lots of other hosted instances of MongoDB that used the AGPL version though, so MongoDB is still the highest-profile example of this. That was the motivation for MongoDB’s move to SSPL.

                                                  2. 2

I don’t see a laundry list here. I appreciate that you checked your examples beforehand and removed those which were wrong, but now there’s only one example left. The reason that I push back on this so heavily is not just because I have evidence that companies shun AGPL, but because I personally have been instructed by every employer I’ve had in the industry that AGPL code is unacceptable in their corporate environment, sometimes including AGPL developer tools! It would have been grounds for termination at three different employers, including my earliest and my most recent.

Regarding MongoDB, I have no evidence that AWS violated the terms of the AGPL, and they appear to have put effort into respecting it somewhat. It seems that MongoDB’s owners were unhappy that their own in-house SaaS offering was not competing well enough with others, and they chose licensing as their way to fight. Neither of these companies are good, but none of them appear to be disrespecting the AGPL.

                                                  3. 4

                                                    the agpl says

                                                    Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software.

                                                    in other words, you cannot run your own proprietary fork of an agpl program; you have to offer users the modified sources. it says nothing about the sources of the other programs that that program communicates with, or the network infrastructure and configuration that comprises your distributed system.

                                                    1. 3

                                                      Yes. This is how we define distributed systems, in some security contexts: A distributed system consists of a patchwork network with multiple administrators, and many machines which are under the control of many different mutually-untrusting people. Running AGPL-licensed daemons on one machine under one’s control does not entitle one to any control over any other machines in the network, including control over what they execute or transmit.

                                                      Copyright cannot help here, so copyleft cannot help here. Moreover, this problematic layout seems to be required by typical asynchronous constructions; it’s good engineering practice to only assume partial control over a distributed system.

                                                      1. 1

                                                        So, that seems fine then? What’s the problem with that?

                                                        1. 1

                                                          the problem, as the article says, is that we have no copyleft license that applies to entire distributed systems, and it would be nice to. not so much a problem with the agpl as an explanation of why it is not the license the OP was wishing for.

                                                          1. 4

                                                            I explain more upthread, but such licenses aren’t possible in typical distributed systems. Therefore we should not be so quick to shun AGPL in the hopes that some hypothetical better license is around the corner. (Without analyzing the author too much, I note that their popular work on GitHub is either MIT-licensed or using the GH default license, instead of licenses like AGPL which are known to repel corporate interests and preserve Free Software from exploitation.)

                                                            1. 2

                                                              We have Parity as a start. Its author says:

“Parity notably strengthens copyleft for development tools: you can’t use a Parity-licensed tool to build closed software.”

Its terms are copyleft for other software you “develop, operate, or analyze” with licensed software. That reads broad enough that anything an operator of distributed systems both owns and integrates should be open sourced.

                                                      1. 16

                                                        In addition to the license not actually being open source, the bot itself has a number of technical issues.

                                                        I’ve dealt with IRC quite a bit in the past - I maintain a Go IRC library at https://github.com/go-irc/irc, contributed to many others, written a rust IRC library, and one of my main personal projects has a large portion of code which was written to interact with IRC.

                                                        1. The posted bot doesn’t handle trailing arguments properly (some messages will be reported as starting with : when that is incorrect).
                                                        2. If a PING is sent during registration, it will not be able to connect. Many servers use this as a form of protection against some DoS attacks.
                                                        3. It only responds to PRIVMSG, not NOTICE or CTCP ACTIONs.
                                                        4. It only sends PRIVMSGs.
                                                        5. It is possible for multiple messages to be returned in one read, but this code can only handle one at a time.
                                                        6. Messages are not parsed properly. It is valid for lines to start with :. These are often “server messages”. Similarly, IRCv3 defines message tags, which are parsed when the line starts with @. It’s not technically correct, but I believe Twitch’s IRC implementation sends message tags even if the client doesn’t support receiving them. Because of this, some PING messages may be ignored because it’s valid for a server to send :some!user@server PING :SOMETHING.
                                                        7. If I’m nitpicking, there are completely unnecessary allocations. write! should be used in place of format! because then you don’t need to allocate a string, convert it to bytes, then write it.

                                                        It’s a good start to an interesting concept, but I would caution people against using this for both the license and the reasons outlined above.

                                                        1. 2

                                                          Yep, for sure some things to iron out, but the goal is simplicity. A quick look at go-irc and clearly it is much more complex and has dependencies. Which is understandable considering your projects and this project have different goals! ☺

                                                          All of those points could be changed if needed. But if you don’t need to handle particular cases a server presents, there is no point. Otherwise you can create a patch or branch to handle it.

                                                          1. 4

                                                            Yep, I understand that they have different goals. I really do like the idea of this project. Being able to write bots in a super low friction language while the core is in something like rust is an awesome goal. Also go-irc has no runtime dependencies - they’re only for testing. One of my explicit goals was to have no external deps.

                                                            Otherwise you can create a patch or branch to handle it.

                                                            Yes I have the knowledge to fix those things, but also under the current license, I can’t create a patch for it without first asking for permission. Additionally, this could be closed sourced tomorrow and I couldn’t do anything about it.

                                                            You are of course free to do what you want with your code, but many people (myself included) would be put off enough by the license to not bother.

                                                            1. 1

                                                              You are of course free to do what you want with your code, but many people (myself included) would be put off enough by the license to not bother.

                                                              Yep and I fully acknowledge this. My goal isn’t popularity, it’s building good software while not being taken for a ride.

                                                          2. 1

                                                            I wouldn’t call 3+4 technical issues unless they were planned features.

                                                            No one forces you to implement every part of the spec, and I find ‘only handling PRIVMSG’ a 100% valid thing to do for an irc bot. Also not implementing IRCv3 is fine.

                                                            If writing an irc client or claiming full protocol compliance you’re right, of course.

                                                            1. 2

                                                              Good point. 3+4 are definitely more features than technical issues.

                                                              Yes, you don’t have to implement what you’re not going to use. The 3 examples I picked (PRIVMSG, CTCP ACTION, and NOTICE) are the 3 most commonly used. I would like to see the user sending the message be passed to the shell script - it seems like you could do quite a bit with that.

                                                              Not implementing IRCv3 features isn’t that big of a deal (unless as mentioned above, you’re working with some very specific servers), but not handling the message prefix properly will come back and bite you later. Lots of the code here is very special-cased per-message - I would argue it’s much better (cleaner, less error prone, more bulletproof and easier to maintain) to parse messages in a standard way and handle them fully-parsed.

                                                              Just as an example, here’s my old irc crate with 1 runtime dep on thiserror (which could be pretty easily removed). It’s not that much more complex (especially if you remove the IRCv3 tag handling) and parsing messages into a Message type makes them much easier to deal with. It’s also not optimal (it could be using &str rather than String) but I felt that making it easier to use would be better than blazing fast performance for my use case.
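To make the “parse messages in a standard way” point concrete, here is a rough sketch in Go of splitting a raw IRC line into a Message type, handling the optional IRCv3 @tags section, the :prefix, and the :trailing parameter. This is illustrative only, not the actual go-irc code (the names are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// Message is a minimal parsed IRC message (illustrative sketch).
type Message struct {
	Tags    string   // raw IRCv3 tags, if any
	Prefix  string   // server name or nick!user@host source
	Command string   // e.g. PING, PRIVMSG
	Params  []string // middle params plus the trailing param, if present
}

// parseLine splits a raw IRC line into its components. Handling the
// prefix here is what makes ":some!user@server PING :SOMETHING" still
// register as a PING rather than being ignored.
func parseLine(line string) Message {
	var m Message
	if strings.HasPrefix(line, "@") { // IRCv3 message tags
		tags, rest, _ := strings.Cut(line[1:], " ")
		m.Tags, line = tags, rest
	}
	if strings.HasPrefix(line, ":") { // source prefix
		prefix, rest, _ := strings.Cut(line[1:], " ")
		m.Prefix, line = prefix, rest
	}
	// Everything after " :" is a single trailing parameter that may
	// itself contain spaces (and further colons).
	head, trailing, hasTrailing := strings.Cut(line, " :")
	fields := strings.Fields(head)
	m.Command, m.Params = fields[0], fields[1:]
	if hasTrailing {
		m.Params = append(m.Params, trailing)
	}
	return m
}

func main() {
	m := parseLine(":some!user@server PING :SOMETHING")
	fmt.Println(m.Command, m.Params) // PING [SOMETHING]
}
```

Once everything arrives pre-parsed, the per-command handlers stop needing special-cased string slicing.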

                                                          1. 24

                                                            Openwrt all the way

                                                            1. 3

Same here, OpenWRT as the main router and a few dumb APs for wireless

                                                              1. 3

                                                                What do you have for a dumb AP?

                                                                I’m in the market for something that I can broadcast two ssids (guest and home) and have them on separate vlans.

                                                              2. 3

                                                                With what kind of hardware?

                                                                1. 4

Not the OP, but in my case a NetGear R7800. Does 802.11ac, has dual radios so you can run 2.4GHz & 5GHz simultaneously. 4+1 gigabit ethernet ports with a half decent switch behind them that can do tagged vlans.

                                                                  1. 1

                                                                    I’m still using an old tplink archer c7. Probably gonna do an upgrade in the next year or so to get wifi 6. Pretty sure it was something like $80 back in 2014 or 2015.

                                                                    1. 1

Not the OP, but I use a Linksys WRT1900ACS. A tad pricy, or was when I got it, but the wifi is good, it has native support for OpenWRT, and it’s fast enough to handle gigabit fiber.

                                                                    2. 2

                                                                      I’ve used openwrt in the past for single router/AP setups, but as far as I’m aware for larger properties it wouldn’t be enough, unless I’m misunderstanding something. Is it possible to use OpenWRT with multiple APs?

                                                                      1. 3

                                                                        It is possible, either as an 802.11s mesh or with a number of wired access points set up in bridge mode. I’m currently using the latter and it works fine.

                                                                    1. 15

                                                                      Ubiquiti is the most recommended, but this happened recently: https://krebsonsecurity.com/2021/03/whistleblower-ubiquiti-breach-catastrophic/

                                                                      I am not sure the others are much better, but just FYI in case you haven’t seen it yet.

                                                                      1. 20

                                                                        It’s worth noting though that (as of yet, hope it stays that way) you do not need to use their cloud offerings or even have an account with them. It may seem simpler for some use cases, but any cloud offering inherently involves you placing data under the control of a third party. They do try to push this on people though (as do most other vendors, too…), which I don’t like all that much. Being in control of my own data and infrastructure is important to me.

                                                                        You can host your own controller, both on the internet or locally, or use their “setup app” which AFAIK emulates just enough of a controller on your phone to set up a single AP with a simple config. Once configured, you can even switch the controller off and let them run autonomously.

                                                                        You should also not (I hope this does not need saying, but saying it anyway) expose your access points directly to the internet (allowing them to be accessed from outside of your network). UBNT also sells other HW (like security cameras) where this is more commonly done, but even there I’d advise against it. Use a VPN if you need direct access. I know of no single IoT/HW vendor with a “clean” or even “acceptable” security track record.

Just as a precaution I would recommend putting some form of additional authentication (eg. basic auth, which surprisingly even works with their L3 controller) before the controller when hosting it in a publicly accessible place though. It’s likely not the greatest piece of software either, as evidenced by its weird dependency requirements…

                                                                        All in all I’m not saying UBNT can do no wrong, but there’s certainly worse offerings in the market regarding security as well as features.

                                                                        1. 2

You should also not (I hope this does not need saying, but saying it anyway) expose your access points directly to the internet (allowing them to be accessed from outside of your network).

                                                                          Yep absolutely!

So I saw in the HN discussion that you now need cloud authentication even for self-hosted software.

                                                                          See the main discussion here: https://old.reddit.com/r/Ubiquiti/comments/kslyh9/cloud_key_local_account/

                                                                          I will admit I was confused by this because I know others that haven’t had to do cloud auth for their self hosted setups so maybe it’s required with the latest update only?

                                                                          1. 4

                                                                            I got a USG on discount to play around with it and it doesn’t seem like you need to do cloud auth even when I set it up a few weeks ago, though they really push you towards that. I think you have to click “Advanced Setup”, choose “Local account”, and not enable remote access or something like that.

                                                                            1. 3

                                                                              I don’t cloud auth with them. Local accounts all day long on a dedicated vm operating as a controller.

                                                                            2. 2

                                                                              I don’t know about the Cloud Key specifically (I mean, it says ‘cloud’ right on the box), but last I checked and updated you could still download the controller installation packages directly, both from their download site as well as their apt repositories. Running that did not require any cloud setup. Hope that didn’t change :(

                                                                            3. 1

                                                                              tbh I’ve had one access point from them and their software package for management was so hilariously outdated, I returned it instantly

                                                                            4. 2

                                                                              Yes, this is one of the big things that’s making me question using them. I added your link to the original post so there’s more context.

                                                                              1. 1

                                                                                The hardware is still really quite good. I’m using a pair of U6-LRs at home (one in the attic, one in the basement) with a couple of US-8-60W switches, and the controller on a Raspberry Pi, and the coverage and performance are fantastic. Router is an EdgeRouter-4, which is Ubiquiti but not UniFi, so it doesn’t play with the same configuration tools, but I’m really happy with what it gives me too (rock solid, the config tree stuff really works, but also it’s a “real” linux system) so I’m not touching it.

                                                                                As with others here, I’m not using cloud management — it’s a pretty damn cool feature in some scenarios but I don’t have a need for it.

                                                                            1. 20

I love plain text protocols, but … HTTP is neither simple to implement nor fast to parse.

                                                                              1. 7

                                                                                Yeah the problem of parsing text-based protocols in an async style has been floating around my head for a number of years. (I prefer not to parse in the async or push style, but people need to do both, depending on the situation.)

                                                                                This was motivated by looking at the nginx and node.js HTTP parsers, which are both very low level C. Hand-coded state machines.

                                                                                I just went and looked, and this is the smelly and somewhat irresponsible code I remember:


/* Proxied requests are followed by scheme of an absolute URI (alpha).
 * All methods except CONNECT are followed by ‘/’ or ‘*’.

                                                                                I say irresponsible because it’s network-facing code with tons of state and rare code paths, done in plain C. nginx has had vulnerabilities in the analogous code, and I’d be surprised if this code didn’t.

                                                                                Looks like they have a new library and admit as much:


                                                                                Let’s face it, http_parser is practically unmaintainable. Even introduction of a single new method results in a significant code churn.

                                                                                Looks interesting and I will be watching the talk and seeing how it works!

                                                                                But really I do think there should be text-based protocols that are easy to parse in an async style (without necessarily using Go, where goroutines give you your stack back)

A while back I did an experiment with netstrings, because length-prefixed protocols are easier to parse async than delimiter-based protocols (like HTTP and newlines). I may revisit that experiment, since Oil will likely grow netstrings: https://www.oilshell.org/release/0.8.7/doc/framing.html
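To show why length-prefixed framing is friendlier to async parsing: a netstring (`<len>:<data>,`) decoder needs no parser state beyond the buffer itself; if the buffer doesn’t yet hold a complete message, you simply read more bytes and retry. A minimal sketch in Go (illustrative only, not Oil’s implementation):

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

// encode writes one netstring: "<len>:<data>,".
func encode(data []byte) []byte {
	return []byte(fmt.Sprintf("%d:%s,", len(data), data))
}

// decode tries to pull one netstring off the front of buf. It returns
// the payload, the number of bytes consumed, and ok=false if the buffer
// does not yet hold a complete netstring. The caller just appends more
// bytes and calls again -- no hand-coded state machine required.
func decode(buf []byte) (payload []byte, n int, ok bool) {
	i := bytes.IndexByte(buf, ':')
	if i < 0 {
		return nil, 0, false // length prefix not complete yet
	}
	length, err := strconv.Atoi(string(buf[:i]))
	if err != nil {
		return nil, 0, false
	}
	end := i + 1 + length
	if end >= len(buf) || buf[end] != ',' {
		return nil, 0, false // payload or trailing comma not here yet
	}
	return buf[i+1 : end], end + 1, true
}

func main() {
	wire := encode([]byte("hello"))
	fmt.Println(string(wire)) // 5:hello,
	payload, _, _ := decode(wire)
	fmt.Println(string(payload)) // hello
}
```

Compare that with a delimiter-based protocol, where a partial read can land anywhere inside a line and the parser has to remember where it was.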

                                                                                OK wow that new library uses a parser generator I hadn’t seen:



                                                                                which does seem like the right way to do it: do the inversion automatically, not manually.

                                                                                1. 4

Was going to say this. Especially when people misbehave around things like Content-Length and Transfer-Encoding: chunked; the existence of request smuggling seems to imply it’s too complex. Plus, I still don’t know which response code is appropriate for every occasion.

                                                                                  1. 2

                                                                                    Curious what part of HTTP you think is not simple? And on which side (client, server)

                                                                                    1. 5

                                                                                      There’s quite a bit. You can ignore most of it, but once you get to HTTP/1.1 where chunked-encoding is a thing, it starts getting way more complicated.

                                                                                      • Status code 100 (continue + expect)
                                                                                      • Status code 101 - essentially allowing hijacking of the underlying connection to use it as another protocol
                                                                                      • Chunked transfer encoding
• The request “method” can technically be an arbitrary string - protocols like WebDAV have added many more verbs than originally intended
                                                                                      • Properly handling caching/CORS (these are more browser/client issues, but they’re still a part of the protocol)
                                                                                      • Digest authentication
                                                                                      • Redirect handling by clients
                                                                                      • The Range header
                                                                                      • The application/x-www-form-urlencoded format
• HTTP/2, which is now a binary protocol
                                                                                      • Some servers allow you specify keep-alive to leave a connection open to make more requests in the future
                                                                                      • Some servers still serve different content based on the User-Agent header
                                                                                      • The Accept header

                                                                                      There’s more, but that’s what I’ve come up with just looking quickly.

                                                                                      1. 3

                                                                                        Would add to this that it’s not just complicated because all these features exist, it’s very complicated because buggy halfway implementations of them are common-to-ubiquitous in the wild and you’ll usually need to interoperate with them.

                                                                                        1. 1

                                                                                          And, as far as I know, there is no conformance test suite.

                                                                                          1. 1

                                                                                            Ugh, yes. WPT should’ve existed 20 years ago.

                                                                                        2. 2

                                                                                          Heh, don’t forget HTTP/1.1 Pipelining. Then there’s caching, and ETags.

                                                                                      2. 2

You make a valid point. I find it easy to read as a human being, though, which is also important when dealing with protocols.

                                                                                        I’ve found a lot of web devs I’ve interviewed have no idea that HTTP is just plain text over TCP. When the lightbulb finally goes on for them a whole new world opens up.

                                                                                        1. 4

                                                                                          It’s interesting to note that while “original HTTP” was plain text over TCP, we’re heading toward a situation where HTTP is a binary protocol run over an encrypted connection and transmitted via UDP—and yet the semantics are still similar enough that you can “decode” back to something resembling HTTP/1.1.

                                                                                          1. 1

UDP? I thought HTTP/2 was binary over TCP. But yes, TLS is a lot easier thanks to ACME cert issuance and LetsEncrypt for sure.

                                                                                            1. 2

                                                                                              HTTP/3 is binary over QUIC, which runs over UDP.

                                                                                        2. 1

                                                                                          SIP is another plain text protocol that is not simple to implement. I like it and it is very robust though. And it was originally modeled after HTTP.

                                                                                        1. 5

                                                                                          Yet the new fs API doesn’t support contexts at all, so you have to embed them in your fs struct.

                                                                                          1. 9

                                                                                            The io package in general doesn’t support contexts (io.Reader and io.Writer are examples of this), and they’re a bit of a pain to adapt. In particular, file IO on Linux is blocking, so there’s no true way to interrupt it with a context. Most of the adapters I’ve seen use ctx.Deadline() to get the deadline, but even that isn’t good enough because a context can be cancelled directly. I’d imagine that’s why it’s not in the fs interfaces.

                                                                                            For every Reader/Writer that doesn’t support Context directly, you need a goroutine running to adapt it which is not ideal. There is some magic you can do with net.Conn (or rather *net.TCPConn) because of SetDeadline, but even those would need a goroutine and other types like fs.File (and *os.File) would leave the Read or Write running in the background until it completes which opens you up to all sorts of issues.

                                                                                            1. 1

You can use the fs.FS interface to implement something like a frontend for WebDav, “SSH filesystem”, and so forth, where the need for a context and timeouts is a bit more acute than just filesystem access. This actually also applies to io.Reader etc.

                                                                                            2. 1

                                                                                              I’m not entirely sure why the new fs package API doesn’t support contexts, but you could potentially write a simple wrapper for that same API which does, exposing a new API with methods like WithContext, maybe?

                                                                                              Especially considering what the documentation for context states:

                                                                                              Contexts should not be stored inside a struct type, but instead passed to each function that needs it.

                                                                                              But ideally I’d like to see context supported in the new fs package too.

                                                                                            1. 5

                                                                                              Concurrency in Go has bitten me so many times, but after getting the hang of channels, WaitGroups, and when to mix in more traditional concurrency primitives like mutexes, it’s also helped me in other languages - the async frameworks in Rust often use mpsc (multi-producer, single-consumer) channels which work similarly to Go’s channels. The idea of avoiding starting threads for every separate task has helped for performance in Rust as well.

                                                                                              But none of those are easy concepts. Concurrency is not simple or easy to reason about and race conditions will often pop up when you least expect. I agree with your idea that Go is not an easy language to use effectively or master, but I would argue that the learning curve is still lower than something like C or C++ or even Rust.

                                                                                              As a corollary to all of the above; learning the language isn’t just about learning the syntax to write your ifs and fors; it’s about learning a way of thinking.

                                                                                              I love that quote. There’s definitely a certain something missing when someone unfamiliar with Go writes Go code… and some of it is hard to put into words. It’s the same with most languages too - Rust, Python, Lisp, C, etc.

                                                                                              It works a little bit differently, but this is probably how I would have accomplished the limited workers example: https://play.golang.org/p/knuktiN0jIs The downside here is that if a task fails it has a chance to kill an entire worker. The upside is that you only start 3 goroutines rather than n.

                                                                                              1. 3

                                                                                                Similar to someone else in this thread, I’ve also got a toy IRC bot called Seabird with a bunch of inside jokes and random tools… it somehow evolved into a gRPC chat framework supporting Discord, IRC, and Minecraft, with plugins which bridge channels (even across networks), show URL previews, add a bunch of usable commands, and include a number of in-jokes.

                                                                                                I have a ton of toy projects though, that one is just the most developed.

                                                                                                Other projects:

                                                                                                • go-gemini - a gemini server and client library. Currently working on porting my blog to also be served via gemini.
                                                                                                • emacs-grayscale-theme - an experimental emacs theme with minimal colors (I’ve got a bunch of themes, but I’d consider this one mostly a toy)
                                                                                                • gitdir - a self-contained git SSH server
                                                                                                • zsh-utils - a minimal set of zsh plugins which make the out of the box experience better
                                                                                                • toolbox - an in progress convenience lib used for writing opinionated http services in go.
                                                                                                • yeet - an end-to-end encrypted file-uploader, still in progress
                                                                                                1. 1

                                                                                                  My main project this week is cleaning up my go-gemini library and hopefully starting to contact other devs of gemini libraries in go to pool efforts. There have been quite a few forks, each of which only seemed to happen because other libraries weren’t updated much.

                                                                                                  I’ve also got a rust project I’m hoping to re-write using better design patterns now that I’ve learned the language a bit better.

                                                                                                  Finally, I’ve got a bunch of blog post ideas. If I have any extra time, I’ll start on a draft on either the SSH protocol for dummies, or getting started with gemini.

                                                                                                  1. 1

                                                                                                    Thanks for putting this together! We went over Structure and Interpretation of Computer Programs in my programming languages course, but I’ll definitely add some of these others to my list of “eventually” books.

                                                                                                    Also wanted to note that the ISBN for the first book is off. It looks like it may have been copied from the second in the list.

                                                                                                    1. 1

                                                                                                      Thanks for catching the ISBN error! Fixed.

                                                                                                    1. 18

                                                                                                      What this rant does not focus on: It’s a good thing that these use cases are broken. Wayland prohibits your desktop applications from capturing keystrokes or recording other apps’ screens by default. X’s security model and low-level graphics APIs are severely outdated, and Wayland promises not only to be more secure, but also to expose cleaner APIs at the lower level (rendering, etc.)

                                                                                                      These use cases are (or will be) still supported, though, but this time via standardized interfaces, many of which already exist and are implemented in today’s clients.

                                                                                                      X is based on a 30-year-old code base and an outdated model (who runs server-side display servers these days?). Of course, switching from X to Wayland will break applications, and until they are rewritten with proper Wayland support they will stay that way. For most X11 apps there is even Xwayland, which allows you to run X11 apps in Wayland if you must.

                                                                                                      1. 27

                                                                                                        What this rant does not focus on: It’s a good thing that these use cases are broken

                                                                                                        You should have more compassion for users and developers who have applications that have worked for decades, are fully featured, and are being asked to throw all of that away. For replacements that are generally very subpar. With no roadmap for when parity will be reached. For a system that does not offer any improvements they care about (you may care about this form of security, not everyone does).

                                                                                                        I couldn’t care less whether, when I run ps, I see Xorg or Wayland. And I doubt that most of the people who are complaining really care about X vs Wayland. They just don’t want their entire world broken for what looks to them like no reason at all.

                                                                                                        1. 5

                                                                                                          I’m not saying that those apps should be thrown away immediately. Some of these work under XWayland (I sometimes stream using OBS and it records games just fine).

                                                                                                          If your application really does not run under XWayland, then run an X server! X is not going to go away tomorrow, rather it is being gradually replaced.

                                                                                                          I’m simply explaining that there are good reasons some applications don’t work on Wayland. I’m a bit tired of hearing “I switched to Wayland and everything broke” posts: look behind the curtain and understand why they broke.

                                                                                                        2. 17

                                                                                                          I’m kind of torn on the issue.

                                                                                                          On the one hand, the X security model is clearly broken. Like the UNIX security model, it assumes that every single application the user wants to run is 100% trusted. It’s good that Wayland allows for sandboxing, and “supporting the use cases, but this time via standardized interfaces” which allow for a permission system sounds good.

                                                                                                          On the other hand, there’s clearly no fucking collaboration between GNOME and the rest of the Wayland ecosystem. There’s a very clear rift between the GNOME approach which uses dbus for everything and the everything-else approach which builds wayland protocol extensions for everything. There doesn’t seem to be any collaboration, and as a result, application authors have to choose between supporting only GNOME, supporting everything other than GNOME, or doing twice the work.

                                                                                                          GNOME also has no intention of ever supporting applications which can’t draw their own decorations. I’m not opposed to the idea of client-side decorations, they’re nice enough in GTK applications, but it’s ridiculous to force all the smaller graphics libraries which just exist to get a window on the screen with a GL context - like SDL, GLFW, GLUT, Allegro, SFML, etc - to basically reimplement GTK just to show decorations on GNOME on Wayland. The proposed solution is libdecorations, but that seems to be at least a decade away from providing a good, native-feeling experience.

                                                                                                          This isn’t a hate post. I like Wayland and use Sway every day on my laptop. I like GNOME and use it every day on my desktop (though with X because nvidia). I have written a lot of wayland-specific software for wlroots-based compositors. But there’s a very clear rift in the wayland ecosystem which I’m not sure we’ll ever solve. Just in my own projects, I use the layer-shell protocol, which is a use case GNOME probably won’t ever support, and the screencopy protocol, which GNOME doesn’t support but provides an incompatible dbus-based alternative to. I’m also working on a game which uses SDL, which won’t properly support GNOME on Wayland due to the decorations situation.

                                                                                                          1. 13

                                                                                                            the X security model is clearly broken

                                                                                                            To be honest I feel the “brokenness” of the security model is vastly overstated. How many actual exploits have been found with this?

                                                                                                            Keyloggers are a thing, but it’s not like Wayland really prevents that. If I have a malicious application then I can probably override firefox to launch something that you didn’t intend (via shell alias, desktop files) or use some other side-channel like installing an extension in ~/.mozilla/firefox, malicious code in ~/.bashrc to capture ssh passwords, etc. Only if you sandbox the entire application is it useful, and almost no one does that.

                                                                                                            1. 10

                                                                                                              This isn’t a security vulnerability which can be “exploited”, it’s just a weird threat model. Every single time a user runs a program and it does something to their system which they didn’t want, that’s the security model being “exploited”.

                                                                                                              You might argue that users should never run untrusted programs, but I think that’s unfair. I run untrusted programs; I play games, and those games exist in the shape of closed-source programs from corporations I have no reason to trust. Ideally, I should be able to know that due to the technical design of the system, those closed-source programs can’t listen to me through my microphone, can’t see me through my webcam, can’t read my keyboard inputs to other windows, can’t see the content in other windows, and can’t rummage through my filesystem, without my express permission. That simply requires a different security model than what X and the traditional UNIX model provide.

                                                                                                              Obviously Wayland isn’t enough on its own, for the reasons you cite. A complete solution does require sandboxing the entire application, including limiting what parts of the filesystem it can access, which daemons it can talk to, and what hardware it can access. But that’s exactly what Flatpak and Snaps attempts to do, and we can imagine sandboxing programs like Steam as well to sandbox all the closed source games. However, all those efforts are impossible as long as we stick with X11.

                                                                                                              1. 3

                                                                                                                Every single time a user runs a program and it does something to their system which they didn’t want, that’s the security model being “exploited”.

                                                                                                                If you think a permission system is going to solve that, I’m going to wish you good luck with that.

                                                                                                                Ideally, I should be able to know that due to the technical design of the system, those closed source programs can’t listen to me through my microphone, can’t see me through my webcam, can’t read my keyboard inputs to other windows, and can’t see the content in other windows, and can’t rummage through my filesystem, without my expressed permission.

                                                                                                                Ah yes, and those closed-source companies will care about this … why exactly?

                                                                                                                They will just ask for every permission and won’t run otherwise, leaving you just as insecure as before.

                                                                                                                But hey, at least you made the life of “trustworthy” applications worse. Good job!

                                                                                                                But that’s exactly what Flatpak and Snaps attempts to do […]

                                                                                                                Yes, letting software vendors circumvent whatever little amount of scrutiny software packagers add, that will surely improve security!

                                                                                                                1. 7

                                                                                                                  If you think a permission system is going to solve that, I going to wish you good luck with that.

                                                                                                                  It… will though. It’s not perfect, but it will prevent software from doing things without the consent of the user. That’s the goal, right?

                                                                                                                  You may be right that some proprietary software vendors will just ask for every permission and refuse to launch unless given those permissions. Good. That lets me decide between using a piece of software with the knowledge that it’ll basically be malware, or not using that piece of software.

                                                                                                                  In reality though, we don’t see a lot of software which takes this route from other platforms which already have permission systems. I’m not sure I have ever encountered a website, Android app or iOS app which A) asked for permissions to do stuff it obviously didn’t need, B) refused to run unless given those permissions, and C) wasn’t obviously garbage.

                                                                                                                  What we do see though is that most apps on the iOS App Store and websites on the web, include analytics packages which will gather as much info on you as possible and send it back home as telemetry data. When Apple, for example, put the contacts database behind a permission wall, the effect wasn’t that every app suddenly started asking to see your contacts. The effect was that apps stopped snooping on users’ contacts.

                                                                                                                  I won’t pretend that a capability/permission system is perfect, because it isn’t. But in the cases where it has already been implemented, the result clearly seems to be improved privacy. I would personally love to be asked for permission if a game tried to read through my ~/.ssh, access my webcam or record my screen, even if just to uninstall the game and get a refund.

                                                                                                                  Yes, letting software vendors circumvent whatever little amount of scrutiny software packagers add, that will surely improve security!

                                                                                                                  I mean, if you wanna complain about distros which use snaps and flatpaks for FOSS software, go right ahead. I’m not a huge fan of that myself. I’m talking about this from the perspective of running closed source software or software otherwise not in the repos, where there’s already no scrutiny from software packagers.

                                                                                                                  1. 3

                                                                                                                    There’s probably evidence from existing app stores on whether users prefer to use software that asks for fewer permissions. There certainly seems to be a market for that (witness all the people moving to Signal).

                                                                                                                    1. 3

                                                                                                                      But hey, at least you made the life of “trustworthy” applications worse. Good job!

                                                                                                                      “Trustworthy software” is mostly a lie. Every application is untrustworthy after it gets remotely exploited via a security bug, and they all have security bugs. If we lived in a world without so much memory-unsafe C, then maybe that wouldn’t be true. But we don’t live in that world so it’s moot.

                                                                                                                      Mozilla has its faults, but I trust them enough to trust that Firefox won’t turn on my webcam and start phoning home with the images. I could even look at the source code if I wanted. But I’d still like Firefox sandboxed away from my webcam because Firefox has memory bugs all the time, and they’re probably exploitable. (As does every other browser, of course, but I trust those even less.)

                                                                                                                    2. 1

                                                                                                                      A complete solution does require sandboxing the entire application, including limiting what parts of the filesystem it can access, which daemons it can talk to, and what hardware it can access. But that’s exactly what Flatpak and Snaps attempts to do

                                                                                                                      But that’s quite limited sandboxing, I think? To be honest I’m not fully up-to-speed with what they’re doing exactly, but there’s a big UX conundrum here because write access to $HOME allows side-channels, but you also really want your applications to do $useful_stuff, which almost always means accessing much (or all of) $HOME.

                                                                                                                      Attempts to limit this go back a long way (e.g. SELinux), and while this works fairly well for server applications, for desktop applications it’s a lot harder. I don’t really fancy frobbing with my config just to save/access a file to a non-standard directory, and for non-technical users this is even more of an issue.

                                                                                                                      So essentially I don’t really disagree with:

                                                                                                                      I should be able to know that due to the technical design of the system, those closed source programs can’t listen to me through my microphone, can’t see me through my webcam, can’t read my keyboard inputs to other windows, and can’t see the content in other windows, and can’t rummage through my filesystem, without my express permission. That simply requires a different security model than what X and the traditional UNIX model provide.

                                                                                                                      and I’m not saying that the Wayland model isn’t better in theory (aside from some pragmatic implementation problems, which should not be so casually dismissed as some do, IMHO), but the actual practical security benefit it gives you right now is quite limited, and I think that will remain the case for the foreseeable future, as it really needs quite a paradigm shift in various areas, which I don’t see happening on Linux any time soon.

                                                                                                                      1. 2

                                                                                                                        I don’t really fancy frobbing with my config just to save/access a file to a non-standard directory

                                                                                                                        If a standard file-picker dialog were used, it could be granted elevated access & automatically grant the calling application access to the selected path(s).

                                                                                                                        1. 1

                                                                                                                          there’s a big UX conundrum here because write access to $HOME allows side-channels, but you also really want your applications to do $useful_stuff, which almost always means accessing much (or all of) $HOME.

                                                                                                                          This is solved on macOS with powerboxes. The Open and Save file dialogs actually run as a separate process and update the application’s security policy dynamically to allow it to access files that the user has selected, but nothing else. Capsicum was designed explicitly to support this kind of use case, it’s a shame that NIH prevented Linux from adopting it.

                                                                                                                          1. 1

                                                                                                                            This sounds like a good idea! I’d love to see that in the X11/Wayland/Unix ecosystem, even just because I hate that awful GTK file dialog for so many reasons and swapping it out with something better would make my life better.

                                                                                                                            Still; the practical security benefit I – and most users – would get from Wayland today would be very little.

                                                                                                                      2. 5

                                                                                                                        I think “broken” is too loaded; “no longer fit for purpose” might be better.

                                                                                                                        1. 2

                                                                                                                          Well, the security model is simply broken.

                                                                                                                          I agree that a lot of focus is put on security improvements compared to Wayland’s other advantages (tear-free rendering being the one most important to me). But it’s still an advantage over X, and I like software which is secure-by-default.

                                                                                                                          1. 1

                                                                                                                            How many actual exploits have been found with this?

                                                                                                                            They were very common in the ‘90s, when folks ran xhost +. Even now, it’s impossible to write a secure password entry box in X11, so remember that any time you type your password into the graphical sudo equivalents that anything that’s currently connected to your X server could capture it. The reason it’s not exploited in the wild is more down to the fact that *NIX distros don’t really do much application sandboxing and so an application that has convinced a user to run it already has pretty much all of the access that it needs for anything malicious that it wants to do. It’s also helped by the fact that most *NIX users only install things from trusted repositories where it’s less likely that you’ll find malware but expect that to change if installing random snap packages from web sites becomes common.

                                                                                                                          2. 4

                                                                                                                            It’s good that Wayland allows for sandboxing

                                                                                                                            If I wanted to sandbox an X application, I’d run it on a separate X server. Maybe even an Xnest kind of thing.

                                                                                                                            I’ve never cared to do this (if I run xnest it is to test network transparency or new window managers or something, not security), so I haven’t tried, but it seems to me it could be done fairly easily if someone really wanted to.

                                                                                                                            1. 2

                                                                                                                              Whoa, I’ve never heard about the GNOME issues (mostly because I’m in a bubble including sway and emersion, and what they do looks sensible to me). That sucks though, I hope they somehow reconcile.

                                                                                                                              Regarding Nvidia I think Simon mentioned something that hinted at them supporting something that has to do with Wayland, but I could just as easily have misunderstood.

                                                                                                                            2. 8

                                                                                                                              Wayland prohibits your desktop applications from capturing keystrokes or recording other apps’ screens by default

                                                                                                                              No, it doesn’t. Theoretically it might enable doing this by modifying the rest of the system too, but in practice (and certainly the default environment) it is still trivial for malware to keylog and record screen on current Wayland desktop *nix installs.

                                                                                                                              1. 3

                                                                                                                                it is still trivial for malware to keylog and record screen on current Wayland desktop *nix installs.

                                                                                                                                I don’t think that’s true. The linked article says recording screens and global hotkeys is “broken” by Wayland. How can it be so trivial for “malware” to do something, and absolutely impossible for anyone else?

                                                                                                                                Or is this malware that requires I run it under sudo?

                                                                                                                                1. 10

                                                                                                                                  It’s the difference between doing something properly and just doing it. Malware is happy with the latter, while most non-malware users are only happy with the former.

                                                                                                                                  There are numerous tricks you can use if you are malware, from using LD_PRELOAD to inject code and read events first (since everyone uses libwayland this is really easy), to directing clients to connect to your man-in-the-middle Wayland server, to just using a debugger, and so on and so forth. None of these are really Wayland’s fault, but the existence of them means there is no meaningful security difference on current desktops.

                                                                                                                                  1. 2

                                                                                                                                    I don’t know if I agree that the ability to insert LD_PRELOAD in front of another application is equivalent to sending a bytestring to a socket that is already open, but at least I understand what you meant now.

                                                                                                                                2. 5

                                                                                                                                  I’m sick of this keylogger nonsense.

                                                                                                                                  X11 has a feature which allows you to use the X11 protocol to snoop on keys being sent to other applications. Wayland does not have an equivalent feature.

                                                                                                                                  Using LD_PRELOAD requires being on the other side of an airtight hatch. It straight-up requires having arbitrary code execution, which you can use to compromise literally anything. This is not Wayland’s fault. Wayland is a better lock for your front door. If you leave your window open, it’s not Wayland’s fault when you get robbed.
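                                                                                                                                  For anyone who hasn’t seen the X11 feature in question, the classic demonstration is a one-liner (assumes xinput is installed and you’re in a running X session):

```shell
# X11's input model lets any unprivileged client subscribe to every key
# event on the server -- no permission prompt, no special capability.
if [ -n "${DISPLAY:-}" ] && command -v xinput >/dev/null 2>&1; then
    # Logs every key press/release, system-wide, for a few seconds.
    timeout 3 xinput test-xi2 --root || true
else
    echo "needs a running X session with xinput installed"
fi
```

                                                                                                                                  Wayland has no protocol-level equivalent of this.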

                                                                                                                                  1. 7

                                                                                                                                    Indeed, it’s not Wayland’s fault, and I said as much in response to the only reply above yours, an hour and 20 minutes before you posted this reply. You’re arguing against a straw man.

                                                                                                                                    What is the case is that the “airtight hatch” between things that can interact with Wayland and things that can do a “giant set of evil activities” has been propped wide open pretty much everywhere on desktop Linux, and isn’t reasonably easy to close given the rest of desktop software.

                                                                                                                                    If you were pushing “here’s this new desktop environment that runs everything in secure sandboxes” and it happened to use Wayland, there would be the possibility of a compelling security argument here. Instead what I see is people making this security argument in a way that could give people the impression it secures things when it doesn’t actually close the barn doors, which is outright dangerous.

                                                                                                                                    In fact, as far as I know the only desktop *nix OS that does sandbox everything is QubesOS, and it looks like they currently run a custom protocol on top of an X server…

                                                                                                                                    1. 3

                                                                                                                                      Quoting you:

                                                                                                                                      Wayland prohibits your desktop applications from capturing keystrokes or recording other apps’ screens by default

                                                                                                                                      No, it doesn’t.

                                                                                                                                      Yes, it does. Wayland prohibits Wayland clients from using Wayland to snoop on other Wayland clients. X11 does allow X11 clients to use X11 to snoop on other X11 clients.

                                                                                                                                      Other features of Linux allow you to circumvent this within the typical use-case, but that’s a criticism of those features more so than of Wayland, and I’m really tired of it being trotted out in Wayland discussions. Wayland has addressed its part of the problem. Now it’s on the rest of the ecosystem to address their parts. Why do you keep dragging it into the Wayland discussion when we’ve already addressed it?

                                                                                                                                      1. 7

                                                                                                                                        This
                                                                                                                                        Wayland prohibits your desktop applications from capturing keystrokes or recording other apps’ screens by default

                                                                                                                                        And this

                                                                                                                                        Wayland prohibits Wayland clients from using Wayland to snoop on other Wayland clients.

                                                                                                                                        Are two very different statements. The latter partially specifies the method of snooping, the former does not.

                                                                                                                                        Why do you keep dragging it into the Wayland discussion when we’ve already addressed it?

                                                                                                                                        I do not; I merely reply to incorrect claims, brought up in support of Wayland, that it solves a problem it does not. It might one day become part of a solution to that problem. It might not. It certainly doesn’t solve it by itself, and it isn’t even part of a solution to that problem today.

                                                                                                                                3. 4

                                                                                                                                  X’s design has many flaws, but those flaws are well known and documented, and workarounds and extensions exist to cover a wide range of use cases. Wayland may have a better design regarding modern requirements, but has a hard time catching up with all the work that was invested into making X11 work for everyone over the last decades.

                                                                                                                                  1. 3

                                                                                                                                    X’s design has many flaws, but those flaws are well known and documented, and workarounds and extensions exist to cover a wide range of use cases.

                                                                                                                                    Once mere flaws become security issues it’s a different matter though.

                                                                                                                                    [Wayland] has a hard time catching up with all the work that was invested into making X11 work for everyone over the last decades.

                                                                                                                                    This may be true now, but Wayland is maturing as we speak. New tools are being developed, and there isn’t much missing in the realm of protocol extensions to cover the existing most-wanted X features. I see Wayland surpassing X in the next two or three years.

                                                                                                                                    1. 2

                                                                                                                                      Yeah, I started to use sway on my private laptop and am really happy with it. Everything works flawlessly, in particular connecting an external HiDPI display and setting different scaling factors (which does not work in X). However, for work I need to be able to share my screen in video calls occasionally and record screencasts with OBS, so I’m still using X there.

                                                                                                                                  2. 4

                                                                                                                                    I wonder if X’s security model being “outdated” is partly due to the inexorable slide away from user control. If all your programs are downloaded from a free repo that you trust, you don’t need to isolate every application as if it’s out to get you. Spotify and Zoom on the other hand are out to get you, so a higher level of isolation makes sense, but I would still prefer this to be the exception rather than the rule.

                                                                                                                                    In practice 99.9% of malicious code that is run on our systems is done via the web browser, which has already solved this problem, albeit imperfectly, and only after causing it in the first place.

                                                                                                                                    1. 4

                                                                                                                                      If all your programs are downloaded from a free repo that you trust, you don’t need to isolate every application as if it’s out to get you

                                                                                                                                      I completely agree, as long as all of my programs are completely isolated from the network and any other source of untrusted data, or are formally verified. Otherwise, I have to assume that they contain bugs that an attacker could exploit and I want to limit the damage that they can do. There is no difference between a malicious application and a benign application that is exploited by a malicious actor.

                                                                                                                                      1. 1

                                                                                                                                        all of your programs are completely isolated from the network?

                                                                                                                                        How are you posting here?

                                                                                                                                        1. 2

                                                                                                                                          They’re not, that’s my point and that’s why I’m happy that my browser runs sandboxed. Just because I trust my browser doesn’t mean that I trust everyone who might be able to compromise it.

                                                                                                                                          1. 1

                                                                                                                                            That makes sense for a browser, which is both designed to run malicious code and too complex to have any confidence in its security. But like I said, I would prefer cases like this to be the exception. If the rest of your programs are relatively simple and well-tested, isolation may not be worth the complexity and risk of vulnerabilities it introduces. Especially if the idea that your programs are securely sandboxed leads you to install less trustworthy programs (as appears to be the trend with desktop Linux).

                                                                                                                                            1. 2

                                                                                                                                              Okay, what applications do you run that never consume input from untrusted sources (i.e. do not connect to the network or open files that might come from another application)?

                                                                                                                                              1. 1

                                                                                                                                                I don’t think you are looking at this right. The isolation mechanism can’t be 100% guaranteed free of bugs any more than an application can. Your rhetorical question is pretty far from what I thought we were discussing so maybe you could rephrase your argument.

                                                                                                                                    2. 1

                                                                                                                                      This argument seems similar to what happened with cinnamon-screensaver a few weeks ago:

                                                                                                                                      https://github.com/linuxmint/cinnamon-screensaver/issues/354#issuecomment-762261555 (responding to https://www.jwz.org/blog/2021/01/i-told-you-so-2021-edition/)

                                                                                                                                      It’s a good thing for security (and maybe for users in the long term, once these use cases work again) that they are broken, but it is not a good thing for users in the short term that these use cases don’t work on Wayland.