1. 26

    I just can’t take this seriously. It already starts with the linked website about Firefox and its privacy, which is itself questionable. Complaining about portal detection? Malicious website detection? Trying to prevent abusive plugins? Giving people the possibility to use Netflix & co.? (You may also say “yeah no, use Windows+Chrome for Netflix, can’t have nice things because of people screaming against anything DRM in Firefox”.) Oh, and of course the old “the plugin API change is bad” argument coupled with “they’re evil for doing that”. And complaining about Mozilla taking money from Google for their search bar (which you can change, by the way), while offering literally no realistic replacement.

    I could also start searching for evil in literally everything while following a zero-tolerance policy toward anything I don’t like…

    Oh, and about them not caring about your configuration opinions: have you tried getting the developers of any other free software to change it the way you want? Did you go to the GNOME people and tell them that you dislike their new Apple-ish bar and want an option to disable it or make it look more like Windows? Did you go to anyone’s software and tell them “I don’t like this, you have to change this and maintain this or you’re evil”?

    The only valid claim for me is that Firefox/Mozilla does not give you every possible configuration option you may want while pushing forward their own ideas, which may be totally ignorant and/or wrong. And that they may be using their “David vs. Goliath” stance to pressure you into using Firefox. (I really liked this idea/view.) But otherwise I just can’t think of anything other than a grumpy ol’ grandpa with his shotgun going “you dare to take a step on my land”.

    1. 15

      I absolutely agree. XUL add-ons were unsustainable and had to be removed. It also is not wrong for Mozilla to do what they think is best for users. Preferences have costs and Mozilla can’t make everything under the sun configurable. If you want to disagree with them then that’s fine, but the idea that they can’t take a stance on what they think is the best user experience and ship only that is silly. This is my least favorite meme in free software: “the maintainer said no to the feature I wanted so they’re evil.” (Or alternatively: “we have to add every feature that any user requests, ever.”)

      See also: Is Linux About Choice?

      1. 1

        I don’t think that everything should be configurable, just that subsequent versions should make fewer things less configurable. The problem is that Firefox looks poor when compared to itself (or rather, to older versions).

        1. 1

          This would sound way better if they weren’t losing users each and every month …

        2. 4

          To elaborate a little bit more: I strongly agree that some of Mozilla’s decisions are very questionable and that some of Firefox’s additions are pretty uncalled for. I think the voices criticizing Firefox are valid. But this article just feels like a negative rant that gives no room for other opinions*, while promoting Chrome of all things and saying everything is lost. Well then, please just stay silent?! You won’t get anywhere by verbally shitting on other people’s work (people who, by the way, don’t owe you anything) with very questionable arguments.

          *I LIKE the new URL bar, I don’t care at all about the plugin change. And yes, I was a heavy user of things like “tabmixplus” and whatnot. Still I just can’t see the benefit anymore. Things change, people change.

          1. 6

            Did you go to the GNOME people and tell them that you dislike their new Apple-ish bar and want an option to disable it or make it look more like Windows?

            The power differential is completely different:

            Gnome people don’t care about having users, while Mozilla is constantly guilt-tripping people to use them.

            1. 13

              Gnome people don’t care about having users….

              This is actually not the case, and not a claim to make offhand.

              1. 3

                I interpreted it as “Gnome people won’t sacrifice their principles to get more users” which … dunno if that was the intent, but it makes a lot more sense.

                1. 7

                  I disagree strongly: I think the principles ascribed to Mozilla (or at least their prioritization) are wrong, and both Gnome and Mozilla act quite strongly in the same direction. They both have their minds on the end user. Through that lens, Gnome often struggles even more, e.g. if you check the state of their software distribution. It’s not fair to play those two orgs off against each other, and certainly not something they want to see themselves in.

                  1. 2

                    Last I checked the Gnome Web browser doesn’t support DRM.

                    My understanding is that the big difference between these orgs is that Mozilla consists of primarily paid staff all under a single reporting hierarchy. Gnome has a mix of paid and unpaid contributors, but most of the paid contributors are not being told what to do by the people who are paying their salary. It shouldn’t be surprising that these different organizational structures result in different kinds of decisions being made.

              2. 2

                Fair enough

            1. 24

              It’s really expensive and time-consuming to implement modern web standards, which keep growing day by day. Mozilla is not all-powerful or exempt from this rule just because it’s a non-profit; it still has to pay developers and follow certain corporate “rules”.

              I will continue recommending Firefox, and if we want to see Mozilla improve its stance, we must make it more independent by supporting it financially so it doesn’t have to rely on corporate sponsors that much, and we have to push more against unnecessary web standards.

              It’s despicable to see people ditch Firefox and switch to Chrome and its derivatives for “purist” reasons against Mozilla, while Alphabet Inc. is orders of magnitude worse. The larger crowd relies on us tech-savvy people to give them good recommendations, so please, recommend Firefox, because it makes a difference at scale.

              Everything is better than a web that is more or less 100% controlled by Alphabet Inc.

              1. 3

                I will continue recommending Firefox, and if we want to see Mozilla improve its stance, we must make it more independent by supporting it financially so it doesn’t have to rely on corporate sponsors that much, and we have to push more against unnecessary web standards.

                Me too, but I won’t donate to them as long as they are Google’s lap dog and pay the CEO millions. See also https://blog.mozilla.org/blog/2020/10/20/mozilla-reaction-to-u-s-v-google/. This makes me very sad.

                1. 2

                  And Mozilla won’t become way less dependent on default search engine deals unless people start donating a lot en masse. Vicious cycle!

                  1. 2

                    Unfortunately true! But I don’t trust them right now, at all. I don’t want to support them for blindly implementing Google’s ideas of what the web should be like. The user needs to be put first, not Google, nor Mozilla’s CEO.

                    They could organize a crowdfunding, for example, with a list of explicit promises of what they’ll do with the money next to showing Google, Cloudflare and maybe Mullvad (Mozilla VPN) the door. This would help us (people who care) fight surveillance capitalism and have a “last stand” for the “free web” and again recommend our friends and family to use Firefox and tell them: “If a site doesn’t work in Firefox, you are being screwed over, try to use a different site instead”. I really don’t mind paying $100/yr or even more for a browser/company that has my back and don’t even mind indirectly paying for interesting “distractions” like Rust and perhaps good ideas like Servo. I’m sure I am not the only one!

                    But yeah…maybe it is already too late.

                    1.  

                      I really don’t mind paying $100/yr or even more for a browser/company that has my back and don’t even mind indirectly paying for interesting “distractions” like Rust and perhaps good ideas like Servo. I’m sure I am not the only one!

                      I would be interested in that too, but there are probably not enough of us. Perhaps the situation would be better with some kind of an incentive program, but with free software, I don’t know what “advantages” you could hold back for people who pay more?

                      1.  

                        It’d probably have to be more ideological.

                        Publish the company’s/foundation’s monthly burn rate, and the income streams, and ask for the money required to restart MDN or Servo or get rid of the Google ties or whatever.

                        I don’t think this Patreon-style system would allow for funding specific projects because that would give power to people who don’t know what to do with it.

                        Strawman troll example could be all the donations going to keeping the old add-on system alive, and the invisible second-order money pit being fixing its security vulnerabilities.

                        But Mozilla would still have to make some promises and deliver on them or at least some donations would surely cease.

                      2.  

                        Same for me.

                        I’d love to pay for Firefox, but giving them money currently would appear as if I supported their current strategy/management.

                        1.  

                          I’d love to pay for Firefox, but giving them money currently would appear as if I supported their current strategy/management

                          I’d be onboard with something crowdfunded, and I think that would definitely not be their current strategy, thus it would be supportable.

                  2. 2

                     I agree that it would be better, but I don’t believe it’s sustainable. My main message is that people shouldn’t force themselves to use Firefox just for the sake of saving a standardized web. I am by no means trying to tell people to use Chrome!

                    1. 7

                      What exactly are you telling people to use then? What alternatives are there besides Firefox and Blink-based browsers? Or are you just saying, use whatever you like best, because it doesn’t matter?

                      1. 3

                        I’m not telling anyone to use anything, just that you don’t have to use Firefox even if you don’t agree with Google’s position. As I say in the last sentence:

                        I implore everyone to think about this, and make a conscious decision. Continue using Firefox if you want to, but don’t fool yourself that it means anything.

                  1. 8

                    In a parallel dimension where courts are more accessible, our hero sued the company for every dime they made using the unlicensed software.

                    1. 6

                      Or alternately, perhaps our hero knew about the Principles of Community-Oriented GPL Enforcement and decided not to go to court first and not to seek the absolute maximum monetary damages.

                      1. 9

                        Incidentally, the Software Freedom Conservancy recently announced that they are changing enforcement strategies to prioritize litigation.

                        From https://sfconservancy.org/copyleft-compliance/enforcement-strategy.html#the-need-for-litigation:

                        In our private negotiations, pursuant to our Principles of Community-Oriented GPL Enforcement, GPL violators stall, avoid, delay and generally refuse to comply with the GPL. Their disdain for the rights of their customers is often palpable. Their attitude is almost universal: if you think we’re really violating the GPL, then go ahead and sue us. Otherwise, you’re our lowest priority.

                        1. 7

                          The principles are designed to get more compliance. In this case they got compliance, so that’s good. But there is some disagreement about the best strategy to get max global compliance.

                          1. 6

                            Nah, take the company down without hesitation or remorse.

                            It’s not like the company wouldn’t do the same if it was in their financial interest.

                            1. 3

                              I’m familiar with this document, but I’ve not reviewed it in a few years. Thanks for linking to it!

                              It’s my understanding that avoiding court at first is generally the normal course of action preceding litigation.

                              GPLv3’s termination provision allows first-time violators automatic restoration of distribution rights when they correct the violation promptly

                              In theory, OP could consider the violation remedied under this provision of the GPL3 (upon which the AGPL3 is based, IIRC, with the notable SaaS provision added). That halts future infringement but doesn’t address past infringement. It’s on OP to determine if there’s enough juice to be squeezed out to make the effort worth it.

                              Copyright holders (or their designated agent) therefore are reasonable to request compensation for the cost of their time providing the compliance education that accompanies any constructive enforcement action.

                              This is one of my favorite parts of this community-oriented enforcement mindset. However, a few hours of consulting time versus 100% of the profits of a service that made a company tens or hundreds of thousands of dollars, minus legal fees of probably 1/3… do the latter and donate the proceeds to the SFC or another great open source organization. I believe that’d do more for the community.

                          1. 5

                            Great article!

                            Given the page table overhead of allocating large amounts of virtual memory, does anyone know if people actually use virtual memory to implement sparse arrays?

                            Virtual memory is fascinating. We take it for granted, but it had to be invented at some point. And, there were several precursors to page-based virtual memory, too. Perhaps we’ll move away from virtual memory?

                            That’s the premise of this article: “The Cost of Software-Based Memory Management Without Virtual Memory”

                            “While exact area and power consumption are difficult to quantify, we expect that removing (or simplifying) support for address translation can have a net positive impact since current translation infrastructure uses as much space as an L1 cache and up to 15% of a chip’s energy”

                            “Modern, performance-critical software is considered non-functional when swapping, so it is avoided at all cost.” We learn about swapping as one of the primary reasons for using virtual memory in college, but then in the real world it’s essentially not used at all.

                            1. 6

                              I don’t think swap being unused is necessarily true - it depends on your use case. Swap is fantastic when, for example, you have a bunch of daemons running in the background, and you want them to be running, but you rarely need them. In that case those programs can be paged out to swap and that memory can be used for something better, like the disk cache. The boost you get from the disk cache far outstrips the hit you take by swapping the daemon back in, because you don’t have to perform the latter operation very often.

                              I think really the issue is predictability. On my laptop I have a big swap partition to enable the above effect, but in production I don’t because it makes the system easier to understand. IIRC I even go as far as to disable overcommit in production because, again, having it on makes the system less predictable and therefore less reliable. If something gets OOM killed on my laptop it’s annoying; if something gets OOM killed in prod, something just went down.
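
                               For the curious, the knobs involved look roughly like this on Linux; a minimal sketch, and the values are only illustrative:

                                   sysctl vm.overcommit_memory=2   # 2 = strict accounting; 0 = heuristic (default), 1 = always allow
                                   sysctl vm.overcommit_ratio=80   # under mode 2, commit limit = swap + 80% of RAM (tune to taste)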

                              This fundamental trade-off between predictability/complexity and optimization comes up in other places too. For example: free space under ext4 is trivial to understand. But under ZFS or btrfs? Incredibly complicated, especially when it comes to deletion (i.e. “if I delete this, how much space will I actually get back”). You can delete a snapshot with 1TB of data in it and end up freeing <10 MB because the snapshot was taken only a minute ago. How much space a 5 MB file will take depends on how well its contents compress. Under btrfs this is even affected by where in the filesystem you write the file, because different parts of the filesystem can have different levels of redundancy. And blocks might be deduplicated, too. There is a separation and disconnection between the logical filesystem that userspace sees and the physical disk media that the filesystem driver sees that simply didn’t exist in e.g. ext4. And this can potentially cause big problems because tools like df(1) examine the logical filesystem and expect that that’s equivalent to the physical filesystem.
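
                               As a concrete sketch of the ZFS side of this (syntax as I understand zfs(8), so double-check before relying on it), you can at least get a per-dataset breakdown of where space went, and dry-run a destroy to see what it would give back:

                                   zfs list -o space tank/data          # USED split into snapshots, children, refreservation, etc.
                                   zfs destroy -nv tank/data@oldsnap    # -n = dry run, -v = print how much space would be reclaimed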

                              1. 2

                                This is awesome. Thanks for writing all this!

                                1. 2

                                  Sure thing :-) I’m glad you enjoyed it. I also found your original article fascinating. Sometimes I wonder about posting long things like this because a) I wonder if I’m getting way too detailed/if I just like talking too much and b) I have a lot of experience as a technologist, but almost exclusively as a hobbyist (as opposed to in industry) - “production” for me is mostly a single server running in my house that I administer by hand. I always wonder if I have some glaring blind spot that’s going to make me say something silly. So it’s super nice to see that at least some other people thought it made sense :D

                                  As a side note to just tack onto the end of my original comment: all of the ZFS/btrfs examples I listed above are about things that you actually can calculate/understand if you know where to look, but I just thought of an example where (AFAIK) that’s not the case: under ZFS at least, how do you answer the question, “if I delete both these snapshots, how much space will I free?” If both snapshots are identical, but you deleted a 1GB file since they were both taken, zfs(8) will report that deleting either snapshot will free 0 bytes. And yet, deleting both will free 1GB. This is fundamentally harder to present in UI because instead of “if you perform x, y will happen”, it is “if you perform x, y will happen; if you perform a, b will happen, but if you perform x and a, then y, b, and some third effect will happen”. Unix CLIs really like outputting tabular data, where rows are things (disks, datasets, etc.) and columns are properties about that thing. But given that kind of tabular format, it is virtually impossible to usefully express the kind of combinatorial effect I’m describing here, especially because given x operations that could be performed (e.g. destroying a particular snapshot), listing the combinatorial effects would require outputting ℙ(x) rows.

                                  Again, this is all AFAIK. (If someone knows how to answer this question, please correct me because I actually have this problem myself and would like to know which snapshots to delete! Maybe it can be done with channel programs? I don’t know much about them.)

                                  1. 2

                                    It’s a fascinating UI problem you bring up wrt combinatorial options. Perhaps an explorable UI with a tree of options would make sense. Maybe there needs to be a tool just for the purpose of freeing up space that calculates the best options for you.

                                    1. 2

                                      A tree UI is actually a phenomenal idea. Whenever I tried to think of a solution to this problem before, the best I could come up with was usually a program that would let you simulate different filesystem operations (perhaps it would actually run the real operations in-kernel, but in a “fake” ZFS txg that was marked to never be committed to disk?) and then interrogate the results. You’d be guessing and checking at the different combinations, but at least you’d actually get a solid answer.

                                      The problem with having a tool to calculate the “best” way is that what’s “best” is incredibly subjective. Maybe I have a few snapshots taking up a large amount of space, but those backups are incredibly valuable to me so I’d rather free up smaller amounts of space by destroying many more smaller but less important snapshots. I really do think a tree UI would work well though, especially if it had a good filtering system - that would help alleviate the power set explosion problem.

                                2. 2

                                  You’re absolutely correct that swapping has a role to play. I just don’t think it’s used much in high-performance, low-latency systems - like web servers. Also: the disk cache is my hero <3.

                                  The tradeoff between optimization and predictability is a good point. Another example I see in my work is how much more complexity caching adds to the system. Now, you have to deal with staleness and another point of failure. There’s even a more subtle issue with positive and negative caching. If someone sends too many uncached requests, you can overload your database, no matter your caching setup.

                                  1. 2

                                    Yeah, this is a great point. I think the predictability problem in both our examples is closely related to capacity planning. Ideally you’re capacity planning for the worst case scenario - loads of uncacheable requests, or lots of undeduplicatable/incompressible/etc. data to store. But if you can handle that worst case scenario, why bother optimizing at all?

                                    I think really all this optimization is not really buying you additional performance or additional storage space, which is how we often think about it. It’s buying you the ability to gamble on not hitting that worst case scenario. The reward for gambling is “free” perf/storage wins… but you’re still gambling. So it’s not actually free because you’re paying for it in risk.

                                    Side note: we love the disk cache! <3

                                3. 5

                                  Thank you!

                                  does anyone know if people actually use virtual memory to implement sparse arrays?

                                  I just searched “virtual memory sparse array” and found this real life example of this being useful with numpy: https://stackoverflow.com/a/51763775/1790085

                                  That’s the premise of this article

                                  Sounds like a really interesting paper!

                                  We learn about swapping as one of the primary reasons for using virtual memory in college, but then in the real world it’s essentially not used at all.

                                  I’ll point out that the other, possibly more important feature of virtual memory is memory permissions. The efficiency benefits of physical memory sharing are also significant (the canonical example is sharing libc between every process.) Virtual memory is also important for implementing copy-on-write semantics; sometimes the kernel needs to write to userspace memory and relies on a page fault to tell if this is a CoW page.
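
                                   (A quick way to see the sharing in practice, assuming a Linux box with /proc mounted: every process maps the same libc object, and the kernel backs the read-only text pages with the same physical memory.)

                                       grep libc /proc/$$/maps      # the shell's libc mappings
                                       grep libc /proc/self/maps    # grep's own mappings of the very same file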

                                  I’ll have to see if the paper talks about replacing these.

                                  1. 2

                                    They propose hardware support for memory permissions - extra metadata for each memory location, with the hardware doing checks before allowing access. They argue that this can still be a net win given how much chip space is given to virtual memory infrastructure.

                                     It should be possible to share physical memory without virtual memory, no? A process simply gets permission to read shared code segments, like libc code. Of course, it would need its own copy of libc global state. There might need to be an extra bit of indirection from libc code to the global state if the physical addresses of the state may vary across programs (they could be in the same place under virtual memory).

                                    Since they argue swapping could be implemented at the application layer, perhaps copy-on-write could be too?

                                  2. 3

                                    people actually use virtual memory to implement sparse arrays

                                    I think sometimes but not often. If you only use virtual memory for your sparse array, you don’t know which entries are set and which aren’t. Without knowing that, you can’t skip reading and multiplying the unset entries.

                                    Iirc Linux has a flag for turning off checks in mmap() that normally prevent you allocating silly numbers of pages in one go. The description of it says it’s there for some scientific programs that wanted unlimited overcommit.

                                    The memory overhead of potentially having an entire page allocated for a single entry, when the entries are spread out, may be unwelcome. On the other hand sometimes people work with sparse matrices that have large contiguous dense chunks embedded in a colossal sea of zeroes.

                                    e.g. scipy has several completely separate representations available for sparse matrices https://docs.scipy.org/doc/scipy/reference/sparse.html

                                    1. 2

                                      Dense chunks in an ocean of zeros would be a good case for using virtual memory for sparse arrays. I hadn’t thought of that.

                                      1. 3

                                        FWIW I don’t mean to imply that virtual memory is necessarily a great way to implement that, just that the “entire page for one number” thing doesn’t bite you so hard when your matrices look like that.

                                         I think you’re still likely to benefit from a representation where you write down where each dense block is, plus a dense array holding all the entries.

                                  1. 7

                                    QUIC being hard to parse by router hardware is a feature, not a bug. IIRC (and I may not) this is why encryption was originally introduced in the protocol. I believe that it wasn’t until TLS 1.3 started maturing that it was integrated into QUIC to also provide strong security guarantees, but to be honest I’m really unsure on this point and I’m too lazy to Google at the moment. Maybe someone else can tell us?

                                     In any case, the reason QUIC being hard for routers to parse is a feature is that it ensures protocol agility. I don’t know the details, but there are things that could in theory be done to improve TCP’s performance that in practice cannot be, because routers and other middleboxes parse the TCP headers and then break when they encounter a tweaked protocol. QUIC’s encryption ensures that middleboxes are largely unable to do this, so the protocol can keep evolving into the future.

                                    1. 2

                                      Google QUIC used a custom crypto and security protocol. IETF QUIC always used TLS 1.3.

                                      1. 2

                                         While there are definite benefits to it, like improved security from avoiding all attacks that modify packet metadata, it also means you can’t easily implement “sticky sessions”, for example, i.e. keeping the client connected to the same server for the whole duration of the connection. So yeah, it’s always a convenience/security tradeoff, isn’t it…

                                        1. 2

                                          I am not really a QUIC expert but I don’t really understand the issue here. The Connection ID is in the public header, so what prevents a load balancer from implementing sticky sessions?

                                          1. 2

                                            Oh I’m far from an expert too. You’re right, if the router understands QUIC it will be able to route sticky sessions. If it only understands UDP (as is the case with all currently deployed routers) - it won’t be able to, since the source port and even IP can change within a single session. But that’s a “real-world” limitation, not the limitation of the protocol, actually.

                                            1. 5

                                              What kind of router are you thinking of?

                                              A home router that can’t route back UDP to the google chrome application is just going to force google to downgrade to TCP.

                                              A BGP-peered router has no need to deal with sticky sessions: They don’t even understand UDP.

                                              A load balancer for the QUIC server is going to understand QUIC.

                                              A corporate router that wants to filter traffic is just going to block QUIC, mandate trust of a corporate CA and force downgrade to HTTPS. They don’t really give two shits about making things better for Google, and users don’t care about QUIC (i.e. it offers no benefits to users) so they’re not going to complain about not having it.

                                              1. 2

                                                You should take a look at QUIC’s preferred address transport parameter (which is similar to MPTCP’s option). This allows the client to stick with a particular server without the load balancing being QUIC aware.

                                        1. 7

                                          I can’t put my finger on what I dislike about this story. What’s AWS supposed to do, mail him a cheque? Take him out for dinner?

                                          If (hypothetically) AWS deployed one of my open source projects right now I’m pretty sure I’d put it in the README and in my resume and feel like I got what I wanted out of it.

                                          1. 5

                                             It’s not about what is legally required, but about what is just nice and collaborative behavior. Nobody is forced by law to be nice to each other, but generally people are. If somebody is a dick there is no law preventing them from being one, but they are still a dick.

                                            1. 3

                                               Personally I think it’d be both sensible (nobody knows the code better) and polite (what better way to say thank you) to offer the author a job working on the ‘as a service’ team.

                                              1. 3

                                                They’re supposed to credit him, at minimum. It’s in the article.

                                                 There’s definitely a discussion to be had about what AWS could or should have done beyond that - for example, whether they should have paid him in some way is definitely a gray area, since it’s open source and offered free of charge, and the author knew that. But they couldn’t even put the original author’s name anywhere in the product UI? Even if that wasn’t intentional, it’s a symptom of a larger company culture problem inside AWS that gives them a relationship with open source that is at times collaborative and what we would generally call good open source citizenship (as Andi Gutmans points out) and at other times frankly parasitic.

                                                 Gutmans appears to be asserting that AWS contributes a lot to various upstreams and is a good open source citizen (probably true) and that therefore they couldn’t possibly be copying/stealing from different open source upstreams (probably false). This argument is a complete non sequitur. I mean, am I reading his argument incorrectly or unfairly? Someone correct me if so.

                                                1. 2

                                                   They could easily pay the author some kind of royalty per customer, and both sides would be better off for it.

                                                2. 2

                                                  About the only thing that most permissive licenses don’t allow you to do is claim you wrote someone else’s software. In a lot of locales, there’s a notion of ‘moral rights’ in copyright law (it’s per-state in the US and is a complete mess). In the copyright page for a book published in the UK, for example, you will find something like ‘the moral right of the author to be associated with this work has not been infringed’. This is something that’s been baked into copyright law for a long time: the understanding that even if someone else has bought or otherwise acquired the rights to profit from your work, you still retain the right to be credited.

                                                  Amazon has fulfilled the letter of their obligation here by including something in the notices file but they’ve done so in the absolute minimal way possible. It wouldn’t have cost them any more to be a bit more public and say ‘this product is built on top of X, thanks to the author for their great work!’.

                                                  Imagine if you’d written a book and the publisher put someone else’s name on the cover and yours in tiny text on the copyright page. It’s exactly equivalent from a legal and moral perspective: both are fine legally, but doing either without explicit approval from the author makes you a dick.

                                                  1. 1

                                                    Imagine if you’d written a book and the publisher put someone else’s name on the cover and yours in tiny text on the copyright page.

                                                    I don’t buy it. The more appropriate analogy is if I put my book on a public book-sharing website, with a license that allowed people to do whatever they wanted with my book, and then complained when people did whatever they wanted with my book.

                                                  2. 2

                                                    Quoting the post:

                                                    He said he hadn’t given the license for Headless Recorder a lot of thought because it’s just a browser extension full of client-side code

                                                    I guess this is the issue. If it upsets you so much that a company uses your code with no upstream collaboration, you should actually give more thought to which license you choose for your project. There are licenses out there that prevent exactly this type of situation.

                                                    1. 1

                                                       The cloud protection license is a great development. But yes, the article seems to be about people who’re grumpy they didn’t think of it earlier. Amazon isn’t really known to be an ethical company; you can’t expect them to treat you better than they treat any of their employees out of politeness.

                                                      1. 1

                                                         If you’re making money out of someone else’s work, even if it is in the public domain or has no license at all, buy them a gift. It didn’t even need to be Amazon (the company); members of the very team that built the service could have done so.

                                                        Basic politeness and courtesy.

                                                      1. 28

                                                        Unix was never as simple as we’d like to remember – or pretend – that it was. Plenty of gotchas have always been lurking around the corners.

                                                        For instance, newlines are totally legit in filenames. So in the absence of insane names, ls |foo will write filenames with one name per line to foo’s stdin. Usually it’s fine to treat ls output as a series of newline-separated filenames, because by convention, nobody creates filenames with newlines in them. But for robustness and security we have things like the -0 argument to xargs and cpio, and the -print0 argument to find.
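
                                                         A minimal illustration, assuming GNU-ish find and xargs; the second pipeline survives names containing spaces or newlines, the first does not:

                                                             ls | xargs wc -c                                        # fragile: whitespace in names splits arguments
                                                             find . -maxdepth 1 -type f -print0 | xargs -0 wc -c     # robust: NUL-delimited names survive any legal filename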

                                                        For a system that is based on passing around data as text, the textual input and output formats of programs are often ill-suited to machine parsing. Examples of unspecified or underspecified text formats are not difficult to find. I’m really glad to see some venerable tools sprouting --json flags in recent years, and I hope the trend continues.
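
                                                         Two examples of that trend, assuming a Linux box with reasonably recent iproute2 and util-linux:

                                                             ip -json addr show    # interface addresses as JSON instead of a scrape-hostile column layout
                                                             lsblk --json          # block devices as JSON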

                                                        1. 5

                                                          Anything but JSON. If plain text is being used because of its readability, JSON is largely antithetical to that purpose.

                                                          1. 14

                                                            JSON fits a nice sweet spot where both humans and machines can both read and edit it with only moderate amounts of anguish. As far as I can tell there is not a good general-purpose replacement for JSON.

                                                            1. 8

                                                               a long article promoting JSON with less than a full sentence on S-expressions

                                                              1. 4

                                                                What? It’s marked to-do. Here, I’ll just do it. Check the page again.

                                                              2. 3

                                                                What about Dhall?

                                                                1. 2

                                                                  You might consider including EDN, I think it makes some interesting choices.

                                                                  Another point: the statement that JSON doesn’t support integers falls into a weird gray area. Technically it’s not specified what it supports (https://tools.ietf.org/html/rfc8259#section-6). If you’re assuming the data gets mangled by a JS system, you’re limited to integers representable by doubles, but that’s a danger point for any data format.
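
                                                                   A quick way to see where that gray area bites, assuming a JS runtime such as node is handy: 2^53 + 1 is perfectly valid JSON, but it is not representable as a double, so it silently rounds.

                                                                       $ node -e 'console.log(JSON.parse("9007199254740993"))'   # 2^53 + 1
                                                                       9007199254740992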

                                                                  1. 1

                                                                    I actually like this quite a bit, thanks!

                                                                  2. 1
                                                                    1. 7

                                                                      Looks fine, but it’s binary and schema-defined, so it makes very different design tradeoffs than JSON does. It’s not an alternative to JSON, it’s an alternative to protobuf, cap’n proto or flatbuffers, or maybe CBOR or msgpack. There’s a plethora of basically-okay binary transfer formats these days, probably because they prevent people from arguing as much about syntax.

                                                                      1. 4

                                                                        I won’t go into details about where, but at work we have used stateless tokens for the longest time. For us, it’s been a terrible design decision and we’re finally moving off it. Why? Decryption is CPU bound, so it doesn’t scale nearly as well as memory lookups, which is what stateful tokens represent. Moreover a lot of our decryption libraries do not seem to be particularly consistent (high variance if we assume that the distribution is somewhat normal) in their timing. This poses a problem for optimizing the tail end of our latency. At small to medium scales stateless tokens are fine, but as we took on higher scale it just didn’t work. Memory lookups are fast, consistent, and scale well.

                                                                      2. 1

                                                                        You should post this as an article! A few comments:

                                                                      3. 3

                                                                        Anything but JSON.

                                                                        Careful what you wish for…

                                                                        1. 1

                                                                          FreeBSD has had libXo for a while: https://wiki.freebsd.org/LibXo

                                                                        2. 4

                                                                          You can also legitimately give a file a name that starts with a dash, making it challenging to access or delete unless you know the trick.

                                                                          1. 3

                                                                            I remember reading a book on UNIX back in the day (1994? around then) which talked about this issue. The given solution in this professional tome was to cd up and then delete the whole directory.

                                                                            (Asking how to handle this problem was also a common question in interviews back in the day, maybe still today I don’t know.)

                                                                            1. 4

                                                                              That’s… Wrong, at best. rm ./-rf always worked, even when the tool is buggy and doesn’t support -- argument parsing termination.

                                                                              1. 3

                                                                                The man page for (GNU coreutils) rm now mentions both methods prominently. I believe you’ll get a prompt if you try it interactively in bash too.

                                                                                1. 6

                                                                                  Yeah but kids these days don’t read man, they google, or at best, serverfault.

                                                                                  </oldmanyellsatcloud>

                                                                                  1. 14

                                                                                    No wonder they google. Have you tried reading a man page without knowing Linux inside and out? They all pretty much suck. Take the tar man-page for example. It says it’s a “short description” of tar, while being over 1000 lines long, but it fails to include ANY examples of how to actually use the tool. There’s examples on how to use different option styles (traditional and short options), a loooong list of flags and what they do in excruciating detail, a list of “usages” that don’t explain what they do and what return values tar can give.

                                                                                    I mean, imagine you need to unpack a tar.gz file, but you have never used tar before and you are somewhat new to Linux in general, but you have learned about the man command and heard you need to use tar to unzip a file (not a given really) so you dutifully write man tar in your terminal and start reading. The first line you are met with looks like this:

                                                                                    tar {A|c|d|r|t|u|x}[GnSkUWOmpsMBiajJzZhPlRvwo] [ARG…]

                                                                                    Great. This command has more flags than the UN headquarters. You look at it for a couple seconds and realise you have no idea what any of the switches mean, so you scroll a bit down:

                                                                                    tar -c [-f ARCHIVE] [OPTIONS] [FILE…]

                                                                                    Cool. This does something with an archive and a file (Wouldn’t it be helpful if it had a short description of what it does right there?). What it does is a mystery as it doesn’t say. You still have to scroll down to figure out what -c means. After scrolling for 100 lines you get to the part that lists out all the options and find -c. It means that it creates an archive. Cool. Not what we want, but now that we are here maybe we can find an option that tells us how to unpack an archive?

                                                                                     -x, --extract, --get

                                                                                    Sweet! We just found the most common usage at line 171! Now we scroll up to the top and find this usage example:

                                                                                    tar -x [-f ARCHIVE] [OPTIONS] [MEMBER…]

                                                                                    The fuck is a MEMBER? It’s in brackets, so maybe that means it’s optional? Let’s try it and see what happens. You write tar -x -f sample.tar.gz in your terminal, and hey presto! It works! Didn’t take us more than 10 minutes reading the man page and trying to understand what it means.

                                                                                    Or, if you understand how to use modern tools like Google to figure out how to do things, you write the query “unzip tar.gz file linux” into Google and the information box at the top says this:

                                                                                    For tar.gz. To unpack a tar.gz file, you can use the tar command from the shell. Here’s an example: tar -xzf rebol.tar.gz.

                                                                                    You try it out, and what do you know? It works! Took us about 10 seconds.

                                                                                    It’s no wonder that people search for solutions instead. The man files were obviously not written for user consumption (maybe for experienced sysadmins or Linux developers). In addition, this entire example assumes you know that tar can be used to extract files to begin with. If you don’t know that, then you are shit out of luck even before you open the man file. Google is your only option, and considering the experience of reading man files, no surprise people keep using Google instead of trying to read the “short description” that is the size of the fucking Silmarillion!

                                                                                    /rant

                                                                                    1. 4

                                                                                      I don’t disagree with the general sentiment here, but I think you’ve found a man page that is unusually bad. Here’s some excerpts from some random ubuntu box.

                                                                                       

                                                                                      it fails to include ANY examples of how to actually use the tool.

                                                                                      EXAMPLES
                                                                                           Create archive.tar from files foo and bar.
                                                                                                 tar -cf archive.tar foo bar
                                                                                           List all files in archive.tar verbosely.
                                                                                                 tar -tvf archive.tar
                                                                                           Extract all files from archive.tar.
                                                                                                 tar -xf archive.tar
                                                                                      

                                                                                       

                                                                                      Cool. This does something with an archive and a file (Wouldn’t it be helpful if it had a short description of what it does right there?).

                                                                                      Mine has, comfortably within the first screenful:

                                                                                        -c, --create
                                                                                              create a new archive
                                                                                      

                                                                                       

                                                                                      Not what we want, but now that we are here maybe we can find an option that tells us how to unpack an archive?

                                                                                      Something like 20 lines below that:

                                                                                        -x, --extract, --get
                                                                                              extract files from an archive
                                                                                      

                                                                                       

                                                                                      Anyway, I don’t think man pages are intended to be good tutorials in the general case; they’re reference materials for people who already have an idea of what they’re doing. Presumably beginners were expected to learn the broad strokes through tutorials, lectures, introductory texts etc.

                                                                                      I think that split is about right for people who are or aspire to be professional sysadmins, and likely anyone else who types shell commands on a daily basis—learning one’s tools in depth pays dividends, in my experience—but if it’s the wrong approach for other groups of people, well, different learning resources can coexist. There’s no need to bash one for not being the other.

                                                                                      1. 2

                                                                                        This is a GNU-ism, you’re supposed to read the Info book: https://www.gnu.org/software/tar/manual/tar.html

                                                                                        But that also lacks a section detailing the most common invocations.

                                                                                        OpenBSD does it better: https://man.openbsd.org/tar

                                                                                        Of course, on the 2 Debian-based systems I have access to, info pages aren’t even installed… you just get the man page when you invoke info tar.

                                                                                        1. 1

                                                                                          I was just going to bring up info. I believe in many cases manpages for GNU tools are actually written by downstream distributors. For example Debian Policy says every binary should have a manpage, so packagers have to write them to comply with policy. Still more GNU manpages have notes somewhere in them that say “this manpage might be out of date cause we barely maintain it; check the info documentation.” Really irritating. Honestly I never learned how to use info because man is Good Enough™. I mean, come on. Why must GNU reinvent everything?

                                                                            2. 1

                                                                               I don’t think the author has to deny this; the difficulty of teaching doesn’t have to be the same as the difficulty of using. The difficulty in using complicates the system, which then makes it harder to teach – for example because of --json flags.

                                                                            1. 6

                                                                               Funnily enough, I always thought that because VS Code gained so many users so quickly, it was destined not to last long. My understanding is that everything is supposed to just work with minimal configuration: install this or that “plug-in” and it’s ready. That means users don’t have a vested interest in it, so it’s easier to switch away.

                                                                               If anything, I think 1. VS Code’s status as the reference implementation for LSP and 2. proprietary Microsoft extensions will keep it alive. But not for decades: the technology it is based on makes that improbable, and corporate backing doesn’t guarantee extended support when it’s free – just think of how many Google services were killed.

                                                                              1. 4

                                                                                People writing plugins for it is the investment they are making. And the plugin ecosystem is huge in VS Code, so the collective investment made from users is also huge. I think that VS Code is here to stay for a while.

                                                                                1. 9

                                                                                   Probably the best thing VSCode has given all of us is LSP; this makes it a lot easier to build language-specific plugins for $any editor (and the entire model of having an external process take care of all of that stuff is much nicer too, IMO). This alone ties people less to VSCode (or any other editor/IDE) than a few years ago, since “intellisense”-like features will work equally well anywhere (given good support for LSP, of course).

                                                                                  1. 7

                                                                                    Is this really the case? We used to have a supremely customizable browser that was leapfrogged by a way less customizable but faster browser within a few years to the point where the more customizable browser threw out its customizability and adopted the plugin model of its competitor.

                                                                                    1. 5

                                                                                      You do give a good point. But at the time Firefox ditched XUL it already had problems with the extension community fizzling out. Also to note, the initial Chrome market growth was mainly people switching to it from IE. Anecdotally, Firefox users mainly stuck with what they used until the XUL removal.

                                                                                      1. 3

                                                                                        That’s true. While I’m not a plugin developer, even as a user I remember the frequent issues where plugins would not work in new Firefox versions for a while until they got updated, at which point they again only worked until the next release. I can see that as a plugin developer you’d do that once, twice, but if you need to keep up every 6 weeks that’s going to take a toll.

                                                                                        Yet somehow I think that throwing extensibility out the window has made it a worse browser (“Mozilla Chromium”), and if not for my contrarian attitude there would be very little to keep me in Firefox-land. The recent layoffs at Mozilla also have not cast a good light on the long-term viability of the browser.

                                                                                    2. 5

                                                                                      It’s an investment for the plugin creators, but not for the people using them. My understanding is that it’s just click-to-install and go. That level of investment can easily be redirected to whatever other editor asks for the same effort.

                                                                                      1. 5

                                                                                        You can invest just as much time configuring VS Code and the extensions you install as you can in vim if you really want to. That you can just click a button to install instead of manually downloading a plugin or cloning a repo to your .vim folder or adding some lines to your .vimrc or whatever the process is for installing a package on emacs is nothing but a welcome improvement.
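
                                                                                        (For comparison, the “manual” vim route isn’t much work these days either. Roughly, using vim 8’s native packages, with vim-surround standing in for any plugin:)

                                                                                            # "cloning a repo to your .vim folder", spelled out:
                                                                                            mkdir -p ~/.vim/pack/plugins/start
                                                                                            git clone https://github.com/tpope/vim-surround \
                                                                                                ~/.vim/pack/plugins/start/vim-surround
                                                                                            # Anything under pack/*/start/ is loaded automatically on the next vim start.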

                                                                                        I agree that it’s good to be wary of Microsoft, but for now this is probably the best open source editor available out there and I’m using it as long as it lasts.

                                                                                        1. 3

                                                                                          Sure, it’s configurable and programmable, but is that usually done? My point was just that, since most people just install a plug-in and let it be, those same people could easily move back and forth between other editors with similar capabilities.

                                                                                          1. 2

                                                                                            The fact that there are specific plugins is important. Companies are starting to create plugins specific for their product for VS Code, e.g. Bitbucket integration or AWS Toolkit. These extensions aren’t that replaceable, so the move between editors gets more complicated. In other words, you could say that some users invest into VS Code by getting used to using specific extensions.

                                                                                            1. 1

                                                                                              But these extensions will get recreated for the next editor/IDE du jour, since Amazon will of course go where the people are; there is no inherent reason why AWS Toolkit would require VS Code. I remember the hype of creating IDEs based on Eclipse, but most of them have had their lunch eaten by Jetbrains IDEs and VS Code these days.

                                                                                              1. 1

                                                                                                If those editors are able to get popular without all the big extensions that everybody relied on having been ported yet.

                                                                                                1. 1

                                                                                                  These big extensions only got created after VS Code got popular; it would make little sense for companies to invest into a tiny niche editor and its extensions.

                                                                                                  1. 1

                                                                                                    Right, exactly. I think you’re missing my point - you’re saying that the big extensions will move to the next popular editor, but I’m saying that (what would be) the next popular editor might not have a chance to get popular in the first place because it was missing the big extensions people expect. It’s a chicken-and-egg problem.

                                                                                              2. 1

                                                                                                So it’s not only proprietary Microsoft extensions, which are not allowed to be ported to other editors or even other platforms, but extensions from third-party service providers as well.

                                                                                                So in other words: Microsoft VS Code is on its way to becoming the Windows of text editors. Should anyone be surprised?

                                                                                            2. 1

                                                                                              The process for installing a package in Emacs is clicking a button. In this screenshot it looks like a hyperlink that says [Install]

                                                                                        2. 1

                                                                                          But not for decades: the technology it is based on makes that improbable

                                                                                          I’d love to hear more elaboration on this. I’m assuming it means because it’s built on the web languages, but from my perspective, if any language is indicating longevity at this time it would be the web languages?

                                                                                          1. 1

                                                                                            Yes, from my experience, web technologies and preferred practices change a lot, and quickly. I don’t see that as a reliable foundation. Other than that, Electron is based on Google’s Chromium, and with their increasing monopoly over the web, what they will be doing is also a great uncertainty. I’m not guaranteeing anything, just stating my guesses.

                                                                                            1. 2

                                                                                              Got it, yeah saying preferred practices change a lot in the web world is an understatement. Web technologies have elements of stability (old websites from the 1990s still just work), and elements of instability (how you’re supposed to make a website changes almost everyday). Point blank my view on this is it doesn’t really matter, technology choices are really a problem that only affects resource-strapped projects. E.g., platform momentum can be leveraged to solve technology problems, but no amount of technology improvements will ever create platform momentum. In other words, if the shifting sands of web technology are ever a problem for VS Code, it will be because VS Code is already dead for some other reason.

                                                                                              (Which also means that if you want to still be able to use VS Code even once it’s no longer popular, then technology choice is important. Personally I think VS Code’s value is inextricably tied to its momentum, so that’s not me.)

                                                                                        1. 10

                                                                                          I submitted this because this is the second time this week I’ve seen other posts recommending moving the sshd listening port to an unprivileged port, and I think this is always a terrible idea.

                                                                                          1. 43

                                                                                            Now, back to SSH: when we start SSH on port 22, we know for a fact that this is done by root or a root-process since no other user could possibly open that port. But what happens when we move SSH to port 2222? This port can be opened without a privileged account, which means I can write a simple script that listens to port 2222 and mimics SSH in order to capture your passwords. And this can easily be done with simple tools commonly available on every linux system/server. So running SSH on a non-privileged port makes it potentially LESS secure, not MORE. You have no way of knowing if you are talking to the real SSH server or not. This reason, and this reason alone makes it that you should NEVER EVER use a non-privileged port for running your SSH server.

                                                                                            The author is suggesting trusting port 22 because it is opened by a “root” process. There is a “way of knowing if you are talking to the real SSH server or not”, and it has actually been one of SSH’s features since its first release. I would trust any port, no matter the “privilege level” required to listen on that port, for a single reason: I trust the SSH server based on its fingerprint, not on its listening port; and I know that my server’s key data is only readable by root, which has been the case in almost all default SSH installations for the last 20 years.
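
                                                                                            (Concretely, and regardless of which port sshd listens on, that check is roughly the following. Host, port and key type here are just placeholders:)

                                                                                                # On the server, via the console or some other out-of-band channel:
                                                                                                ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub

                                                                                                # On the client, the first connect shows the same fingerprint to accept or reject:
                                                                                                ssh -p 2222 user@example.com
                                                                                                # "ED25519 key fingerprint is SHA256:..." must match what the server printed.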

                                                                                            Now, let’s pretend you STILL want to move the port away because you get so many attacks on your SSH port. First of all: are you able to logon as root? If so, fix that now. Secondly: are you using passwords? If so, fix that now and change into public key authentication.

                                                                                            I want to move the port away because of the insane amount of traffic that I have to pay for (if I rent a server, VPS, or anything similar which bills me on network egress/ingress). Disabling password access (for any user) will not stop dumb port scans and SSH fingerprinters from looking at my SSH banner and then deciding, based on this information, to just try out username/password combinations, even when my server rejects this authentication method.
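
                                                                                            (For reference, the whole change is a few lines of sshd_config; the port number is just an example:)

                                                                                                # /etc/ssh/sshd_config (sketch)
                                                                                                Port 2222                    # move off 22 to shed most of the scanner traffic
                                                                                                PermitRootLogin no           # no direct root logins
                                                                                                PasswordAuthentication no    # public-key auth only
                                                                                                # then reload sshd (service name varies by distro)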

                                                                                            The rest of the arguments are personal opinion.

                                                                                            1. 8

                                                                                              Besides, by this reasoning creating a connection to the many many services that run on port >1024 is a bad idea too. Connect to MySQL on 3306? Oh noes! Have your app run on localhost:8080 and a proxy on *:80? Oh noes!

                                                                                              1. 3

                                                                                                Please move your MySQL port to 306 and launch MySQL as root.

                                                                                                1. 1

                                                                                                  call me crazy but I don’t think “you risk an attacker accessing your database” and “you risk an attacker having a shell to do whatever they want” are really equivalent.

                                                                                                  1. 1

                                                                                                    Well, in most cases the DB has much more value to the attacker than your machine, so I would say that, from a pragmatic viewpoint, the DB is more likely to be targeted.

                                                                                                2. 8

                                                                                                  the insane amount of traffic that I have to pay for

                                                                                                  how much money per month do you estimate you were paying to handle traffic from people attempting to ssh into a given node?

                                                                                                  1. 3

                                                                                                    About 2 euro cents a month, per host.

                                                                                                    1. 1

                                                                                                      the question is: how many resources do these concurrent connections take, which are completely unnecessary and are filling up your logs?

                                                                                                      1. 3

                                                                                                        Clearly not enough to make log tuning worthwhile.

                                                                                                        A lot of these blanket statements ignore the fact that action or inaction is perfectly reasonable dependent on threat model. But of course, most people making blanket statements aren’t applying a threat model when doing so.

                                                                                                    2. 6

                                                                                                      This was basically what I was going to say.

                                                                                                      If someone can somehow knock down sshd and listen on the same unrestricted port, they would still have to present the appropriate host keys.

                                                                                                      Even then, LSMs like SELinux etc. can put restrictions on who can name_bind on any port you want. The only caveat is that you have to write the policy for it. I am strongly against the <1024 privileged-ports restriction in the era of LSMs.
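
                                                                                                      (The simplest SELinux case doesn’t even need a hand-written policy, since the stock policy already has a port type for ssh; letting sshd name_bind to an extra port is one labelling command. 2222 is just an example:)

                                                                                                          # Tell SELinux that TCP 2222 is an SSH port, so sshd's domain may bind it:
                                                                                                          semanage port -a -t ssh_port_t -p tcp 2222
                                                                                                          # Verify:
                                                                                                          semanage port -l | grep ssh_port_t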

                                                                                                      1. 1

                                                                                                        I am strongly against the <1024 privileged-ports restriction in the era of LSMs.

                                                                                                        Can you expand?

                                                                                                        1. 1

                                                                                                          With an LSM you can deny opening any port to all applications and then allow opening ports per application. So on a server it allows for much greater security, as you can list directly which applications are allowed to open which ports (and you can even make it so that no port requires the superuser, as the application/user combo is handled by the LSM).

                                                                                                          1. 1

                                                                                                            This is an argument for LSM-based port binding policies, not against the <1024 requires root policy. Unless the two are mutually exclusive?

                                                                                                            1. 1

                                                                                                              Not exclusive, but even with an LSM allowing the use of a port <1024 you still need to run the given program as root. So all you gain is more complexity instead of simplification.

                                                                                                      2. 2

                                                                                                        I trust the SSH server based on its fingerprint

                                                                                                        I very rarely know the fingerprint of a server before connecting to it.

                                                                                                        For my most commonly used hosts, I can look them up with a little bit of work (sourcehut, github, gitlab), but of those, only github made it easy to find and verify. For a lot of hosts in a corporate cloud, though, the instances are torn down and replaced so often that host keys are essentially meaningless.

                                                                                                        1. 7

                                                                                                          If you’re not verifying host keys, you’re basically trusting the network - but presumably you don’t trust it, otherwise you could use telnet instead of ssh.

                                                                                                          Maybe look into SSH host key signing, so you just need one public signing key to verify that the host has been provisioned by a trusted entity.
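
                                                                                                          (Rough sketch of how that looks with OpenSSH certificates; names and paths are made up:)

                                                                                                              # On the trusted signing machine: certify a host key (-h marks it as a host cert)
                                                                                                              ssh-keygen -s ca_key -I web01 -h -n web01.example.com \
                                                                                                                  /etc/ssh/ssh_host_ed25519_key.pub

                                                                                                              # On the host, in sshd_config:
                                                                                                              #   HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

                                                                                                              # On every client, a single known_hosts line trusts all hosts signed by the CA:
                                                                                                              #   @cert-authority *.example.com ssh-ed25519 AAAA... (the CA public key)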

                                                                                                          1. 3

                                                                                                            It is also possible to use ssh with kerberos. Then you know that the server is the correct one. Even without ssh-fingerprints.

                                                                                                          2. 5

                                                                                                            You should really start checking the fingerprints. Ignoring that crucial step is how you get hacked. There are way more attack vectors than you might think of. An attacker could get in, for example, through your job’s documentation intranet and modify an IP in a document. Or, for example, a DNS server of yours could be compromised. If you use password authentication in these situations, you are essentially letting the attacker into all servers you have access to.

                                                                                                            Other comments already pointed out viable solutions. You should adopt one of them or simply start checking the fingerprints. What you are doing is dangerous.

                                                                                                            1. 6

                                                                                                              The “implied trust on first use”-model works well enough for many – though perhaps not all – purposes. It’s the “host fingerprint changed”-warning that provides almost all of the security.

                                                                                                              1. 2

                                                                                                                Most of the security, no doubt. Almost all… that is debatable. If something happens once in every thousand cases, would you not care to protect against it because you already provided 99.9% of the security?

                                                                                                                What security is, in essence, is accounting for the unlikely yet exploitable cases. You look at an attack vector as a corner case until it is not a corner case anymore. This is how security threats evolve.

                                                                                                                1. 1

                                                                                                                  The thing is, what is the attack vector here, and how do you really protect from it? In your previous post you mentioned modifying the internal documentation to change the IP; but where do you get the host key? From the same internal documentation? Won’t the attacker be able to change that, too?

                                                                                                                  You can use SSHFP records; but of course an attacker can potentially get access to the DNS too, as you mentioned.
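
                                                                                                                  (For reference, SSHFP records are at least cheap to generate and consume. A sketch with a made-up hostname; the catch is that clients only really gain anything from them with working DNSSEC validation:)

                                                                                                                      # On the host: print SSHFP resource records for the zone file
                                                                                                                      ssh-keygen -r host.example.com -f /etc/ssh/ssh_host_ed25519_key.pub

                                                                                                                      # On clients, in ssh_config:
                                                                                                                      #   VerifyHostKeyDNS yes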

                                                                                                                  The thing is that good distribution of these fingerprints is not a trivial problem if you’re really worried about these kind of attacks. Are they unfeasible? Certainly not, and if you’re working for a bank, CA registrar, or anything else that has high security requirements you should probably think about all of this. But most of us don’t, and the difficulty of pulling all of this off effectively is so high that most of us don’t really need to worry about it.

                                                                                                                  We don’t lock our houses with vault doors; a regular door with a regular lock is a “good enough” trade-off for most cases. If you’re rich you may want to have something stronger, and if you’re a bank you want the best. But that’s not most of us.

                                                                                                                  1. 1

                                                                                                                    The attack vector is making you believe you are initially trusting the host you think you know, when it is in fact another host.

                                                                                                                    But you are right: if you misguide a user into connecting to another host, you could also show them another fingerprint and trick them into believing it is legit too. Fingerprints are just a huge number usually displayed as an unintelligible string of characters. It’s not like users recognise them by heart.

                                                                                                                    I do check them if I change computers, and if I connect to a known machine I ask a colleague to verify it. But I’ll agree that it’s a trade-off and that maybe it’s OK for most people to just trust.

                                                                                                        2. 3

                                                                                                          I think this post and the discussion around it is a waste of time. Right now it is wasting my time. But I wanted to come here and proclaim that on the spectrum of terrible ideas, it doesn’t even register. Do you have a scale that starts at terrible and then just goes to some k multiple of terrible?

                                                                                                          I moved my ssh port back in 2002 (the year), and you know what, I no longer had to see 150+ log messages a day about failed logins; it went to zero. Like 1-1. Mission Accomplished.
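
                                                                                                          (These days the equivalent check is something like the following; log location and service name vary by distro:)

                                                                                                              # How much failed-login noise did the last day bring?
                                                                                                              grep -c 'Failed password' /var/log/auth.log
                                                                                                              # or: journalctl -u ssh --since yesterday | grep -c 'Failed password'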

                                                                                                          Please enumerate all the other terrible ideas I shouldn’t follow, might be a good list.

                                                                                                          edit, btw, I am just poking good terrible fun at you.

                                                                                                        1. 2

                                                                                                          I wonder if putting projects like that on those rent-a-coder sites would get good results? It’s probably a faster way to get things done than a generic contribution to the project in question, but then there’s the question of getting the changes accepted…

                                                                                                          I’m basing this on absolutely nothing, but my guess is that there’s a fairly high likelihood of the solutions proposed by many of the coders on such sites being quick-and-dirty and thus hard to get accepted. But I’d gladly be proven wrong, and specifying that the work needs to follow project guidelines and such might go a long way. Making payment depend on actually getting the changes into mainline is probably not fair though, since a rejection might be due to various reasons out of the control of the coder working on it.

                                                                                                          1. 3

                                                                                                            Maybe you could specify that the payment depended either on getting patches into mainline or no response from upstream for two or three weeks or something.

                                                                                                            1. 3

                                                                                                              There might be reasons unrelated to the work done that could get it rejected by mainline, e.g. “it’s not the direction we want to take the project”, and that wouldn’t be fair for the one who took the job. So IMHO the work should be paid for once done, and getting it accepted elsewhere should be the job of the one who ordered it. It gets a bit tricky if the reason for rejection is “needs cleanup to adhere to project standards” and things like that of course.

                                                                                                              And before doing something like this it’s probably a good idea to discuss the matter with the mainline maintainers and offer to pay them to do it, and if they’re not interested in doing the work ask if they’re open to accepting the proposed changes if they’re performed by someone else.

                                                                                                            2. 1

                                                                                                              agreed on that very last part.

                                                                                                              To be honest I have never really personally purchased fixed-term contract work. I know of some “serious” people who take those kinds of contracts, but one experience through $JOB led to a bunch of not-so-great code…

                                                                                                              I think I might experiment with this though, it might be the right price/instantaneousness balance

                                                                                                            1. 2

                                                                                                              I don’t really understand the argument that the PYTHONPATH problem can’t be fixed. Python doesn’t have an absolute stability guarantee, and even in the Python 3.8 release notes I see a removal of (admittedly small) API surface over security concerns. Surely the fact that the vast majority of uses of this behavior are accidental (as the article notes) would justify its removal? Especially because there is a perfectly good alternative - simply use . instead of empty string.

                                                                                                              Am I missing something?
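
                                                                                                              (To make the accidental case concrete, a sketch from memory, with /opt/lib and some_tool.py made up:)

                                                                                                                  # PYTHONPATH was never set, so the usual append idiom yields ":/opt/lib" --
                                                                                                                  # a leading colon, i.e. an empty element, which Python ends up treating as
                                                                                                                  # "the current working directory of whatever Python process runs next":
                                                                                                                  export PYTHONPATH="$PYTHONPATH:/opt/lib"

                                                                                                                  # Anything importable lying around in that directory now shadows real modules:
                                                                                                                  echo 'print("this is ./json.py, not the stdlib")' > /tmp/json.py
                                                                                                                  cd /tmp
                                                                                                                  python3 /usr/bin/some_tool.py   # its "import json" now runs /tmp/json.py

                                                                                                                  # Requiring an explicit "." instead of the empty string would at least force
                                                                                                                  # that intent to be spelled out.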

                                                                                                              1. 5

                                                                                                                Many of the listed problems would be solved by simply disabling JavaScript for “the document web”, blocking remote resources, etc. No need to reinvent everything. We could optionally go back to XHTML 1.0 (or HTML 4 Strict), use object tags for video/audio and call it a day. As for CSS, we’d need to remove its Turing completeness to avoid it becoming “the next JS”. If any functionality is missing, it can be reintroduced as HTML elements after careful consideration. I do like the HTML5 tags like <nav> and <aside>, which can be used by screen readers etc. to focus on the content of the page.

                                                                                                                We could also keep JS, but put a sleep(0.01) between every JS statement that needs to be executed making it slow on purpose and forcing developers to only write the JS that is really needed to optimize some aspects of the site.

                                                                                                                See also this pyramid from this talk.

                                                                                                                1. 1

                                                                                                                  Why in the world would anyone opt in to that, though?

                                                                                                                  On my (static) website, you can read almost everything but if you let me load JavaScript, then you get some cute toys I wrote and you get to see Webmentions loaded from webmention.io. I’ve mostly written my website “the right way”, but I’m disincentivized from switching to the “document web” because it breaks a few things for no benefit.

                                                                                                                  “Well yes, but you’d do it because you want to be disciplined and support a new kind of experience.” If everyone had this kind of discipline, we wouldn’t have this problem in the first place because everyone would write good semantic HTML and bare-minimum JS.

                                                                                                                  “Sure, but it wouldn’t be opt-in because we’d get all the browsers to enforce these restrictions for the document web, and everyone would be forced to change.” How would you figure out what’s the document web and what’s the application web? In particular, without pages being able to lie about being on the application web?

                                                                                                                  1. 1

                                                                                                                    I’d propose the distinction between “application web” and “document web” is this: if you need JS (at all) it is the “application web”. If a site works without requiring any JS, it is part of the “document web”, even if it has optional JS to tweak some little UI/UX things otherwise not available/possible.

                                                                                                                    Webmentions can also be used without any JS in the browser, at least back when I implemented support for it. That has the benefit of increasing the performance for your users and does not require them to leak the sites they visit to a (centralized?) service.

                                                                                                                    It requires a little bit of discipline yes, and not everything is possible. This is something everyone has to decide for themselves obviously. For me figuring all this stuff out is part of the fun! I avoid centralized systems, especially “big tech” or surveillance capitalists, try to protect my users and make it as fast as possible. I want to be part of the decentralized “document web”…

                                                                                                                1. 12

                                                                                                                  It’s good to see people asking the right questions, but unfortunately this post seems to be mostly wrong answers.

                                                                                                                  I think this situation is a lot like the Scheme programming language around the release of r6rs; everyone was dissatisfied with how big it got, and in the next rev it got split into r7rs-small which addressed the needs of people who wanted something easy to implement and good for teaching, and an r7rs-big superset which was for people who wanted maximal practicality to build cool applications in. A solution like that would be a much better fit than going off to create something that’s gratuitously incompatible.

                                                                                                                  1. 11

                                                                                                                    But we don’t have a hard distinction like “students” to work with here. A browser that looked like a regular browser and worked with some pages, but then failed to work when clicking on a hyperlink, such a browser would run pretty hard into the uncanny valley. The word people would call it is “broken”.

                                                                                                                    There’s no such thing as a little compatible. The only way to stop supporting Turing-complete features in the document web is to own that we’re going our own way, and set expectations appropriately on every single page that the browser visits.

                                                                                                                    1. 6

                                                                                                                      A browser that looked like a regular browser and worked with some pages, but then failed to work when clicking on a hyperlink, such a browser would run pretty hard into the uncanny valley. The word people would call it is “broken”.

                                                                                                                      This is literally the experience I get in my primary browser where I run noscript, and I love it, because good web sites are so fast, but shitty web sites are … still shitty. I have a backup browser I use for shitty web sites and web applications. If more people made web sites that worked as web sites and we saved web applications for things that actually needed to be web applications, it would be even better.

                                                                                                                      1. 4

                                                                                                                        Noscript sounds interesting, and I’m going to check it out. That sounds like something I would like, but that unfortunately does not sound like something everyone would want. Compatibility is mostly binary if people are to accept something as “working”

                                                                                                                      2. 4

                                                                                                                        Well you could simply require a special HTTP header like: X-New-Web: true for anyone to opt in. If the browser doesn’t see that header, then it can just break.
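
                                                                                                                        (Checking for such an opt-in header is also trivially scriptable. A sketch, with the header name obviously made up:)

                                                                                                                            # Does this page opt in to the "new web"?
                                                                                                                            curl -sI https://www.oilshell.org/ | grep -qi '^x-new-web:' \
                                                                                                                              && echo "render with the strict document-web engine" \
                                                                                                                              || echo "old web: fall back (or refuse, per the new rules)"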

                                                                                                                        A major point is that I would like to publish my content on the new web and the old web. I don’t want to make two different versions of the site.

                                                                                                                        I would change the HTML of http://www.oilshell.org to accommodate some “new web”, if someone drafted a spec and had a minimal browser (similar to SerenityOS or something). As long as it’s more or less a subset of HTML, CSS, JS. I got rid of Google Analytics quite a while ago, so it should work fine.

                                                                                                                        Making a whole other build toolchain from Markdown is possible, but not really desirable. And my Markdown contains HTML. So those posts would be too much work to convert to the new web.

                                                                                                                        A dirt simple HTML parser is very easy to write. I already wrote one and I’m tempted to do some kind of simple “batch” HTML renderer myself…

                                                                                                                        1. 6

                                                                                                                          And my Markdown contains HTML

                                                                                                                          I really wonder if people calling for a gratuitously incompatible markdown-based “new web” understand that HTML documents are valid markdown…

                                                                                                                          1. 3

                                                                                                                            That works from the writer/publisher’s perspective, but look at it from your viewer’s perspective. If they use newBrowser they can read your site, but you risk a bad experience anytime you link to something from the old web. In practice your viewers will get frustrated using newBrowser.

                                                                                                                            We’ve had 25 years now to support screenreaders and other niche apps, and they show us what happens to niche use cases. The only way to make this use case work is to commit to it entirely. It’ll take a longer time to build an audience, but I think the growth has a better chance of being monotonic, where a more compatible attempt will start from a higher base but also find it difficult to break through some ceiling.

                                                                                                                            Now that I think about it, this disagreement about compatibility also explains our respective projects :)

                                                                                                                            1. 2

                                                                                                                              Well say I want to link to this thread on lobste.rs.

                                                                                                                              The alternative to linking is to write on the new web “open up your old web browser and visit https://lobste.rs/s/tmzyrk/clean_start_for_web”. Is that really better?

                                                                                                                              No matter what, you will have URLs on the new web. Just like there are URLs on billboards.

                                                                                                                              I guess the browser could pre-fetch the links and for X-New-Web: true it could display it in a different color. You would have red links for current web, and green links for new web, etc.


                                                                                                                              Also, it occurs to me that “a new web” was already invented, and in probably the most promising place: the transition from desktop to mobile, mobile devices vastly outnumbering desktop these days:

                                                                                                                              https://en.wikipedia.org/wiki/Wireless_Application_Protocol

                                                                                                                              iPhone could have used it. It was meant for such devices. And making Safari run on a phone wasn’t easy. But they went with the compatible web instead …

                                                                                                                              1. 2

                                                                                                                                Prefetching links is klunky. You can’t link to red sites too often. No matter what you do, trying to change the web in incompatible ways must be accompanied by changes to behavior. The medium has to influence the message. Trying to ignore the representation is a siren. If you want to do that, just stick with the current web.

                                                                                                                                Now, that’s just my opinion, and I may be wrong. Starting from scratch may also not work. But I started this thread responding to the idea of subsetting html. Regardless of whether you agree with my approach, I hope the problems with the subsetting approach are at least equally plausible.

                                                                                                                                1. 3

                                                                                                                                  Yeah at this point it’s theoretical – I don’t think either solution really works… “Replacing the web” or “a clean start for the web” is close to impossible. It will take at least 100 years, i.e. I wouldn’t bet on it in my lifetime.

                                                                                                                                  I would liken it to “replacing Windows”. Well that never happened. OS X became a lot more popular, but it’s still like 5% market share I’d guess. Linux never replaced Windows. Thousands of businesses still run on Windows.

                                                                                                                                  But what did happen is iOS and Android. Windows became less important. So the web isn’t going to be replaced by anything that looks like it. It won’t be replaced by a simple alternative that serves Markdown, or any subset of HTML/CSS/JS.

                                                                                                                                  There will have to be some compelling new functionality for people to adopt and move there. I don’t know what that is, but subsetting alone doesn’t work. (If I had to place my bets, it would be something involving video, but that’s a different discussion)

                                                                                                                                2. 1

                                                                                                                                  The alternative to linking is to write on the new web “open up your old web browser and visit https://lobste.rs/s/tmzyrk/clean_start_for_web”. Is that really better?

                                                                                                                                  I don’t think there’s anything wrong with letting the OS/Browser/Client choose the behavior for handling out-of-band links. I believe that one of the things that led to the huge scope creep of the web was the insistence on handling everything over the web: images, text, videos, real-time streams, games, etc. All of that has caused tremendous bloat in the HTTP protocol (everything from ETags to HTTP pipelining) and still makes HTTP a “jack of all trades, master of none” protocol.

                                                                                                                                  I’m also a big fan of purpose-built apps myself, because they offer me control over how I consume content. I often scrape sites or host local proxies to let me browse portions of the web with the experience I prefer. With the current system of the web handling everything, this sort of behavior is difficult and discouraged. I’d love to see a world where individuals controlling their content was encouraged.

                                                                                                                                  1. 1

                                                                                                                                    Looks like you edited the comment after my previous response. By no means am I suggesting killing all links. There are precedents for groups of pages that can link within the group but not between groups. Links from www to gopher don’t work without special software, so in practice they often behave like separate universes. I think that’s fine, even great.

                                                                                                                                3. 2

                                                                                                                                  I would change the HTML of http://www.oilshell.org to accommodate some “new web”, if someone drafted a spec and had a minimal browser (similar to SerenityOS or something). As long as it’s more or less a subset of HTML, CSS, JS. I got rid of Google Analytics quite a while ago, so it should work fine.

                                                                                                                                  Gemini (mentioned in the article) has a Markdown-adjacent format (which I informally called “Gemtext”) and has several server and client implementations that exist, and the protocol is simple enough that even clients poorly maintained still are widely compatible with the rest of the ecosystem. It remains to be seen if this continues, but I would love to see Oil Shell’s presence on Gemini!

                                                                                                                                  Making a whole other build toolchain from Markdown is possible, but not really desirable. And my Markdown contains HTML. So those posts would be too much work to convert to the new web.

                                                                                                                                  This is a tough one, but HTML is not really designed for reified document representation. HTML’s “ancestor” SGML fit that role more tightly, but after the WHATWG dispute was resolved, HTML became purely a presentation layer with some vestiges of its SGML past. XHTML was an attempt to make a reified document representation, but due to the aforementioned WHATWG dispute, XHTML was abandoned. Nowadays, HTML is a moving target that is meant to be a language for web presentation and an intermediate language for web applications. Relying on HTML feels a little fraught to me given this status.

                                                                                                                                  This doesn’t really remove the pain of maintaining documents in both a reified form (Markdown or something else) and then having to concretize them into HTML and other formats. In Gemini space, there’s been a lot of talk of the experience (read: pain) of having to unify Gopher, HTML, and Gemini content into Markdown or some other reified format. I’ve wondered idly myself whether XML deserves a refresh, because I see creeping ad-hoc alternatives (Pandoc, Docbook, Jupyter Notebooks, etc) instead of a single reified document format.

                                                                                                                                  1. 4

                                                                                                                                    I don’t understand the last 2 paragraphs. I don’t know what “reified document representation” is. IMO HTML is a perfectly good format for documents. It’s not a great format for writing, but that’s why Markdown exists. However, Markdown needs HTML to express tables, image tags, etc.

                                                                                                                                    I have many scripts that generate HTML, that do not use Markdown, like:

                                                                                                                                    https://www.oilshell.org/release/0.8.pre10/test/spec.wwz/survey/osh.html

                                                                                                                                    https://www.oilshell.org/release/0.8.pre10/benchmarks.wwz/osh-parser/


                                                                                                                                    It is perfectly fine to have a design goal for a simple document format and network protocol. However I would not call that a “new web” (not that you did). It’s something else.

                                                                                                                                    I would make an analogy to shell here. I assume that the user is busy and has other things to do. Shell is a tool to get a job done, which may be a minor part of a larger project.

                                                                                                                                    That is, I don’t think that everyone is going to spend hours poring over shell scripts. I imagine they have some crappy bash script that is useful and they don’t want to rewrite. So Oil is purposely compatible and provides a smooth upgrade path.

                                                                                                                                    https://news.ycombinator.com/item?id=24083764

                                                                                                                                    Likewise, a website is usually a minor part of a larger project. If there’s no upgrade path, and it doesn’t provide unique functionality, then I’m not going to bother to look at it. That’s not to say that other people won’t be attracted to simplicity, or another aesthetic goal.

                                                                                                                                    But in terms of “getting work done” I don’t see the appeal to a new format. Just like I don’t care to rewrite thousands of lines of shell scripts (which I have) in another language just for fun.

                                                                                                                                    1. 1

                                                                                                                                      It’s for that reason I stick with HTML 4.01 for my blog. Yes, there are a few tags in HTML5 that would be nice to use, but HTML5 is a moving target. I’ll stick with HTML 4, thank you very much.

                                                                                                                                      1. 1

                                                                                                                                        This confuses me. HTML5 is still very backwards-compatible. I believe there have been a handful of breaking changes but overall it’s basically additive. The “state of the art” (with SPAs and whatnot) changes, sure, but I mean… you can just ignore that. I do.

                                                                                                                              1. 13

                                                                                                                                I think the best feedback I’ve heard in these situations (Python, Drupal, Angular, Perl) is that you can go a long way by renaming the project something similar like DrupalNext or whatever, and softly forking the project, leaving the current now-legacy community infrastructure in place. You can make it clear you won’t be investing in the legacy system any more, just let whichever interested parts of the community are there carry the legacy project as far as they want.

                                                                                                                                Angular followed this pattern relatively well. There’s a relatively google-friendly split between Angular and AngularJS, all the legacy site and documentation is still there, and there are still occasional maintenance releases.

                                                                                                                                1. 3

                                                                                                                                  Yeah, this is an interesting question. Seems like it’s kinda like the uncanny valley for technologies. That is, if the technology is sufficiently different, people will be okay with it because it’s not a breaking change, it’s simply a different thing.

                                                                                                                                  For example: Perl 6 seems to have avoided this trap by explicitly stating that backwards compatibility was not a goal, this is something new, etc. In fact apparently the language was renamed to Raku last year to reflect this. And sure enough Perl 5 is still (AFAICT) a thriving community.

                                                                                                                                  Contrast that with a Python 3: long, drawn out, with many arguments against it and much dragging of feet. That’s not to say that Python 3 was a bad idea, or not executed well given the constraints (I’m sure there are many arguments that it wasn’t, but that’s beside the point here)… but it does make me wonder.

                                                                                                                                  1. 11

                                                                                                                                    The perl5/6 split killed perl. Python got hurt with the python 3 compatibility changes, but it’s come through in the end. Maybe if perl 6 had been raku from the start, perl 5 wouldn’t have hurt as much.

                                                                                                                                    1. 3

                                                                                                                                      The perl5/6 split killed perl.

Citation needed. The way I see it, the Perl community is broadly divided into sysadmin hackers who want to Get Stuff Done(tm) and language hackers who are fascinated by Perl’s intricacies.

As time passed, other languages surpassed Perl in popularity, including in the domains where Perl was traditionally a strong contender (systems administration, web programming). This is just normal language evolution. Every language is not just the code, it’s also the community, and maybe unfortunately for Perl, both subgroups just loooooved writing code that was close to write-only. This made it easy for other languages’ boosters to promote their vision instead.

                                                                                                                                      For example, comparing Python to Perl:

• Perl: there’s more than one way to do it! - Python: <endless fretting about whether a feature is “idiomatic”>
                                                                                                                                      • Perl: there’s a module for that! - Python: “batteries included”
                                                                                                                                      • Perl: diffuse bunch of mailing lists drive development - Python: BDFL, numerous PEPs
                                                                                                                                      • Perl: freedom to write code your way (C, Lisp, whatever) - Python: bondage & discipline

Seeing this, the language hackers embarked upon Perl 6 (now Raku), and in the tradition of grand promises there was to be a shiny new modern Perl in a couple of years, ready to fulfill both groups’ needs and leapfrog the competition. But time passed, Raku got more and more complicated and never really got very fast, and the sysadmins decided to continue to work on Perl 5.

                                                                                                                                      (The recently announced Perl 7 is Perl 5 but with an explicit focus on not retaining absolute backwards compatibility.)

                                                                                                                                      Perl is only “dead” if you view current TIOBE popularity rankings as proof of life. But Perl is getting relatively (and maybe absolutely) less popular - however, I’d argue that Raku was an attempt to reverse that course, not the cause of the slide.

                                                                                                                                      1. 3

                                                                                                                                        I’d argue that Raku was an attempt to reverse that course, not the cause of the slide.

I’d agree if you mean that renaming “Perl 6” to “Raku” was an attempt to reverse that course. But I believe the spectre of perl 6 being so radically different from perl 5 was a great accelerant of that slide. As someone who used perl 4 and then perl 5 for big ambitious new things (including web apps written against mod_perl because CGI was too slow), I found that the announcement of perl 6 put the brakes on my use of perl starting around 2000 or 2001. I still continued to use it as a nicer shell scripting language after that (and still do today), but I resisted starting anything new in perl 5 that I thought would be large and need maintenance around the time they announced the direction of perl 6.

I think that split was even worse than the python 3 transition because it took so long to take shape.

                                                                                                                                        1. 2

                                                                                                                                          Thanks for expanding. I’ve only used Perl 5 for personal projects so don’t really have any insight on how Perl 6 impacted larger projects.

                                                                                                                                        2. 2

                                                                                                                                          People stopped writing new code in Perl 5 because it was neither improved nor promoted, and they did not write new code in Perl 6 because it was not ready or usable.

                                                                                                                                          Therefore “The perl5/6 split killed perl”.

Renaming Perl 6 to something else came long after this effect had locked in. Sure, no-one is going around erasing old Perl code off of distro CDs. But very few people are writing new Perl code, or replacing old Perl code with new Perl code. If we’re going to argue the semantics, then we have to admit that at the very least Perl smells funny.

                                                                                                                                          Perl 7 is the right move. It was the right move twenty years ago, it’s the right move now. Hopefully it can be a less painful upgrade than Python 3, which has delivered no benefits to users but created a lot of work for maintainers. Python burnt a lot of goodwill with the upgrade. Perl doesn’t have any to spare after the Perl 6 fiasco.

                                                                                                                                        3. 1

                                                                                                                                          Interesting, okay. I’m not at all a Perl person so I didn’t know this (hence “AFAICT” in my parent comment). Thanks for the correction!

                                                                                                                                          1. 1

Note that Perl 5 itself wasn’t compatible with Perl 4. Thus, Perl went through at least one “Python 3” moment, when people had to adjust their code from 4 to 5, and it survived that, presumably because the required changes were not that big and the language still had momentum.

                                                                                                                                            1. 1

                                                                                                                                              Perl 5.000 was released on October 17, 1994

That’s why I don’t buy that comparison. This is not ancient history, but compare the number of users writing “applications” in Perl 4 in 1994 (deliberately ignoring sysadmins with scripts for now) with the number of people writing Python 2 code (of all varieties, because small scripts were really easy to port from 2 to 3) when Python 3 came out. I have no numbers, but I’d be willing to bet it’s orders of magnitude in users, number of projects, and LOC.

                                                                                                                                              1. 1

                                                                                                                                                But wasn’t the Perl language backwards compatible? I.e. you could keep updating to the latest version of Perl 5, but your ancient Perl 4 scripts still ran.

                                                                                                                                                An approach like this really leads to cruft in a language, though.

                                                                                                                                                1. 2

Not completely compatible, AFAIR. Larry Wall even said somewhere that he was happy that the “Black Perl” poem was no longer a valid Perl program.

                                                                                                                                          2. 1

I would agree, but there are not always enough resources to work on the new and the old version simultaneously. I think that’s what happened to Perl, and what Python has managed to avoid.

                                                                                                                                          1. 4

Do I understand correctly that if you cannot boot due to a failed upgrade and you had a checkpoint, you could roll back right from the bootloader instead of reaching for your favorite recovery ISO?

                                                                                                                                            1. 3

                                                                                                                                              I believe so, yes. And because it’s a whole-pool checkpoint, it works for a pretty broad definition of “failed upgrade”.

                                                                                                                                              1. 2

FreeBSD already has ZFS Boot Environments, with which you can boot into an earlier saved Boot Environment and get a perfectly running system. The ZFS pool checkpoint is a different thing. It is useful when you modify or alter ZFS at the pool level - think, for example, of the ZFS UPGRADE or ZPOOL UPGRADE commands.

A ZFS Boot Environment does not protect you from that, but rolling back to a ZFS pool checkpoint will undo those ZFS UPGRADE or ZPOOL UPGRADE commands as if they never happened.

To get a better idea of ZFS Boot Environments, check this:

                                                                                                                                                More info about ZFS Pool Checkpoints here:
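Roughly, the two mechanisms look like this (a sketch only - the pool name zroot is an assumption, and the exact workflow depends on your setup; see the zpool(8) and bectl(8) man pages):

    # whole-pool checkpoint around a risky pool-level operation
    zpool checkpoint zroot                       # take the checkpoint before zpool/zfs upgrade
    zpool checkpoint -d zroot                    # discard it once the upgrade looks good
    # to undo instead, re-import the pool at the checkpoint (typically from a rescue or boot environment):
    zpool export zroot
    zpool import --rewind-to-checkpoint zroot

    # boot environments, by contrast, snapshot the root filesystem:
    bectl create pre-upgrade                     # before an OS upgrade
    bectl activate pre-upgrade                   # select it for the next boot if the upgrade goes wrong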

                                                                                                                                              1. 5

                                                                                                                                                For real, we don’t hear “thanks” very often and expressions of gratitude are often our only reward for our work. We do appreciate it :)

                                                                                                                                                I thank maintainers for maintaining the project in almost all bug reports or feature requests I file, and if I use and appreciate their project I specifically mention that. I also thank maintainers for reviewing my PRs: I’ve been on the other side of that, and I’m sometimes awful at reviewing PRs to projects I supposedly “maintain”.

I vaguely recall some advice for newbies somewhere to not thank anyone anywhere in issues and patch discussions and such. The rationale was that if you set the standard that you’re going to thank people, it’s now rude if you don’t, whereas before it was fine not to, because it’s a highly technical context and it’s expected that you’re not going to waste words on non-technical aspects. I thought it was ESR but maybe not; certainly I just searched (albeit only for ~2 minutes) and couldn’t find the source.

                                                                                                                                                In any case, to this day I still think about this while saying thanks and have to consciously ignore it. I really dislike this advice.

                                                                                                                                                1. 8

                                                                                                                                                  I thank maintainers for maintaining the project in almost all bug reports or feature requests I file

                                                                                                                                                  This can read as “thanks, and I want something from you”, which does feel a bit hollow at times. The best gratitude is delivered unconditionally, and these rare emails and comments always make my day.

                                                                                                                                                  I also thank maintainers for reviewing my PRs: I’ve been on the other side of that, and I’m sometimes awful at reviewing PRs to projects I supposedly “maintain”.

This is a lot better. I always thank contributors for their patches and maintainers for their reviews - it’s more personal; they’ve done work for my sake and I’m thankful for their time.

                                                                                                                                                  The idea of not expressing thanks as a rule is a pretty bad idea, though.

                                                                                                                                                  1. 1

                                                                                                                                                    This can read as “thanks, and I want something from you”, which does feel a bit hollow at times.

Hm, yeah, I’ve wondered a few times if this is how it was coming off. That’s not how I mean it, though; it’s just that by filing the bug I’m already getting in touch with the project, so it’s a good time to say it.

                                                                                                                                                    Is there a better way to do this so it doesn’t seem conditional or trite?

                                                                                                                                                  2. 5

                                                                                                                                                    I thought I’d seen him say something to that effect too, but it looks like we were mistaken.

                                                                                                                                                  1. 2

                                                                                                                                                    If anyone is looking for a decent UPS for home use, I can highly recommend the CyberPower CP1500PFCLCD. I have two, one powering my networking equipment and a small server, and another powering my workstation. I’ve had a handful of outages, and they worked wonderfully. I checked after reading this article, and this model is line-interactive.

                                                                                                                                                    1. 1

                                                                                                                                                      How long have you had them? Have you observed any degradation during that time?

                                                                                                                                                      1. 2

                                                                                                                                                        I’m only coming up on a year, but I haven’t noticed any major degradation going by the estimated run time that the unit reports.

                                                                                                                                                    1. 4

                                                                                                                                                      The example given is quite extreme. Overall, I think verbosity isn’t necessarily a bad thing (look at vim for example).

                                                                                                                                                      Also, if you’re into comments - just give an example of the input and output of such a Regular Expression. It would do most of the job, and if the reader knows just a bit of regex, they could figure it out, either by themselves or by using tools like regexr and regex101.
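A tiny illustration of that idea (in Python, with a made-up date pattern - not the regex from the article):

    import re

    # Matches e.g. "2021-03-05" -> groups ("2021", "03", "05"); does not match "2021-3-5".
    DATE_RE = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

    m = DATE_RE.match("2021-03-05")
    assert m is not None and m.groups() == ("2021", "03", "05")
    assert DATE_RE.match("2021-3-5") is None

Even a reader who barely knows regex can work backwards from the sample input and output.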

                                                                                                                                                      1. 1

                                                                                                                                                        regex101

+1 for regex101 - their test functions, and also the storage for your tests, so you can embed a link to them as a comment.

                                                                                                                                                        1. 1

                                                                                                                                                          That might be a bad idea if you want your code to last longer than Regex101. Probably okay if it’s purely supplemental to your actual comment, though.

                                                                                                                                                          1. 1

True, but it’s just supplemental, and it gives an easy way to create and verify the regex.

                                                                                                                                                      1. 1

                                                                                                                                                        An authenticated, local attacker could modify the contents of the GRUB2 configuration file to execute arbitrary code that bypasses signature verification.

                                                                                                                                                        If the attacker can do this, they can also overwrite the whole bootloader with something that bypasses signature verification. If you can do this, your system is already compromised.

                                                                                                                                                        1. 2

No, they can’t. Or rather they can, but if Secure Boot is on, the UEFI firmware will refuse to load the modified grub.efi image, so the system won’t boot.

                                                                                                                                                          1. 1

So, this vulnerability allows jailbreaking, but does not affect security against an attacker without a root password?

                                                                                                                                                            1. 3

                                                                                                                                                              How about an attacker armed with a local root privilege escalation vulnerability aiming to achieve persistence?

                                                                                                                                                              1. 1

                                                                                                                                                                https://xkcd.com/1200/

To do what? They already have root, which gives plenty of persistence. I mean, yeah, they can penetrate deeper. They can also exploit an FS vulnerability or infect the HDD or some other peripheral.

But that’s just not economical. In most cases, it’s the data they are after - either to exfiltrate it or to encrypt it.

In other cases they are after some industrial controller. Do you seriously believe there is anyone competent enough to catch the attack but stupid enough not to wipe the whole drive?

                                                                                                                                                                The only thing I can imagine TEE being used for is device lockdown.

                                                                                                                                                              2. 1

                                                                                                                                                                Not sure what you mean by jailbreaking - can you clarify? We’re talking about a laptop/desktop/server context, not mobile. Secure Boot does not necessarily imply that the system is locked down like Apple devices are. See Restricted Boot.

                                                                                                                                                                If the attacker cannot write to /boot, then they can’t exploit this vulnerability. If the attacker has physical access this probably doesn’t hold true, regardless of whether they have a root password. If the attacker is local and has a root password or a privilege escalation exploit then this also doesn’t hold true, and can be used to persistently infect the system at a much deeper level.