Threads for voutilad

  1. 2

    If multiple solutions have similar security according to the threat model, pick the simpler one to reduce the overall attack surface.

    Not just attack surface, but probability of misconfiguring or inadvertently neutering the solution by introducing a regression.

    1. 36

      The core problem is that the only entities currently paying for web browser development have mixed motives. The EU should just buy out Mozilla and make Firefox into the browser for the people instead of waiting around for Google to stop breaking their laws.

      1. 9

        What’s to buy? It’s open source. They can contribute to it or fork it if Mozilla Corp doesn’t like their changes.

        1. 21

          The Mozilla organization, including the expertise necessary to develop and maintain Firefox. It would probably cost more to build an independent organization capable of doing the same thing.

          1. 3

            Which Mozilla organization? The non-profit Mozilla Foundation or the for-profit Mozilla Corporation?

            1. 7

              I’m not sure, what do you think?

              1. 5

                The Mozilla Corporation is owned in its entirety by the Mozilla Foundation. Even if somehow the Foundation were convinced to sell the Corporation, the Foundation is the one that owns the key intellectual property and is the actual steward of the things people think of as “Mozilla”. The Corporation’s purpose is to be an entity that pays taxes and thus can have types of revenue and business deals that are forbidden to a non-profit.

                1. 1

                  The employees who work on Firefox and everything that encompasses work for the Corporation. It has more of a purpose than “taxes”.

                  1. 3

                    I am a former employee of the Mozilla Corporation, so I am aware of what the MoCo employees do.

                    1. 1

                      MoCo gets all of the revenue that’s generated by Firefox and employs most of the developers. All but one of the members of the Firefox Technical Leadership team work for Mozilla Corp; the one who doesn’t did until relatively recently: https://wiki.mozilla.org/Modules/Firefox_Technical_Leadership

                      While the Foundation technically owns the IP, the Corporation controls the direction of the product and collects all of the revenue generated by the work of both its own employees and contributors from the community.

            2. 9

              Declare Firefox a public infrastructure and fund Mozilla or another entity to upkeep and enhance that infrastructure.

            3. 11

              No thanks, I’ve had enough cookie popups for one day.

              1. 55

                The GDPR is specific that cookie banners must not be obtrusive, and that rejecting tracking must be as easy as accepting.

                The only compliant banner I regularly see is from gov.uk, and I find it doesn’t annoy me at all.

                The popups are as obnoxious as possible to make us hate the GDPR. Can’t we oppose the tracking instead of the law telling us when it’s happening?

                1. 8

                  And of course the core thing is you don’t need the cookie popups if you’re not doing random tracking of people!

                  Every cookie popup is an announcement that the site has some automated trackers set up. If you are just using cookies for things like handling sessions you do not need the cookies.

                  1. 8

                    Absolutely. The options are either make your tracking opt-in through genuinely informed consent, or don’t track at all.

                    Companies found the secret third option, which is just ignore the law and dark pattern your users into agreeing to anything.

                    Banners say things like “we need cookies for this site to work” and pretend they need your permission to use them. Ironically they only need permission for the cookies that aren’t essential to make the site work.

                    Hiding things away under “legitimate interest” makes things even more confusing. Are the other things illegitimate interests?

                    1. 2

                      Can someone explain to me what “legitimate interest” actually means?

                    2. 2

                      …you do not need the cookies.

                      Do you mean the cookies or the popups? I’m not familiar with how the GDPR treats non-cookie based things like JWT in local storage and sent with every request.

                      1. 2

                        The same. You require consent to store any data on the user’s computer. However, it does not require consent for some “essential” cookies: for example, a cookie with preferences for a dark/light theme does not require consent if it is a direct action on the website, a cookie containing a session ID does not require consent, etc. That applies to local cookies only, though.

                  2. 11

                    Same. I really wish companies would stop choosing to add them to their websites.

                    1. 4

                      If you already block tracking by any means, you can get rid of those banners using something like https://addons.mozilla.org/en-GB/firefox/addon/i-dont-care-about-cookies/.

                      1. 3

                        Yeah, the EU’s heart was in the right place, but the implementation has been a disaster. It’s like passing a law that murder is okay as long as you say “I am going to murder you” as you take out the knife.

                        1. 27

                          What the EU did was basically passing a law that makes murder illegal. Companies/Murderers just ignore it and go around saying “anyone that doesn’t want to be murdered please answer by saying your name within the next millisecond. Guess no one answered, so you’ve just consented to murder!”

                          GDPR explicitly bans all the annoying dark patterns of cookie banners. A GDPR-compliant cookie banner would ask you once whether you consent to tracking. It’d have one huge no button (but no easily accessible yes button). If you ever click no, it’d have to remember that for as long as possible and close itself immediately. If you click yes, you’d have to go through a handful of options to specifically choose which tracking methods to allow.

                          1. 10

                            So, basically the polar opposite of many cookie popups today, which have a big “I ACCEPT” button and a “More options” button that you have to click to manually turn off all tracking…

                          2. 3

                            Except large Internet companies are much more powerful and accountable to public pressure than murderers, so they should face at least as much public scorn as the lawmakers.

                            1. 2

                              There’s a saying that the road to hell is paved with good intentions.

                              That often means that if someone is not sure how to help, proceeding anyway can create more problems than it solves.

                              1. 2

                                That’s better than having no law against murder. Then we can move away from all the people saying “I am going to murder you.”

                              2. 2

                                Umm… we’ve just today decided to instruct Matomo not to use cookies rather than implement a cookie banner for our new Wagtail-based websites. I think it’s working?

                                1. 1

                                  Cookie popups on websites linked to by Google?

                                1. 12

                                  Nice! Regarding pv(1), I recently noticed that OpenBSD uses ftp(1) to create arbitrary progress bars. For example to get a progress bar when extracting tarballs, you can do:

                                  $ ftp -VmD "Extracting" -o - file:archive.tgz | tar -zxf -
                                  Extracting archive.tgz 100% |*********************| 7580 KB    00:00
                                  

                                  It’s a clever trick that turns ftp(1) into cat(1) with a progress bar. The interface for pv(1) is much nicer, but sometimes it’s nice to use tools that are in the OpenBSD base install.

                                  This technique is also how OpenBSD displays the progress bars during install.

                                  1. 3

                                    I have mixed feelings about this. On one side, it is cool, but on the other side, it looks like ftp is becoming something else. For starters, it is called ftp while it is used for other protocols as well. Now a (cool) trick to make it work as a progress bar. Where’s the next stop? The init system!?! (just joking).

                                    1. 3

                                      …it is called ftp while it is used for other protocols…

                                      Well, the p stands for program, not protocol, these days 😉: http://man.openbsd.org/ftp

                                      The fact it supports locally mounted file systems is news to me! I never knew.

                                  1. 2

                                    Why did ktrace decode this to VIDIOC_S_FMT when it actually wasn’t? This would probably have been a much shorter trip if ktrace wasn’t being actively misleading.

                                    1. 1

                                      If you look at the kdump(1) source, there’s a step where it scans the header files and tries to build a mapping between the ioctl command value and the text name. The challenge is that a command value isn’t really a way to uniquely describe the ioctl(2) call, since it’s really the combo of device (as fd) and command.

                                      This means it’s a best-effort attempt to keep the command values globally unique using some macros like _IOWR, _IOW, etc. Sadly we have some collisions to fix. :-)

                                      edit: I think I misinterpreted your question… I still believe there’s probably something going on in kdump(1), which is where the human-readable translation occurs. I’ll have to look.

                                      1. 1

                                        For example, from sys/sys/ioccom.h:

                                        /*
                                         * Ioctl's have the command encoded in the lower word, and the size of
                                         * any in or out parameters in the upper word.  The high 3 bits of the
                                         * upper word are used to encode the in/out status of the parameter.
                                         */
                                        

                                        The underlying logic:

                                        #define _IOC(inout,group,num,len) \
                                                (inout | ((len & IOCPARM_MASK) << 16) | ((group) << 8) | (num))
                                        

                                        So the signed/unsigned interpretation can goof up the “inout” directional part, but leave the identifying group and num parts intact. Have to look at kdump(1) to see how this manages to still be recognized, though.
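
                                        To make that concrete, here’s a small sketch using the BSD <sys/ioccom.h> macros (the command value below is made up for illustration, not the real VIDIOC_S_FMT):

                                        #include <stdio.h>
                                        #include <sys/ioccom.h>

                                        int
                                        main(void)
                                        {
                                            /* Hypothetical command: group 'V', number 5, int-sized in/out argument. */
                                            unsigned long cmd = _IOWR('V', 5, int);

                                            /* Direction and size live in the upper word; group and num in the lower word. */
                                            printf("group=%c num=%lu len=%lu\n",
                                                (char)IOCGROUP(cmd), cmd & 0xff, (unsigned long)IOCPARM_LEN(cmd));

                                            /* Squeezing the value through a signed 32-bit int flips the sign when the
                                             * IOC_IN bit is set, but the identifying group/num bits come through intact. */
                                            int as_int = (int)cmd;
                                            printf("as int: %d, group=%c num=%d\n",
                                                as_int, (char)IOCGROUP((unsigned)as_int), as_int & 0xff);

                                            return 0;
                                        }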

                                    1. 31

                                      This is cool. The commit message doesn’t really explain the security, so I’ll have a go:

                                      Chroot is not a security tool. If you are in a chroot and run as root, you can mount devfs (on older systems, you can create device nodes) and can then mount the filesystem that contains you and escape trivially. It cannot therefore constrain a root process.

                                      If chroot is allowed for unprivileged processes then it can be used to launch confused deputy attacks. If a privileged process runs in a chroot then it may make decisions based on the assumption that files in certain locations are only writeable by sufficiently privileged entities on the system and may write to locations that are not where it thinks they are. Auditing everything that runs as root to ensure that this isn’t the case is difficult.

                                      This patch provides a simple solution to both problems by denying the ability to run setuid binaries after the chroot has taken place. If you chroot, you can still run any programs that you could previously run but you can’t elevate privileges. This means that you can’t use chroot to mount a confused deputy attack using a setuid binary (you can’t run a setuid binary) and you don’t have to worry about escapes via root privileges (you can’t acquire root privileges).

                                      In summary, this lets me do all of the things I actually want to do with chroot.

                                      EDIT: It looks as if this is being added as part of the work to improve the Linux compat layer. I’m not sure when this was actually merged in Linux, but the patches were proposed in March, so it’s pretty recent. All of the discussion I see about it in Linux is pretty negative, which is a shame because it looks as if this feature was originally proposed for Linux in 2012 and it’s really useful.

                                      1. 2

                                        In summary, this lets me do all of the things I actually want to do with chroot.

                                        What uses do you have in mind? The commits and review comments don’t seem to share any particular use cases.

                                        1. 16

                                          The main one is running a tool in an environment where it can’t touch the rest of my filesystem. This isn’t particularly useful for security (it still has full network access, including the loopback adaptor) but it’s great for testing programs if you want to make sure that they’re not going to accidentally trample over things. It’s also great for a bunch of things like reproducible builds, where you want to make sure that the build doesn’t accidentally depend on anything it shouldn’t, and for staging install files (you can create a thing that looks like the install location, run the install there, and test it without touching anything on the host system).

                                          Container infrastructure can solve a bunch of these problems, but spinning up a container (even on ZFS with cheap O(1) CoW clones) is far more expensive than a chroot.

                                          1. 1

                                            The obvious one is that chroot would be really useful for isolating unprivileged binaries, but you have to let them become root via setuid or something first, then drop to the normal privileges once the chroot is done. Seems kinda backwards.

                                            1. 5

                                              That’s already done today in a lot of applications that drop privilege. I was mainly asking about the use case for unprivileged users that can’t use chroot(2) or chroot(8) because they can’t elevate privilege.

                                              The build system use case @david_chisnall mentions seems the most logical to me. That seems valuable.

                                              When you start using the phrase “isolating unprivileged binaries” it’s getting into the realm of defending against something. And to make some software run in a chroot, you often have to populate the chrooted filesystem with dynamic libraries, etc. It’s a really rough tool for an arbitrary user to just use for running an arbitrary application that doesn’t expect to be chrooted.
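
                                              For reference, a rough sketch of the chroot-then-drop pattern being discussed (error handling trimmed; the directory and the account name are placeholders, and setresuid/setresgid are the BSD/Linux spellings):

                                              #include <sys/types.h>
                                              #include <unistd.h>
                                              #include <grp.h>
                                              #include <pwd.h>
                                              #include <stdlib.h>

                                              /* Sketch only: start as root, confine the process to `dir`, then
                                               * permanently become the unprivileged `user`. */
                                              static void
                                              chroot_and_drop(const char *dir, const char *user)
                                              {
                                                  struct passwd *pw = getpwnam(user);   /* e.g. a dedicated "_myd" account */
                                                  if (pw == NULL)
                                                      exit(1);

                                                  if (chroot(dir) == -1 || chdir("/") == -1)
                                                      exit(1);

                                                  /* Order matters: supplementary groups, then gid, then uid. */
                                                  if (setgroups(1, &pw->pw_gid) == -1 ||
                                                      setresgid(pw->pw_gid, pw->pw_gid, pw->pw_gid) == -1 ||
                                                      setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid) == -1)
                                                      exit(1);
                                              }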

                                        1. 5

                                          Very cool. As someone hacking on OpenBSD’s vmm(4)/vmd(8) I look forward to tracking NVMM changes.

                                          1. 40

                                            Graph database author here.

                                            It’s a very interesting question, and you’ve hit the nerve of it. The long and the short of it is that, much like lambda calculus can represent any program, relational algebra can represent pretty much all database queries. The question comes to what you optimize for.

                                            And so, unlike a table-centric view, which has benefits that are much better-known, what happens when you optimize for joins? Because that’s the graph game – deep joins. A graph is one huge set of join tables. So it’s really hard to shard a graph – this is connected to why NoSQL folks are always anti-join. It’s a pain in the rear. Similarly, it’s really easy to write a very expensive graph query, where starting from the other side is much cheaper.

                                            So then we get to the bigger point; in a world where joins are the norm, what the heck are your join tables, ie, schema? And it’s super fluid – which has benefits! – but it’s also very particular. That tends to be the first major hurdle against graph databases: defining your schema/predicates/edge-types and what they mean to you. You’re given a paintbrush and a blank canvas and have to define the world, one edge at a time. And $DEITY help you if you want to share a common schema with others! This is what schema.org is chasing, choosing some bare minimum.

                                            This is followed on by the fact that most of the standards in the graph database world are academic in nature. If I have one regret, it’s trying to follow the W3C with RDF. RDF is fine for import/export but it’s not a great data model. I wanted to standardize. I wanted to do right by the community. But, jeez, it’s just so abstract as to be useless. OWL goes another meta-level and defines properties about properties, and there’s simpler versions of OWL, and there’s RDFS/RDF* which is RDF about RDF and on and on…. it’s super cool that triples alone can represent pretty much anything, but that doesn’t help you much when you’re trying to be efficient or define your schema. Example: There’s a direct connection to the difference between a vector and a linked list – they both represent an ordered set. You can’t do a vector in triples, but you can do a linked list.

                                            I know I’m rambling a little, but now I’ll get to the turn; I still think there’s gold in them hills. The reason it’s not popular is all of the above and more, but it can be really useful! Especially when your problem is graph-shaped! I’ve implemented this a few times, and things like mapping, and social networks, and data networks, and document-origin-tracing – generally anything that would take a lot of joins – turn out swimmingly. Things that look more like tables (my example is always the back of a baseball card) look kind of nuts in the graph world, and things that look like a graph are wild in third normal form.

                                            So I think there’s a time and a place for graph databases. I just think that a combination of the above counter-arguments and the underlying needs are few enough that it’s under-explored and over-politicized. They work great in isolation, ironically enough.

                                            I’m happy to chat more, but that’s my quick take. Right tool, right job. It’s a shame about how that part of database theory has gone.

                                            1. 10

                                              Full disclosure: I work for Neo4j, a graph database vendor.

                                              Very well said.

                                              I’d add that most of the conversation in responses to OP assumes “transactional” workloads. Graph databases for analytic workloads are a whole other topic to explore. Folks should check out Stanford Prof. Jure Leskovec’s research in the space…and a lot of his lectures about graphs for machine learning are online.

                                              1. 2

                                                The long and the short of it is that, much like lambda calculus can represent any program, relational algebra can represent pretty much all database queries.

                                                When faced with an unknown data problem, I always choose an RDBMS. It is a known quantity. I suspect I’d choose differently if I understood graph dbs better.

                                                I would love to see more articles here on practical uses for graph dbs. In particular, I’d love to know if they are best deployed as the primary datastore or maybe just for the subset of data that you’re interested in querying (e.g., perhaps just the products table in an ecommerce app).

                                                this is connected to why NoSQL folks are always anti-join. It’s a pain in the rear.

                                                Interesting. People use NoSQL a lot. They simply do joins in the application. Maybe that’s the practical solution when it comes to graph dbs? Then again, the point of graph solutions is generally to search for connections (joins). I’d love to hear more on this aspect.

                                                Thank you and the OP. I wish I could upvote this more. :)

                                                1. 1

                                                  Yeah, you’re entirely right that the joins happen in the application as a result. The reason they’re a pain is that they represent a coordination point — a sort of for-all versus for-each. Think of how you’d do a join in a traditional MapReduce setting; it requires a shuffle! That’s not a coincidence. A lot of the CALM stuff from Cal in ~2011 is related here and def. worth a read. That’s what I meant by a pain. It’s also why it’s really hard to shard a graph.

                                                  I think there’s something to having a graph be a secondary, problem-space-only engine, at least for OLTP. But again, lack of well-known engines, guides, schema, etc — it’d be lovely to have more resources and folks to explore various architectures further.

                                                2. 2

                                                  You’re given a paintbrush and a blank canvas and have to define the world, one edge at a time.

                                                  That’s such a great way to put it :)

                                                  Especially when your problem is graph-shaped!

                                                  I think we need collective experience and training in the industry to recognize problem shapes. We’re often barely able to precisely formulate our problems/requirements in the first place.

                                                  Which database have you authored?

                                                  1. 5

                                                    Cayley. Happy to see it already mentioned, though I handed off maintainership a long while ago.

                                                    (Burnout is real, kids)

                                                  2. 2

                                                    Thanks for Cayley! It’s refreshing to have such a direct and clean implementation of the concept. I too think there’s a lot of promise in the area.

                                                    Since you’re here, I was wondering (no obligation!) if you had any ideas around enforcing schemas at the actual database level? As you mentioned, things can grow hairy really quickly, and once they are in such a state the exploration to figure out what needs to be fixed and the requisite migrations are daunting.

                                                    Lately I’ve been playing with an idea for a graph db that is by default a triplestore under the hood, but with a (required!) schema that would look something like a commutative diagram. This would allow for discipline and validation of data, but also allow you to recognize multiple edge hops that are always there, so for some things you could move them out of the triplestore into a quad- or 5-store to produce more compact disk representations to yield faster scans with fewer indexes and give the query planner a bit of extra choice. I haven’t thought it through too much, so I might be missing something or it might just not be worth it.

                                                    Anyway, restriction and grokkability of the underlying schema/ontology does seem like the fundamental limiter to me in a lot of cases, and I was curious whether, as someone who has a lot of experience in the area, you had thoughts on how to improve the situation?

                                                    1. 1

                                                      If you don’t mind me joining in, have you heard of https://age.incubator.apache.org/ ? I’m curious to hear your opinion about whether it can be an effective solution to this problem.

                                                      1. 1

                                                        If I have one regret, it’s trying to follow the W3C with RDF. RDF is fine for import/export but it’s not a great data model. […] it’s super cool that triples alone can represent pretty much anything, but that doesn’t help you much when you’re trying to be efficient

                                                        I’ve been using SPARQL a little recently to get things out of Wikidata, and it definitely seems to have pain points around that. I’m not sure at exactly what level the fault lies (SPARQL as a query language, Wikidata’s engine, etc.), but things that seem to me like they should be particularly easy in a graph DB, like “is there a path from ?x and ?y to a common node, and if yes, give me the path?” end up both hard to write and especially hard to write efficiently.

                                                        1. 2

                                                          This goes a bit to the one reply separating graphs-as-analytics and graphs-as-real-time-query-stores.

                                                          SPARQL is the standard (once again, looking at you W3C) but it’s trying to be SQL’s kid brother — and SQL has its own baggage IMO — instead of trying to build for the problem space. Say what you will about Cypher, Dude, at least it’s an ethos. Similarly Gremlin, which I liked because it’s easy to embed in other languages. I think there’s something in the spectrum between PathQuery (recently from a Google paper — I remember the early versions of it and some various arguments) and SPARQL that would target writing more functional paths — but that’s another ramble entirely :)

                                                      1. 1

                                                        Company: Neo4j, Inc.

                                                        Company site: https://neo4j.com/

                                                        Position(s): .NET Driver Engineer, Application Security Engineer, Build Infrastructure Engineer, Cloud Security Specialist, DevOps Software Engineer, Director of Engineering, Front-end Engineer, SREs, Software Engineers for various specialized fields (Kafka, Kubernetes, Machine Learning, Dev Tools, IAM, and more.)

                                                        Location: Mostly on-site in London, UK or Malmö, Sweden. Remote available if in a nearby region.

                                                        Description: Neo4j is the graph database leader, having pioneered the label property graph model with the goal of helping people make sense of the connections in their data. Neo4j is available in traditional on-premises deployments and now also via the Neo4j Aura fully managed cloud service. We recently closed the largest funding round in database history ($325M) and are hiring across all areas of product research and development.

                                                        Tech stack: Java, Scala, Go. Our GraphQL and DevTools teams work in JavaScript and TypeScript. Driver teams work in .NET (C#), Python, JavaScript, Go, and Java. Some positions work in cloud ecosystems (AWS, Azure, GCP) or with platforms like Kubernetes, Apache Spark, Apache Kafka.

                                                        Contact: Apply at https://neo4j.com/careers/

                                                          1. 6

                                                            Thank you! That site is pretty hard to get through. So much animation/transition. It was disorienting.

                                                            1. 2

                                                              Thanks! You can give your feedback here: https://twitter.com/wesley974/status/1411268862184759300?s=20

                                                          1. 5

                                                            The comparison to Go is an interesting one considering that Go 1.16 doesn’t build Go 1.15 by default. Ran into this at work recently. I’m still very new to using Go so I don’t understand all of the details but it was surprising and annoying to encounter.

                                                            Deno 1.9 had a similar issue too. 1.9 & 1.10 changed how plugins work which broke some packages that used plugins.

                                                            Makes me feel like some projects use semantic versioning as a marketing tactic.

                                                            1. 3

                                                              Makes me feel like some projects use semantic versioning as a marketing tactic.

                                                              Fixed it for you :)

                                                              I wonder if someone’s studied the psychology behind version numbers from the user/consumer side.

                                                              1. 4

                                                                If a project (or company) wants to use versioning as a marketing tactic, that’s fine, but don’t make it look or operate like semver. Chrome and Firefox just use increasing integers. Better yet, tie the version number to when it was released so that people can tell how old it is, like Ubuntu and JetBrains.

                                                            1. 5

                                                              While the paper is ~7 years old (I think), the designs are still applicable and the tools have improved on all platforms (e.g. on OpenBSD there’s pledge(2) now and no longer systrace(1)).

                                                              It would be interesting to see a historical review of how privsep has become more wide-spread both server-side and client/user-agent side.

                                                              1. 22

                                                                Full disclosure: I’ve never written code with epoll directly, only kqueue. Even then, most of my interactions are abstracted by libraries like libevent and the like.

                                                                This article doesn’t make an actual case for claiming kqueue is a “mountain of technical debt.”

                                                                This means that any time you want kqueue to do something new, you have to add a new type of event filter.

                                                                As opposed to adding a new syscall in Linux? If so, I’d rather hear about why one approach is better/worse.

                                                                The conclusion is terse and I’m not sure even correct:

                                                                Hopefully, as you can see, epoll can automatically monitor any kind of kernel resource without having to be modified, due to its composable design, which makes it superior to kqueue from the perspective of having less technical debt.

                                                                epoll magically monitors anything without modification? Maybe…but if you need to add new syscalls, doesn’t that just mean the work is being done elsewhere?

                                                                It feels like there’s a good, valid comparison to be made between some of the capabilities of epoll (e.g. inotify) lacking in kqueue, but these aren’t really dug into.

                                                                1. 15

                                                                  I’m with you. And having used both epoll and kqueue, I have to say that this is the first time I’ve ever seen someone hold up epoll as the API without technical debt. This (admittedly slightly incendiary) article, for example, goes into why epoll had to add EPOLLONESHOT and EPOLLEXCLUSIVE flags, and how, even with those, it’s still a very hard API to use correctly. And even there, Linux finally adopting a more IOCP-like API in io_uring seems to validate that even Linux devs found some things lacking with epoll.

                                                                  I’m not saying kqueue doesn’t have flaws, but I don’t think this article makes its point well—and my own experience using the APIs in the real world goes rather the opposite direction.

                                                                  1. 6

                                                                    The main difference is that kqueue event filters are coupled to kqueue, while “magic” file descriptor types are just like any other kernel handle – can be used with select and poll too, can be passed around between processes, and so on.

                                                                    Of course event filters can sort of be converted into file descriptors by creating a kqueue per filter :) but event filters were not designed for fd passing so that doesn’t work, etc. (The reason we added native eventfds to FreeBSD is that the epoll-shim userspace emulation of it was not fd-passable, for example)

                                                                    1. 4

                                                                      The main difference is that kqueue event filters are coupled to kqueue, while “magic” file descriptor types are just like any other kernel handle – can be used with select and poll too, can be passed around between processes, and so on.

                                                                      Although worth noting that while the design allows this, it is a good way to shoot your foot clean off, because epoll’s semantics subtly differ from what its API suggests – while it appears to deal in file descriptors, what epoll actually registers and operates on are the underlying kernel objects the file descriptors refer to. The negative consequences of this mean that epoll is not really designed for fd passing to work well in most cases any more than kqueue is.

                                                                      It’s incredibly easy to pass around an fd to something that (unknown to you) dup’s it, close said fd when you want to unregister your subscription to its events, and then forever after receive events about the object because while you no longer have a handle pointing at it, the underlying kernel object’s refcount is still > 0.
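
                                                                      A minimal sketch of that footgun on Linux, using a pipe as the quietly-duplicated object:

                                                                      #include <sys/epoll.h>
                                                                      #include <stdio.h>
                                                                      #include <unistd.h>

                                                                      int main(void)
                                                                      {
                                                                          int ep = epoll_create1(0);
                                                                          int p[2];
                                                                          pipe(p);

                                                                          struct epoll_event ev = { .events = EPOLLIN, .data.fd = p[0] };
                                                                          epoll_ctl(ep, EPOLL_CTL_ADD, p[0], &ev);

                                                                          int other = dup(p[0]);   /* something else quietly dups the fd */
                                                                          close(p[0]);             /* "unsubscribe" by closing: doesn't work */
                                                                          (void)other;

                                                                          write(p[1], "x", 1);

                                                                          /* The registration follows the open file *description*, which the dup
                                                                           * keeps alive, so this still reports one ready event. */
                                                                          struct epoll_event out;
                                                                          printf("ready events: %d\n", epoll_wait(ep, &out, 1, 0));
                                                                          return 0;
                                                                      }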

                                                                      It’s a big enough issue that Illumos’ epoll compatibility call intentionally breaks compatibility in this respect because it’s such a foot-gun https://illumos.org/man/5/epoll

                                                                      While a best effort has been made to mimic the Linux semantics, there are some semantics that are too peculiar or ill-conceived to merit accommodation. In particular, the Linux epoll facility will – by design – continue to generate events for closed file descriptors where/when the underlying file description remains open. For example, if one were to fork(2) and subsequently close an actively epoll’d file descriptor in the parent, any events generated in the child on the implicitly duplicated file descriptor will continue to be delivered to the parent – despite the fact that the parent itself no longer has any notion of the file description! This epoll facility refuses to honor these semantics; closing the EPOLL_CTL_ADD’d file descriptor will always result in no further events being generated for that event description.

                                                                      1. 2

                                                                        the Linux epoll facility will – by design – continue to generate events for closed file descriptors where/when the underlying file description remains open

                                                                        Yeah, that sounds bad.

                                                                        epoll is not really designed for fd passing to work well in most cases any more than kqueue is

                                                                        I’m not saying epoll is. The various something-fd’s are, no matter what you poll them with.

                                                                        1. 1

                                                                          close said fd when you want to unregister your subscription to its events

                                                                          That’s not correct usage of epoll, though. To unregister your subscription to some events, you need to call epoll_ctl(EPOLL_CTL_DEL) before closing the fd. If you do that, everything is fine. Trying to skip doing that and instead just close the file descriptor directly has always been wrong.
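
                                                                          In sketch form, the correct ordering looks like this:

                                                                          #include <sys/epoll.h>
                                                                          #include <unistd.h>

                                                                          /* Unregister interest first, then close. After EPOLL_CTL_DEL the
                                                                           * subscription is gone even if other dups of the same open file
                                                                           * description live on elsewhere. */
                                                                          static void
                                                                          unwatch_and_close(int ep, int fd)
                                                                          {
                                                                              epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);   /* NULL event is fine on modern kernels */
                                                                              close(fd);
                                                                          }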

                                                                          1. 5

                                                                            That’s true but kind of tangential to the point, which is that epoll only gives the appearance of being an API designed around file descriptors. You have to use EPOLL_CTL_DEL precisely because it actually operates on file descriptions, and the subtleness of that distinction and the misleading nature of the API “being designed around descriptors” is why people don’t remember or necessarily understand that they have to use EPOLL_CTL_DEL – unless and until you dup the descriptor, everything appears to work just fine with just closing it.

                                                                            It’s always been wrong to shoot your own feet with a footgun too, but the design’s still the fundamental problem.

                                                                            1. 3

                                                                              No, the “footgun” of epoll here is no different from other footguns related to duplicating file descriptors, which for better or worse is very powerful but also requires a certain level of careful programming.

                                                                              For example, a novice might assume that they can pass a file descriptor pointing to some file to a child process, which will duplicate the file descriptor, and then in the parent read and write that file independently with their original file descriptor. Of course, this will fail badly, since the parent and child share the file pointer stored in the open file description: each one’s reads and writes move the other’s position, even though the child and parent are operating on different file descriptors.

                                                                              Or a novice might duplicate the write end of a pipe without realizing that that’s a problem, close the original write end, and then wait forever on the read end of the pipe for a HUP that will never come.
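
                                                                              Sketched out, the pipe variant of that mistake looks like this:

                                                                              #include <stdio.h>
                                                                              #include <unistd.h>

                                                                              int main(void)
                                                                              {
                                                                                  int p[2];
                                                                                  pipe(p);

                                                                                  int w2 = dup(p[1]);   /* handed off somewhere and forgotten */
                                                                                  close(p[1]);          /* "the" write end is closed... */
                                                                                  (void)w2;

                                                                                  /* ...but no EOF arrives: the dup keeps the write side of the open
                                                                                   * file description alive, so this read blocks indefinitely. */
                                                                                  char c;
                                                                                  printf("read returned %zd\n", read(p[0], &c, 1));
                                                                                  return 0;
                                                                              }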

                                                                              Powerful features, in traditional C-based Unix-like systems without static analysis and type systems, unfortunately require careful programming. epoll is a powerful feature, and the fact that it (like the rest of Unix/Linux) operates on open file descriptions means that it requires careful programming. The alternative implemented in Illumos is strictly less powerful (and also slower) which means it requires less careful programming. That, alone, doesn’t mean it’s better or worse.

                                                                      2. 4

                                                                        The biggest practical issue is that kqueue requires a file descriptor per file if you’re watching files, which can be problematic for some use cases. This, however, seems like a fixable problem to me and hardly a “mountain of technical debt” (not sure why this hasn’t been done yet actually, so maybe it’s hard and there is a bit of technical debt here).

                                                                        1. 1

                                                                          kqueue requires a file descriptor per directory if you’re watching files

                                                                          Hm. I thought inotify did too.

                                                                          1. 2

                                                                            kqueue is a fd for every file, not directory; I remembered that wrong 😅 You can run in to problems with this using Dropbox, syncthing, and such.

                                                                            1. 1

                                                                              My vague very-likely-incorrect memory was that with both of them you need to watch the directory above the file you care about. Otherwise I thought you didn’t necessarily see events when some other process rename()s a new file over the one you’re watching, because you were watching the inode of the replaced file?

                                                                              But there is a “deleted” event you can watch for with inotify so perhaps that covers it / tells you when the inode’s refcount changes.

                                                                              1. 8

                                                                                inotify is quite a problematic API. It is path based, but a *NIX filesystem is not a tree, it’s a DAG. If you have a file that has two hard links and you use inotify to watch one, then you won’t see modifications through the other. Using kqueue, you’ll see modifications to the file but at the cost of needing one file descriptor (which comes with a small chunk of kernel memory) for every single file in a tree. I think that macOS has a stronger reverse mapping in the VFS layer, which helps their equivalent API.

                                                                                I think you have a choice in designing such an API between accuracy and overhead. XNU’s FSEvents aims to be efficient and give false positives. Kqueue is accurate but (very) high overhead. inotify is efficient but gives false negatives. As a userspace developer, FSEvents is probably closer to what I want: the overhead for scanning a file and determining it hasn’t changed is lower than the overhead of missing an update and needing to rescan everything. For watching config file changes, kqueue is fine because although it’s a high overhead per file, the number of files is small.
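
                                                                                For the config-file case, a minimal kqueue sketch (one descriptor per watched file; the path is a made-up placeholder):

                                                                                #include <sys/types.h>
                                                                                #include <sys/event.h>
                                                                                #include <sys/time.h>
                                                                                #include <fcntl.h>
                                                                                #include <stdio.h>
                                                                                #include <unistd.h>

                                                                                int main(void)
                                                                                {
                                                                                    int kq = kqueue();
                                                                                    /* One open fd per watched file: cheap for a single config file,
                                                                                     * costly for a whole tree. */
                                                                                    int fd = open("/etc/example.conf", O_RDONLY);

                                                                                    struct kevent change;
                                                                                    EV_SET(&change, fd, EVFILT_VNODE, EV_ADD | EV_CLEAR,
                                                                                        NOTE_WRITE | NOTE_DELETE | NOTE_RENAME, 0, NULL);
                                                                                    kevent(kq, &change, 1, NULL, 0, NULL);   /* register once */

                                                                                    struct kevent ev;
                                                                                    while (kevent(kq, NULL, 0, &ev, 1, NULL) > 0) {
                                                                                        if (ev.fflags & NOTE_WRITE)
                                                                                            printf("file modified, reload\n");
                                                                                        if (ev.fflags & (NOTE_DELETE | NOTE_RENAME))
                                                                                            break;   /* the watched inode is gone; reopen by path and re-add */
                                                                                    }
                                                                                    close(fd);
                                                                                    return 0;
                                                                                }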

                                                                        2. 2

                                                                          As opposed to adding a new syscall in Linux? If so, I’d rather hear about why one approach is better/worse.

                                                                          Adding a new fd type can be self-contained in theory. It’s probably easier to add a new fd type in a kernel module than to add a new kqueue filter type (though it’s worth noting that Linux makes this hard by not exposing a load of the functions for fd-table manipulation to modules). The real problem with both is that they need to fit into a quite constraining existing shape. File descriptors require you to read or write them, when you may want an interface that isn’t stream-oriented. kqueue lets you define a richer vocabulary of verbs, but constrains your data to the shape of struct kevent, so if you want to communicate more data than a handful of words, it’s problematic.

                                                                          You can see these limitations in the integration of signal handling with kqueue and epoll. With kqueue, you can register for signals and you get a count of the number of times the signal has been raised. That’s it, because that’s all that fits into the kevent structure. It’s a nice API if you want a coalescing mechanism for signals, but it’s very limited. I believe it was originally added for AIO callbacks (I might be completely wrong, but I think EVFILT_AIO uses the same mechanism and EVFILT_SIGNAL is just there because most of the plumbing had been done for EVFILT_AIO already, so it was effectively free). In contrast, epoll requires you to use signalfd. This means that you get an epoll notification that there is something ready to read and then you must do multiple read system calls to get the signal info. You get more information from the signalfd approach, but you also need more system calls and you are using a stream-oriented interface to get fixed-size messages from the kernel.
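
                                                                          To make the contrast concrete, a rough sketch of the kqueue side (BSD-specific; the signalfd flow it replaces is noted in the comments):

                                                                          #include <sys/types.h>
                                                                          #include <sys/event.h>
                                                                          #include <signal.h>
                                                                          #include <stdio.h>

                                                                          int main(void)
                                                                          {
                                                                              /* With kqueue, everything arrives in the kevent itself, so a single
                                                                               * kevent() call both waits and reads the data. With signalfd you would
                                                                               * instead get an epoll readiness event and then read(2) one
                                                                               * struct signalfd_siginfo per signal. */
                                                                              signal(SIGUSR1, SIG_IGN);   /* EVFILT_SIGNAL still records ignored signals */

                                                                              int kq = kqueue();
                                                                              struct kevent kev;
                                                                              EV_SET(&kev, SIGUSR1, EVFILT_SIGNAL, EV_ADD, 0, 0, NULL);
                                                                              kevent(kq, &kev, 1, NULL, 0, NULL);

                                                                              for (;;) {
                                                                                  if (kevent(kq, NULL, 0, &kev, 1, NULL) > 0) {
                                                                                      /* kev.data is the coalesced count of deliveries since last report. */
                                                                                      printf("SIGUSR1 raised %lld time(s)\n", (long long)kev.data);
                                                                                  }
                                                                              }
                                                                          }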

                                                                          It’s interesting to consider as a thought experiment what would happen if you tried to add futex / _umtx_op support to both. On Windows, you can use WaitForMultipleObjects to wait for n mutexes and other I/O interfaces, but *NIX systems don’t tend to have that ability. Linux had futexfd in the 2.6 series and removed it because it was inherently racy. Just adding the file descriptor to epoll does not convey enough information to register for the event and specify the futex operation and existing value, but if you separate the wait into an operation on the file descriptor and the notification mechanism then it’s hard to get the required atomicity.

                                                                          In contrast, I think it would be very easy to fit this kind of notification into the kqueue interface. The kevent structure has enough space to specify the address, the value, and the operation. When you do the kevent system call, the kernel has everything that it needs to do the equivalent of a _umtx_op system call to atomically check the value and register for the event. There’s no reason that you couldn’t have an EVFILT_UMTX that would let you wait for multiple userspace mutexes / semaphores / whatever. I think you’d probably need to require that it used EV_ONESHOT, but that’s a fairly minor restriction.

                                                                          The root cause of this difference is that not every event source has a persistent kernel object associated with it. In the case of a futex, the lock object only exists (from the kernel’s perspective) as long as one or more threads are waiting on it. This is intentional: userspace can have as many mutexes as it has memory for and it only needs the kernel to pay attention to them when they’re blocking, so the amount of kernel state is significantly lower than for pure-kernel mutexes.

                                                                        1. 2

                                                                          Reminds me of one of my favorite man page BUGS entries, for indent(1), relevant to the coding style guide:

                                                                          BUGS

                                                                          indent has even more switches than ls(1).
                                                                          
                                                                          1. 4

                                                                            This article is puzzling.

                                                                            up to the 8.0.1 version, which was the latest version released under the NCSA license. From then all later LLVM versions have been released under the Apache 2.0 license, which couldn’t be included in OpenBSD.

                                                                            A compromise was thus inevitable, and LLVM 10.0.0 was imported into -current in August 2020.

                                                                              so that “couldn’t” was just subjective?!

                                                                            1. 4

                                                                              I was curious what the objections to the Apache 2.0 licence were as I generally consider it to be quite permissive. I found this info:

                                                                              The original Apache license was similar to the Berkeley license, but source code published under version 2 of the Apache license is subject to additional restrictions and cannot be included into OpenBSD. In particular, if you use code under the Apache 2 license, some of your rights will terminate if you claim in court that the code violates a patent.

                                                                              A license can only be considered fully permissive if it allows use by anyone for all the future without giving up any of their rights. If there are conditions that might terminate any rights in the future, or if you have to give up a right that you would otherwise have, even if exercising that right could reasonably be regarded as morally objectionable, the code is not free.

                                                                              In addition, the clause about the patent license is problematic because a patent license cannot be granted under Copyright law, but only under contract law, which drags the whole license into the domain of contract law. But while Copyright law is somewhat standardized by international agreements, contract law differs wildly among jurisdictions. So what the license means in different jurisdictions may vary and is hard to predict.

                                                                              https://www.openbsd.org/policy.html

                                                                              1. 1

                                                                                Well, “couldn’t” always implies a possibility not a certainty. Also, the compromise was related to a choice between GPL v3 and APL 2.0.

                                                                                1. -2

                                                                                  And they chose protecting patent holders over users of software who want the right to inspect and modify it? Cool cool. I’m always glad to see OpenBSD putting security first.

                                                                                  1. 5

                                                                                    What parts of OpenBSD can users not “inspect and modify?”

                                                                                    1. 1

                                                                                        The proprietary systems that are derived from OpenBSD, where as a user your security fixes come at the whims of a vendor.

                                                                                      1. 2

                                                                                        I can’t follow the connection you’re trying to make between OpenBSD’s licensing stance and users of proprietary systems being beholden to vendors for security fixes.

                                                                                          If the license were the critical piece of that puzzle, I’d expect to see the millions of Android users guerrilla patching vulns on their phones and not stuck beholden to Samsung, Sony, etc. After all, it’s a Linux kernel under the GPLv2…which the FSF says is for protecting the rights of users to inspect and modify the code.

                                                                              1. 22

                                                                                The fact that we even need to worry about sandboxing for looking at glorified text documents is embarrassing.

                                                                                1. 27

                                                                                    Your PDF reader also ought to be sandboxed; malicious PDF documents have been used to hack people.

                                                                                  Ideally, your ODT reader also ought to be sandboxed. There have been RCE bugs in LibreOffice where malicious documents could exploit people.

                                                                                  Reading untrusted user input is hard. Hell, even just font parsing is fraught with issues; Windows’s in-kernel font parser was a frequent target of bad actors, so Microsoft sandboxed it.

                                                                                  Sandboxing is almost always a good idea for any software which has to parse untrusted user input. This isn’t a symptom of “the web is too complex”; it’s a symptom of “the web is so widely used that it’s getting the security features which all software ought to have”.

                                                                                  The web is also too complex, but even if it was just basic HTML and CSS, we would want browsers sandboxed.

                                                                                  1. 2

                                                                                    Maybe the parent comment should be rewritten as, “The fact that we even need to worry about sandboxing for using untrusted input is embarrassing.” A lot of these problems would be solved if memory-safe languages were more widely used. (Probably not all of them, but a lot.)

                                                                                  2. 23

                                                                                    We need to worry about sandboxing for any file format that requires parsing, if it comes from an untrusted source and the parser is not written in a type-safe language. In the past, there have been web browser vulnerabilities that were inherited from libpng and libjpeg and were exploitable even on early versions of Mosaic that extended HTML 1.0 with the <img> tag. These libraries were written with performance as their overriding concern: when the user opens an image they want to see it as fast as possible and on a 386 even an optimised JPEG decoder took a user-noticeable amount of time to decompress the image. They were then fed with untrusted data and it turned out that a lot of the performance came from assuming well-formed files, an assumption that broke with other data.

                                                                                    The reference implementation of MPEG (which everyone shipped in the early ’90s) installed a SIGSEGV handler and detected invalid data by just dereferencing things and hoping that it would get a segfault for invalid data. This worked very well for catching random corruption but it was incredibly dangerous in the presence of an attacker maliciously crafting a file.

                                                                                    1. 8

                                                                                      when the user opens an image they want to see it as fast as possible and on a 386 even an optimised JPEG decoder took a user-noticeable amount of time to decompress the image.

                                                                                      Flashback to when I was a student with a 386 with 2MB of RAM and no math co-processor. JPGs were painful until I found an application that used lookup-tables to speed things up.

                                                                                      1. 6

                                                                                        The reference implementation of MPEG (which everyone shipped in the early ’90s) installed a SIGSEGV handler and detected invalid data by just dereferencing things and hoping that it would get a segfault for invalid data. This worked very well for catching random corruption but it was incredibly dangerous in the presence of an attacker maliciously crafting a file.

                                                                                        I think this tops the time you told me a C++ compiler iteratively read linker errors.

                                                                                        1. 2

                                                                                          We need to worry about sandboxing for any file format that requires parsing, if it comes from an untrusted source and the parser is not written in a type-safe language.

                                                                                          But the vulnerabilities in the web are well beyond this pretty low-level issue.

                                                                                          1. 4

                                                                                            Bad form to reply twice, but I realise my first reply was just assertion. Google Project Zero categorised critical vulnerabilities in Chrome and found that 70% of them were memory safety bugs. This ‘pretty low-level issue’ is still the root cause of the majority of security vulnerabilities in shipping software.

                                                                                            1. 3

                                                                                              They often aren’t. Most of them are still memory safety violations.

                                                                                          2. 21

                                                                                            The web is an application platform now. It wasn’t planned that way, and it’s suboptimal, but the fact that it is one has to be addressed.

                                                                                            1. 17

                                                                                              Considering web browsers to be “glorified text documents” is reductive. We have long passed the point where this is the priority of the web, or the intention of most of its users. One cannot just ignore that a system designed for one thing 30 years ago now has to deal with the implications of how it has changed over time.

                                                                                              1. 5

                                                                                                The fact that we even need to worry about sandboxing for looking at glorified text documents is embarrassing.

                                                                                                Web browsers are now a more complete operating system than emacs.

                                                                                                1. 4

                                                                                                  The fact that sandboxing is still a typical strategy, as opposed to basic capability discipline, is embarrassing. The fact that most programs are not capability-safe, let alone memory-safe, is embarrassing. The fact that most participants aren’t actively working to improve the state of the ecosystem, but instead produce single-purpose single-use code in exploitable environments, is embarrassing.

                                                                                                  To paraphrase cultural critic hbomberguy, these embarrassments are so serious that we refuse to look at them directly, because of the painful truths. So, instead, we reserve our scorn for cases like this one with Web browsers, where the embarrassment is externalized and scapegoats are identifiable. To be sarcastic: Is the problem in our systemic refusal to consider safety and security as fundamental pillars of software design? No, the problem is Google and Mozilla and their employees and practices and particulars! This is merely another way of lying to ourselves.

                                                                                                  1. 4

                                                                                                    It’s text documents from unknown/untrusted sources over a public network. If you knew exactly what the text document contained a priori you wouldn’t need to blindly fetch it over the internet, no?

                                                                                                    To me the issue is we’ve relaxed our default trust (behaviorally speaking) to include broader arbitrary code execution via the internet…but the need for trust was always there even if it’s “just text.”

                                                                                                  1. 1

                                                                                                    Only tangentially related, but I learned the other day that WSL2 (currently) requires Hyper-V, which is (currently) only supported on Windows 10 Pro edition. Due to a wontfix incompatibility between GHC and WSL1, this prevents me from installing Haskell / GHC on my Windows 10 machine.

                                                                                                    Software development on Windows has come a long way, but it’s corner cases like these that prevent me from switching from Ubuntu. Generally things “just work” on Linux, but it’s a constant fight to get them running on Windows.

                                                                                                    1. 1

                                                                                                      Have you even tried enabling it on Home? WSL2 uses the Hyper-V runtime, which is also supposed to be included in Home edition.

                                                                                                      1. 2

                                                                                                        Are you sure? I tried following the WSL2 upgrade instructions and ran into some errors about virtualization on the command line. I did some digging to try to resolve it, but there is no “Hyper-V” option available in the “Turn Windows features on or off” menu, and the “Upgrade to Windows 10” option in the Microsoft Store lists Hyper-V support as one of the benefits of upgrading.

                                                                                                        1. 2

                                                                                                          It looks like it should be available: https://docs.microsoft.com/en-us/windows/wsl/wsl2-faq

                                                                                                      2. 1

                                                                                                        I run WSL2 on Windows 10 Home with no problem.

                                                                                                        On one of my computers, though, I did have to restart into the bios* settings to enable it; then the option appeared in Windows. You might need to do that too.

                                                                                                        • i know it isn’t bios anymore but i forget the new term lol
                                                                                                        1. 1

                                                                                                          i know it isn’t bios anymore but i forget the new term lol

                                                                                                          UEFI?

                                                                                                      1. 2

                                                                                                        I’ve written my fair share of crap.c files to test things.

                                                                                                        1. 2

                                                                                                          But why?

                                                                                                          There is already a dhclient process. What’s the use of this new one?

                                                                                                          1. 6

                                                                                                            Maybe there is more; here is what I saw on my system so far:

                                                                                                            • dhclient actively waits for a DHCP lease, while dhcpleased does it in the background. This is especially noticeable during boot, since it saves you some time; YMMV, but in my case boot got ~2s faster.
                                                                                                            • It can serve multiple interfaces at the same time, whereas dhclient needs one process per interface.
                                                                                                            • It uses unveil, so only the necessary parts of the file system are visible to the 3 processes and access to all other parts is forbidden. The unprivileged process of dhclient doesn’t use unveil.
                                                                                                            • pledge is stricter for all 3 processes compared to dhclient; in particular, the process that parses untrusted input only has the ‘stdio’ pledge (see the sketch below).
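
                                                                                                            A minimal sketch of that lock-down pattern (OpenBSD-specific, not dhcpleased’s actual code, and the path is a made-up example) might look like:

                                                                                                            ```c
                                                                                                            /* Sketch of the OpenBSD unveil/pledge lock-down pattern,
                                                                                                             * not dhcpleased's code; the path below is made up. */
                                                                                                            #include <err.h>
                                                                                                            #include <stdio.h>
                                                                                                            #include <unistd.h>

                                                                                                            int main(void)
                                                                                                            {
                                                                                                                /* Expose one directory read-only; everything else disappears. */
                                                                                                                if (unveil("/etc/example-config", "r") == -1)
                                                                                                                    err(1, "unveil");
                                                                                                                if (unveil(NULL, NULL) == -1)          /* lock the unveil list */
                                                                                                                    err(1, "unveil lock");

                                                                                                                /* Start with stdio plus read-only file access... */
                                                                                                                if (pledge("stdio rpath", NULL) == -1)
                                                                                                                    err(1, "pledge");

                                                                                                                /* ...then, once setup is done and only untrusted parsing remains,
                                                                                                                 * tighten to "stdio": an exploited parser can no longer open files,
                                                                                                                 * exec programs, or create sockets -- the kernel kills it instead. */
                                                                                                                if (pledge("stdio", NULL) == -1)
                                                                                                                    err(1, "pledge stdio");

                                                                                                                puts("parsing untrusted input with almost no privileges");
                                                                                                                return 0;
                                                                                                            }
                                                                                                            ```
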
                                                                                                            1. 3

                                                                                                              Another thing I forgot to mention, and something I very much like about all software written by florian@: the daemon requires no configuration at all. You start it up and it does exactly what it should, without turning any knobs, writing config files, etc.

                                                                                                            2. 2

                                                                                                              dhclient(8) is just that: a client process. dhcpleased(8) is a daemon that will handle all interfaces on the host that are set for autoconfiguration of IPv4 addresses and acquire addresses for them.

                                                                                                              Other platforms do similar things, one example being dhcpcd: https://wiki.archlinux.org/index.php/Dhcpcd

                                                                                                            1. 4

                                                                                                              Gradle is one of those things where you have to work with it, spend 2 hours reading the documentation, come back to your file, and still have no clue what to do with it.

                                                                                                              There’s so much complexity in it, even for basic projects, that I just want to switch to an alternative every time I have to work with it…

                                                                                                              1. 1

                                                                                                                Or, as the author did, you get someone else to write the Gradle tasks and config then hope you never need to change it. :-)

                                                                                                                The part where the author talks about the “magic” parts of Gradle and multiple ways to do things really hit home. I’ve spent too many hours googling for examples or answers and only finding seemingly contradictory approaches or advice. On the surface it’s not always easy to spot equivalence when you’re mixing Groovy imperative approaches and the DSL declarative approaches.