1. 1

    Basically guards for the DOM. Also sounds a lot like the SafeStrings proposal I posted earlier.

    When are we going to get a language with gradual dependent typing, runtime checks, and OCaps? :P

    1. 2

      The paper claims that there is an implementation somewhere but doesn’t link to it. I’ve requested access.

      1. 1

        Yes! I wanted more work in this area. That and solvers are the only way to make it practical. I figure it’s going to be a mix of experts laying down supporting theories (e.g., on arrays, floats) with automation connecting them. After it starts working, we can ask that folks not get funded unless they use the solvers and test generators to speed up the work.

        1. 2

          Indeed! I wish the authors had included an analysis of how much this tool could boost overall productivity. It’s hard to judge how useful this would be as a guided proof assistant.

          72 of the 79 generated proofs were <= 10 proof steps, which is impressive given that each proof consists of ~100 commands! However, the commands within the 10-step proofs accounted for only ~20% of the ~12,000 total proof commands. Why not break the longer (and ostensibly more sophisticated) proofs into shorter ones (as CoqGym did)?

          That being said, CoqGym and PB9K1 reaching 50-200% of CoqHammer’s solve rate is impressive: CoqHammer’s backends have had years of commercial development (preceded by decades of research). The authors are probably busy improving their algorithms and planning more sophisticated analysis for later. Hopefully we’ll see more qualitative discussion in the future.

          1. 1

            Well, your comment tells me there are at least straightforward avenues for improvement: breaking things up. They should try to merge those ideas first just to get better results. That gets more funding and bides time for them to come up with the real leap a year or two later. ;) Also, I’d rather they apply it to more proof efforts to generalize it. The nature of compilers might make them easier to do this on than, say, OS kernels or data structures mutating graphs.

            One problem that may kick in: I hear tools like Coq change in ways where old proofs don’t work on newer versions. If true, that might make building a training set of exemplar proofs more difficult. If there’s no standardization, they might need converters that transform older proofs into the new representation. I don’t know if that would be just a transpiler or would require understanding the logic to the point that it’s improbable/impossible.

        1. 7

          Big fan of work like this. Good article. I’ll note a good and bad point.

          The good point is forcing user-space processes to donate their own resources to kernel calls. I’ve been promoting that wherever possible ever since I saw INTEGRITY-178B do it. Although I’m unsure whether they did, one might also want to apply that to partitioned subsystems for networking, filesystems, and graphics. Then, malicious code usually only DDoSes or leaks within its own partition, thanks to fewer shared resources.

          The bad point is the security vs performance claim. We default on security and performance being inverse for a reason. In many situations, you get better performance through resource sharing, increased coupling, and/or removing checks. Each of these can create security vulnerabilities. Additionally, layered systems operating on potentially-malicious inputs might have several levels of somewhat-redundant checks covering for how data that was safe early on might become malicious in the middle. Finally, monitoring costs something, too.

          1. 7

            The bad point is the security vs performance claim. We default on security and performance being inverse for a reason. In many situations, you get better performance through resource sharing, increased coupling, and/or removing checks. Each of these can create security vulnerabilities. Additionally, layered systems operating on potentially-malicious inputs might have several levels of somewhat-redundant checks covering for how data that was safe early on might become malicious in the middle. Finally, monitoring costs something, too.

            Their point is that you won’t see adoption without competitive performance. The first generation of microkernels dropped user-space drivers because their IPC was too slow [1], so the L4 family [2] pays special attention to IPC overhead. seL4 still brings some critical operations into kernel space (timer drivers and scheduling) despite the verification overhead.

            FWIW, the same holds true regarding usability and security: if you don’t nail usability, then users will circumvent the security protocols.

          1. 5

            Our recent work on [side channel] protection indicates that we can solve this problem in seL4 by temporally or spatially partitioning all shared hardware resources; we partition even the kernel. This assumes that the hardware manufacturers get their act together and provide mechanisms for resetting all time-shared resources (and I’m working with the RISC-V Foundation to ensure that RISC-V does).

            This is why we need Free and Open Source systems, so that we can solve problems collectively. Closed shops get away with it because they hide their terrible code in opaque binary blobs. Those sins are usually the result of short-sighted management, and FOSS has a way of forcing companies to do better. From Dave Airlie’s rejection of the initial AMDGPU driver (which had a HAL):

            There have been countless requests from various companies and contributors to merge unsavoury things over the years and we’ve denied them. They’ve all had the same reasons behind why they couldn’t do what we want and why we were wrong, but lots of people have shown up who do get what we are at and have joined the community and contributed drivers that conform to the standards.

            Here’s the thing, we want AMD to join the graphics community not hang out inside the company in silos. We need to enable FreeSync on Linux, go ask the community how would be best to do it, don’t shove it inside the driver hidden in a special ioctl. Got some new HDMI features that are secret, talk to other ppl in the same position and work out a plan for moving forward.

            1. 9

              This unfortunately doesn’t cover current Thunderbolt 3 (and maybe soon-to-be USB 4.0) cables. They use the same USB-C connector but have their own range of capabilities, and they add their own confusion to the mix by not clearly identifying which cables support what.

              For Thunderbolt on macOS, you get a “Cannot Use Thunderbolt Accessory” notification when you plug in a device that’s not working properly, but there’s no additional information on why it’s not working or any indication of whether it’s due to a cabling issue or other hardware failure.

              1. 5

                The whole situation is a total catastrophe.

                1. 2

                  The way I see it, there are two dimensions: connectors and capabilities. If we want to support each capability on each connector (both ends of the wire), we’ll need to support the whole 2-D space, obviously. Perhaps the naming could be improved, but I really don’t see any problems with the number of combinations out there. Unless people are okay with losing compatibility (physical/software), which everyone is, until things break for them.

                2. 1

                  Maybe USB 4 will help clarify the issue: force all USB-4 cables to be Type C and limit the varieties to those with or without power delivery. Then you just have to make sure it’s a USB-4 cable, no more googling for "USB 3.1" "gen 2" "5 Amp"|"5A".

                1. 5

                  What I appreciate most of all is that nobody apparently thought about how to design USB-C plugs so they didn’t slide out.

                  1. 3

                    Sweet baby Jesus why would you want that? Personally I’m annoyed at how difficult it is to pull out a USB-C compared to the (now) old-fashioned MagSafe. When it’s finally time to replace the wife’s old laptop with whatever’s current at the time, I fear for its life.

                    1. 2

                      I loved MagSafe connectors and thought them up years before Apple introduced them: magnets get rid of mechanical wear-and-tear while making it easier to plug in! I’m guessing they weren’t used in USB-3 because of the connector size and magnetic interference.

                    2. 2

                      It seems to me that they did. My regular phone charger slides out way too easily, but my laptop charger (when either plugged into my laptop, or my phone) is quite good at staying in until I try and pull it out.

                      1. 3

                        Have you checked your phone’s usb-c port and maybe tried cleaning it with a toothpick? :)

                        Not sure this is intentional, but with my Nexus 5X the lint seems to settle in such a way that the usb-c cable slides out with the slightest touch once enough has accumulated. The connection is never broken in a way you’d notice; it’s just the mechanical “lock” that goes.

                      2. 2

                        I’ve personally always experienced that issue more often with micro-A than I have with any of my devices with C.

                        1. 1

                          I’ve been so annoyed by this that I’m pondering whether USB-C cables can be used for electronics which don’t get a gentle treatment all of the time (badges, smaller electronic cards, …).

                        1. 6

                          This describes the different compliant varieties, but to make things yet more complicated, it sounded like for some time a lot of manufacturers were producing incorrectly-terminated cables. Benson Leung was naming-and-shaming them for a while, but I don’t know if that kind of scrutiny is necessary anymore. http://bensonapproved.com redirects and I can’t seem to access that site anymore.

                          1. 6

                            For the record, Benson Leung is the author of this very post.

                            1. 2

                              I bought a cord he approved, and it was a PoS. I think they got a bump from his endorsement and then cut quality to reap the profits. He’s only one person and can’t continually test cables at his own expense; the USB licensors really need to implement some sort of QA process.

                            1. 8

                              I’ve said it before and I’ll say it again: ZFS should be the default on all Linux distros. It’s in a league of its own, and makes all other existing Linux filesystems irrelevant, bizarre licensing issues be damned.

                              1. 7

                                I use ZFS and love it. But I disagree that ZFS should be the default as-is. It requires a fair bit of tuning; for non-server workloads, the ARC in particular. ZFS does not use Linux’s buffer cache, and while the ARC size adapts, I have often seen on lower-memory machines that the ARC takes too much memory at a given point, leaving too little for the OS and applications. So most users would want to tune zfs_arc_max for their particular workload.
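
                                For example, a minimal sketch of capping the ARC (the 4 GiB figure is just an arbitrary illustration, not a recommendation; tune it to your workload):

                                ```
                                # Cap the ARC at 4 GiB at runtime (the value is in bytes).
                                echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

                                # Persist the cap across reboots as a module option.
                                echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
                                ```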

                                I do think ZFS should be available as an option in all Linux distributions. It is simply better than the filesystems that are currently provided in the kernel. (Maybe bcachefs will be a competent alternative in the future.)

                                1. 2

                                  I agree.

                                  I remember installing FreeBSD 11 once (with root on ZFS) because I needed a machine remotely accessible via SSH to handle files on an existing disk with ZFS.

                                  No shizzle, FreeBSD defaults, the machine had 16G of RAM, and during an hours-long scp run, the ARC decided to eat up all the memory, triggering the kernel into killing processes… including SSH.

                                  So I lost access, had to restart scp (no resume, remember), etc. This is a huge showstopper and it should never happen.

                                  1. 1

                                    That seems like a bug that should be fixed. I don’t see any reason why it should prevent ZFS from being the default, though.

                                  2. 1

                                    That’s definitely something to consider; however, Apple has made APFS (ZFS-inspired) the default on macOS, so there’s got to be a way to make it work for ZFS + Linux desktop too. ZFS is all about making things work without you having to give it much thought. Desktop distros can pick reasonable defaults for desktop use, and ZFS could possibly make the parameter smarter somehow.

                                  3. 2

                                    I think the licensing issue is the primary problem for Linux distros.

                                    1. 1

                                      I agree on technical superiority. What about the Oracle threat, given that its owner pulled off that API trick? Should we take the risk of all owing Oracle’s lawyers money in some future case? Or rush to implement something different that they don’t control, with most of its strengths? I think the latter makes the most sense in the long term.

                                      1. 3

                                        Oracle is not a problem, as the ZFS license is not being violated – it is the Linux license that is.

                                        1. 1

                                          “Oracle is not a problem, as the ZFS license is not being violated”

                                          That’s a big claim to make in the event large sums of money are ever involved. Oracle threw massive amounts of lawyers at Google, with the result that APIs were suddenly a thing they could copyright. Nobody knew that before. With enough money and malicious intent, it became a thing that could affect FOSS developers or anyone building on proprietary platforms. What will they do next?

                                          I don’t know. Given they’re malicious, the safest thing is to not use anything they own or might have patents on. Just stay as far away from every sue-happy party in the patent and copyright spaces. Oracle is a big one that seeks big damages from its targets on top of trying to rewrite the law in its cases. I steer clear of their stuff. We don’t even need it, either. It’s just more convenient than the alternatives.

                                          1. 8

                                            The CDDL, an OSI-approved open source license, includes both a copyright and patent grant for all of the code released by Sun (now Oracle). Oracle have sued a lot of people for a lot of things, but they haven’t come after illumos or OpenZFS, and there are definitely companies using both of those bodies of software to make real money.

                                            1. 2

                                              I think you’re missing the implications of the fact that they effectively rewrote the law in the case I referenced. If they can do that, it might not matter what their agreements say if it’s their property. The risk might be low enough that it never plays out. One just can’t ever know when depending on legal provisions from a malicious party that tries to rewrite laws in its favor with lobbyists and lawyers.

                                              And that party sometimes succeeds, unlike basically everyone doing open source and free software, who seem to barely enforce their agreements and/or be vulnerable to patent suits in the case of the permissive licenses. Plus, could the defenders even afford a trial at current rates?

                                              I bet 10 years ago you wouldn’t have guessed a mobile supplier using an open-ish platform would be fighting to avoid handing over $8 billion to an enterprise-focused database company. Yet untrustworthy dependencies let that happen. And we got lucky that it was a rich company that depended on OSS/FOSS stuff defending. The rulings could’ve been worse for us if it wasn’t Google.

                                              1. 6

                                                Seeing as Sun gave ZFS away before Oracle bought it, Oracle would have a LOT of legal wackiness to get the CDDL license revoked somehow. But for the sake of argument, let’s assume they somehow manage to get it invalidated, go nuts, and decide to try to make everyone currently using ZFS pay bajillions of dollars for “their” tech. Laws would have to change significantly for that to happen, and with such a significant change in current law, there is basically zero chance it would be retroactive to the moment you started using ZFS, so worst case you’d have to pay from the time of the law change. That is, if you didn’t just move off of ZFS after the law changed and be out zero dollars.

                                                Also, the OSS version of ZFS is so significantly different from Oracle’s version that they are sort of kissing cousins at best anymore. ZFS has been CDDL-licensed since 2005, so there’s a long history of divergence from the Oracle version. I think Oracle would have a VERY hard time getting the OSS version back under the Oracle banner(s), even with very hypothetical significant law changes.

                                                I’m in favour of things competing against ZFS, but currently nothing really does. BTRFS tries, but its stability record is pretty miserable for anything besides the simplest workloads. ZFS has had wide production usage since 2005. Maybe in another 5 or 10 years we will have a decent, stable competitor to some of ZFS’s feature-sets.

                                                But regardless, if you are a large company with something to lose, your lawyers will be the ones advising you about using ZFS or not, and Canonical’s lawyers clearly decided there was nothing to worry about, along with Samsung’s (who own Joyent, the people behind illumos). There are also many other large companies that have bet big on Oracle having basically zero legal leg to stand on.

                                                Of course, the other side of the coin is the ZFS <-> Linux marriage, but that’s easy: just don’t run ZFS under Linux, or use the Canonical-shipped version and let Canonical take all the legal heat.

                                                1. 2

                                                  Best counterpoints so far. I’ll note this part might not be as strong as you think:

                                                  “and Canonical’s lawyers clearly decided there was nothing to worry about, Along with Samsung(who own Joyent, the people behind Illumos)”

                                                  The main way companies dodge suits is to have tons of money and patents themselves to make the process expensive as hell for anyone who tries. Linux companies almost got patent-sued by Microsoft. IBM, a huge patent holder, stepped up saying they’d deal with anyone that threatened it. They claimed they were putting a billion dollars into Linux. Microsoft backed off. That GPL companies aren’t getting sued made Canonical’s lawyers comfortable, but it’s not an actual assurance. Samsung is another giant patent holder with big lawyers. It takes an Apple-sized company to want to sue them.

                                                  So, big patent holders and the projects they protect are outliers. That might work to ZFS’s advantage here, especially if IBM used it. They don’t prove what will happen with smaller companies, though.

                                                  1. 2

                                                    I agree with you in theory, but not in practice, because of the CDDL (which ZFS is licensed under). This license explicitly grants a “patent peace”; see: https://en.wikipedia.org/wiki/Common_Development_and_Distribution_License

                                                    I know most/many OSS licenses sort of wimp out on patents and ignore the problem; the CDDL doesn’t. Perhaps it could have even stronger language, and there might be some wiggle room for some crazy lawyering… I just don’t really see Oracle being THAT crazy. Oracle, being solely focused on $$$$, would have to see some serious money bags to go shake loose; I doubt they would ever bother going after anyone not the size of a Fortune 500, the money just isn’t there. Google has giant bags full of money they don’t even know what to do with, so Oracle trying to steal a few makes sense. :P

                                                    Oracle going after Google makes sense knowing Oracle, and it was, like you said, brand-new lawyering, trying to create copyrights out of APIs. Patents are not remotely new, so some lawyer for Oracle would have to dream up some new way to screw up the laws to their advantage. Possible, sure, but it would be possible for any other crazy lawyer to go nuts here (wholly unrelated to ZFS or even technology); it’s not an Oracle-exclusive idiocy. Trying to avoid unknown lawyering that isn’t even theoretical at this point would be sort of stupid, I would think… but I’m not a lawyer.

                                                    1. 1

                                                      “I know most/many OSS licenses sort of wimp out on patents and ignore the problem, CDDL doesn’t.”

                                                      That would be reassuring on the patent part.

                                                      “Possible sure, but it would be possible for any other crazy lawyer to go nuts here (wholly unrelated to ZFS or even technology), it’s not an Oracle exclusive idiocy. Trying to avoid unknown lawyering”

                                                      Oracle was the only one to flip software copyright on its head like this. So, I don’t think it’s an any company thing. Either way, the threat I’m defending against isn’t unknown lawyering in general: it’s unknown lawyering of a malicious company whose private property I may or may not depend on. When you frame it that way, one might wonder why anyone would depend on a malicious company at all. Avoiding that is a good pattern in general. Then, the license negates some amount of that potential malice for a great product with unknown, residual risk.

                                                      I agree the residual risk probably won’t affect individuals, though. An Oracle-driven risk might affect small to mid-sized businesses depending on how it plays out. Good news is swapping filesystems isn’t very hard on Linux and the BSDs. ;)

                                            2. 4

                                            AFAIK, it’s the GPL that’s being violated. But I’m really tired, and the SFC does mention something about Oracle suing, so 🤷.

                                              Suing based on the use of works derived from Oracle’s CDDL sources would be a step further than the dumb Google Java lawsuit because they haven’t gone after anyone for using OpenJDK-based derivatives of Java. Oracle’s lawsuit-happy nature would, however, mean that a reimplementation of ZFS would be a bigger target because it doesn’t have the CDDL patent grant. Of course, any file system that implements one of their dumb patents could be at risk….

                                              I miss Sun!

                                        2. 1

                                          What does ZFS have that is so much better than btrfs?

                                          I’m also not sure these types of filesystems are well suited for databases which implement their own transactions and COW, so I’m not sure I would go as far as saying they are all irrelevant.

                                          1. 11

                                      ZFS is extremely stable and battle-tested. While that’s not in itself what makes it a better filesystem, it makes it an extremely safe option when what you’re looking for is something stable to keep your data consistent.

                                      It is also one of the most cross-platform filesystems: Linux, FreeBSD, macOS, Windows, illumos. It has a huge amount of development behind it, and as of recently the community has come together significantly across the platforms. Being able to export your pool on FreeBSD and import it on Linux or another platform makes it a much better option if you want to avoid lock-in.
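
                                      For instance, moving a pool between OSes is just an export and an import (a sketch assuming a pool named “tank” and compatible feature flags on both sides):

                                      ```
                                      # On the old machine (say, FreeBSD):
                                      zpool export tank

                                      # After moving the disks to the new machine (say, Linux):
                                      zpool import tank
                                      ```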

                                            Additionally, the ARC

                                            Problems with btrfs that make it not ready:

                                            1. 0

                                              If I don’t use/want to use RAID5 then I don’t see the problem with btrfs.

                                              1. 3

                                                I ran btrfs in production on my home server for ~3-4 years, IIRC. If you want to use btrfs as a better ext4, e.g. just for the compression and checksumming and maybe, maybe snapshotting, then you’re probably fine. If you want to do anything beyond that, I would not trust it with your data. Or at the very least, I wouldn’t trust it with your data that’s not backed up using something that has nothing to do with btrfs (i.e. is not btrfs snapshots and is not btrfs send/receive).

                                                I had three distinct crashes/data corruption problems that damaged the filesystem badly enough that I had to back up and run mkfs.btrfs again. These were mostly caused by interruptions/power failures while I was making changes to the fs, for example removing a device or rebalancing or something. Honestly I’ve forgotten the exact details now, otherwise I’d say something less vague. But the bottom line is that it simply lacks polish. And mind you, this is from the filesystem that is supposed to be explicitly designed to resist this kind of corruption. I know at least the last case of corruption I had (which finally made me move to ZFS) was obviously preventable but that failure handling hadn’t been written yet and so the fs got into a state that the kernel didn’t know how to handle.

                                            2. 3

                                        well, I don’t know about better, but ZFS has the distinct disadvantage of being an out-of-tree filesystem, so it can and will break depending completely on the whims of kernel development. How anyone can call this stable and safe for production use is beyond me.

                                              1. 3

                                          I think the biggest argument is mature implementations used by large numbers of people. That catches lots of common and uncommon problems. In reliability-focused filesystems, reliability that is field-proven and then constantly maintained matters more to me than about anything. The only reason I don’t use it is that it came from Oracle, with all the legal unknowns that can bring down the line.

                                                1. 3

                                                  When you say “Oracle”, are you referring to ZFS or btrfs? ;)

                                                  1. 1

                                                    Oh shit! I didn’t know they designed both! Glad I wasn’t using btrfs either. Thanks for the tip haha.

                                                2. 2

                                                  On a practical level, ZFS is a lot more tested (in Solaris/Illumos, FreeBSD, and now Linux); more different people have put more terabytes of data in and out of ZFS than they seem to have for btrfs. This matters because we seem to be unable to build filesystems that don’t run into corner cases sooner or later, so the more time and data a filesystem has handled, the more corner cases have been turned up and fixed.

                                                  On a theoretical level, my personal view is that ZFS picked a better internal structure for how its storage is organized and managed than btrfs did (unless btrfs drastically changed things since I last looked several years ago). To put it simply, ZFS is a volume manager first and then a filesystem manager second (on top of the volumes), while btrfs is (or was) the other way around (you manage filesystems and volumes are a magical side effect). ZFS’s model does more (obvious) violence to Linux IO layering than I think btrfs’s does, but I strongly believe it is the better one and gives you cleaner end results.
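
                                            That layering shows up directly in the commands; a minimal sketch (the pool/dataset names are made up for illustration):

                                            ```
                                            # Volume management first: build a pool out of raw devices.
                                            zpool create tank mirror /dev/sda /dev/sdb

                                            # Filesystems second: carve datasets out of the pool.
                                            zfs create tank/home
                                            zfs create -o compression=lz4 tank/backups
                                            ```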

                                                3. 0

                                                  Why would I want to run ZFS on my laptop?

                                                  1. 1

                                                    Why wouldn’t you want to run it on your laptop?

                                                1. 5

                                              A more accurate sub-heading would be that “no method” of verification caught Selfie, including formal verification. Cryptographers’ review, the secure coders, tests, formal verification… nothing that was applied caught it. That way, it doesn’t give the impression that formal verification uniquely failed; it just elaborates on one of many failures.

                                              “At first glance, Selfie does indeed seem to fly in the face of the unprecedented effort to formally verify TLS 1.3 as secure.”

                                              I prefer to be consistent. Teams like seL4’s say the verification shows that the implementation meets the spec, not that it’s absolutely secure. If we’re echoing this, we might say that some or many people thought TLS was secure due to formal verification. However, it was only verified against a spec that addressed the attacks we understood. New attacks require new specs and tools, with changes to the implementation. That simple.

                                                  1. 3

                                                I prefer to be consistent. Teams like seL4’s say the verification shows that the implementation meets the spec, not that it’s absolutely secure. If we’re echoing this, we might say that some or many people thought TLS was secure due to formal verification. However, it was only verified against a spec that addressed the attacks we understood. New attacks require new specs and tools, with changes to the implementation. That simple.

                                                    The author devoted the last third of the article to explaining this.

                                                    1. 2

                                                      One of the top priorities on my to-do list is a blog post reframing formal verification in the high assurance toolbelt. Right underneath that is reframing capabilities as automatically generated, finely grained sandboxing.

                                                      1. 1

                                                        That fits a lot of uses for capabilities. Developers can learn advanced stuff later on. Is there a brief summary of the reframing for formal verification? And do submit it here when done. :)

                                                    1. 1

                                                  Yikes, they want you to embed arbitrary JS code. I have designed a commenting system with ironclad security (it’s impossible to perform SQL injection, very difficult to compromise the parser, and JS exploits require a browser zero-day) and graceful degradation for no-JS users.

                                                      Guess I should get around to that.

                                                      1. 3

                                                        From the top comment:

                                                        If the kernel is going to impose its view of what a container is, the question becomes which container construction should it be? The obvious answer might be what docker/kubernetes does, but some of those practices (like no user namespace, pod shared ipc and net namespace) are somewhat incompatible with what LXC does and they’re definitely wholly incompatible with other less popular container use cases, like the architecture emulation containers I use to maintain cross arch builds of my projects.

                                                    As someone who has worked on a popular OSS project in the past, I find it incredibly frustrating when someone doesn’t bother to check whether such changes have been rejected in the past. Instead they just write a patch and act like jerks when it is rejected: “I wasn’t party to [the prior consensus against this change] and don’t feel particularly bound by it”.

                                                        I know you’re upset that all your effort has been wasted, but maybe you should have spent 10 minutes on IRC first. Now core developers have to waste their time rehashing old issues instead of writing code 😡.

                                                        1. 1

                                                      Also, the rejected proposal was their own. It seems like basic intellectual honesty to at least refer to the earlier rejection and state what, if anything, is different this time around.

                                                        1. 2

                                                          They conclude that new Spectre variants are unavoidable due to the performance benefits of hyperthreading, but I saw an opinion piece by a formal methods researcher outlining the changes that are required to processor specifications to prevent future Spectre attacks. However, I can’t find it now. Anyone know of the piece I’m speaking of?

                                                          1. 1

                                                            I want this API. I’m tired of docker and the mess it has created.

                                                            1. 1

                                                          What mess has it created? I’m a bit out of the loop with the whole container world, but we are evaluating things for work and I would like to learn more about the current state of containers.

                                                              1. 4

                                                            It’s the least stable piece of system software I’ve ever used. If you’re trying to use containers, then I recommend LXC. I wrote about some of the issues here: https://www.scriptcrafty.com/2018/01/impulse-response-programming/

                                                                1. 2

                                                              Docker is implemented using a daemon with root privileges … there’s a reason all the adults in the room hate how Docker was originally built.

                                                                  1. 1

                                                                    Didn’t docker originally use LXC? The OP comment seems to be advocating LXC in a later reply as an alternative to docker. IIRC, the last time I tried to configure LXC to run in ‘unprivileged mode’, it was a major pain in the butt (but maybe this has improved?) What alternatives do you suggest?
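
                                                                For context, the setup I remember fighting with was roughly this id-map plumbing; a sketch (the 100000/65536 range is just the conventional example, and lxc.idmap assumes LXC >= 2.1; older releases spell it lxc.id_map):

                                                                ```
                                                                # /etc/subuid and /etc/subgid: delegate a uid/gid range to your user
                                                                # myuser:100000:65536

                                                                # ~/.config/lxc/default.conf: map container root onto that range
                                                                lxc.idmap = u 0 100000 65536
                                                                lxc.idmap = g 0 100000 65536

                                                                # /etc/lxc/lxc-usernet: allow the user a few veth devices on a bridge
                                                                # myuser veth lxcbr0 10
                                                                ```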

                                                              1. 2

                                                                  This article is overflowing with remarkable statements, but this one is just amazing:

                                                                Front-end development is complex because design is complex.

                                                                  Front-end development is complex because HTML grew via tug-of-war processes out of a heavily restricted SGML dialect with bizarre semantic/presentational crossover, CSS is straight-up insane (apart from grids, which took >6 years to get implemented sufficiently broadly to have any effect, and its inheritance model and arcane platform incompatibilities are still pretty nuts), and JS, though evolving from its original flat-out craziness with each iteration, is increasingly reliant on an increasingly byzantine toolchain to allow it to do so, and suffers heavily from a sprawling, fragmented ecosystem perpetuated by extreme quality differentials and rampant wheel-reinvention, partly because people just like reinventing wheels but partly also because square, triangular, ovoid and zig-zag wheels are really bad for driving on. Implementing complex design on top of all that multiplies complexity, sure, but it’s hardly the cause.

                                                                1. 3

                                                                    Came here to share a similar sentiment: HTML, CSS, and JS are adopted children with lots of baggage. However, I disagree that this negates the author’s point: “solutions” to front-end development often involve an abstraction over one or more of the three technologies. What separates this from the standard “We can solve any problem by introducing an extra level of indirection …except for the problem of too many levels of indirection.” is the extreme number of different abstractions, the accelerated bit-rot, and the lack of interoperability.

                                                                    The root problem is that HTML, CSS, and JS are evolving technologies, which slowly erode whatever advantages a given abstraction has to offer. Polymer (which was sold as a library) went through 3 major releases in three years and is now deprecated entirely. Angular has gone from version 2 to version 7 in two years. Frameworks have turned to micro-libraries to try and cope, but now a given “Angular” or “React” project might use half a dozen different technologies that make it incompatible with another “Angular” or “React” project.

                                                                    Even if you try to hew closely to the original language, small differences between a proposal and the eventual standard will result in migraines for any large project. Babel (stupidly) transpiling import to require is a major reason Node.js adopted .mjs for all JS modules. TypeScript began as a straightforward superset of JavaScript, but JavaScript’s enhancements have slowly eroded compatibility and are often superior to what TypeScript is offering. I don’t see how TypeScript can adopt JavaScript’s class members and other features without major structural changes for itself and all downstream projects (like Angular).
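
                                                                    To make the import/require mismatch concrete, here’s the rough shape of the transformation (a sketch of typical Babel-style output, not its exact emitted code):

                                                                    ```javascript
                                                                    // ES module source: a static import with live, read-only bindings.
                                                                    import { foo } from './util';
                                                                    export function bar() { return foo(); }
                                                                    ```

                                                                    ```javascript
                                                                    // Roughly what a CommonJS transform emits: a synchronous require()
                                                                    // plus property copies, which have different binding and timing semantics.
                                                                    "use strict";
                                                                    Object.defineProperty(exports, "__esModule", { value: true });
                                                                    exports.bar = bar;
                                                                    var _util = require('./util');
                                                                    function bar() { return (0, _util.foo)(); }
                                                                    ```

                                                                    Since a consumer can’t reliably tell the two module kinds apart at load time, Node ended up needing an out-of-band signal like the .mjs extension.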

                                                                  I think the current landscape of frontend tools is the result of engineering practices at Facebook and Google: they have siloed technology stacks, large amounts of natural code churn, a monetary incentive to shave milliseconds of load time off of their websites, and armies of developers to service that technical debt. The rest of us want to be able to share code (god forbid data binding work across frameworks!) and not have to worry about major refactors every 6 months.

                                                                  1. 1

                                                                    All very good points, particularly the last para. Sometimes “the rest of us” get caught in the cross-fire. I mean, on the one hand obviously it’s amazing that we get free open-source tools to use (and that extends to Kubernetes et al on the back-end too), and I think React is pretty great in a lot of ways, but the pace of changes makes maintaining them complex and time-consuming, and in the case of e.g. Kubernetes it’s so heavily engineered that for “the rest of us” who don’t have planetary-scale deployments it can become massive overkill, yet it’s where everyone heads and piles all their support because Google are doing it. (I realise that’s kind of an extreme statement because there are plenty of people with larger deployments who benefit from it and even a small deployment can benefit from some features, but I think the general point stands.)

                                                                1. 11

                                                                      a chipmonger kills its webshit propaganda after some employees complain

                                                                  If you can easily n-gate a submission, maybe it shouldn’t be here.

                                                                  Spam about ad campaigns and counterreactions is not a core value prop of lobsters. :(

                                                                  1. 18

                                                                    on the other hand, this story is currently on the front page with an above-median vote score, and the other riscv-basics story is the highest voted story currently on the front page, so evidently the users of lobsters found both relevant to their interests.

                                                                    Yours is some low-quality gatekeeping.

                                                                    1. 24

                                                                            News is the mindkiller. Humans are hardwired to be really interested in new things regardless of their utility, usefulness, or healthiness–you need look no further than the 24-hour news cycle or tabloids or the HN front page to observe this phenomenon.

                                                                            If you look at any given submission, it has a bunch of different things it’s “good” at: good in terms of educating about hardware, good in terms of talking about the math behind some compiler optimization, good in whatever. Submissions that are news are good primarily in terms of how new they are, and have other goodness that is tangential if it exists at all. The articles may even have a significant off-topic component, such as politics or gossip or advertising.

                                                                      This results in the following pathologies:

                                                                      • Over time, if a community optimizes for news, they start to normalize those other components, until the scope of the submissions expands to encompass the formerly off-topic material…and that material is usually something that is at best duplicated elsewhere and at worst pure flamebait.
                                                                      • The industry we’re in specializes in spending loads of money on attractive clickbait and advertising presenting as news, and so soon the submissions become flooded with low-quality crap or advertising that takes up community cycles to process without ever giving anything substantial in return.
                                                                      • The perceived quality of the site goes down for everybody and the community fragments, because news is available elsewhere (thus, the utility of the site is diminished) and because the valuable discussion is taken up with nitpicking news stories. This is why, say, c2 wiki is still around and useful and a lot of news sites…aren’t.

                                                                      What you dismiss as gatekeeping is an attempt to combat this.

                                                                      EDIT:

                                                                            A brief note–your example of the two ARM articles being on the front page illustrates the issue. Otherwise intelligent lobsters will upvote that sort of stuff because it’s “neat”, without noting that with everybody behaving that way we’ve temporarily lost two good spots for technical content–instead, we have more free advertising for ARM (all press is good press) and now slightly more precedent for garbage submissions and call-response (news thing, rebuttal to news thing, criticism/analysis of rebuttal). It’s all so tiresome.

                                                                      1. 5

                                                                        ugggh, you leveled up my brain regarding what belongs on lobste.rs. “I like this!” is not only not necessarily an argument ‘for’, it is sometimes an argument ‘against’. Mind-blown.

                                                                        1. 2

                                                                          I bookmarked and often shared this post since it seemed like a nice set of guidelines. Had a lot of votes in favor, too.

                                                                          1. 1

                                                                            I thought we concluded that votes in favour represent anti-signal.

                                                                            1. 1

                                                                              Haha. Depends on the context. They’re important for meta threads since it can determine site’s future.

                                                                        2. 5

                                                                          This is interesting news, it’s not just drama or clickbait. The big chip makers have maintained an oligopoly through patents on abstract math: an ISA. It’s insane that innovation can only come from a few big players because of their lawyers. RISC-V is the first serious dent that the open source movement has been able to make in this market because (unlike ARM, OpenPOWER, and OpenSPARC) it has a serious commitment to open source and it is technologically superior.

                                                                          ARM will be the first player to fall to RISC-V because they have a monopoly on lower end chips. Samsung, Qualcomm, NVidia, Apple, Google, etc. are all perfectly capable of making competitive chips without having to pay a 1% tax to ARM. We are already seeing this with Western Digital’s switch to RISC-V; there is no advantage to paying ARM for simple micro-controllers … which is a huge portion of ARM’s business.

                                                                          That they are resorting to FUD tactics shows that ARM execs know this. People interested in the larger strategic moves, like myself, find this article about how their FUD tactics backfired very interesting. I would appreciate it if you didn’t characterize this sort of news as spam and the people who follow how big industry players are behaving as just being into drama.

                                                                          1. 6

                                                                            With respect, a good deal of your post is kremlinology.

                                                                            That they are resorting to FUD tactics shows that ARM execs know this.

                                                                            The ARM execs cannot be guaranteed to “know” anything of the sort–it’s more likely that there is a standard playbook to be run to talk about any competing technology, RISC-V, OSS, or otherwise. Claiming that “oh ho obviously they feel the heat!” is speculation, and without links and evidence, baseless speculation at that.

                                                                            the people who follow how big industry players are behaving as just being into drama.

                                                                            The people who “follow” big industry players are quite usually just people that want to feel informed, and are quite unlikely to be anybody with any actual actions available given this information. Thus, just because something is interesting to them doesn’t make it necessarily useful or actionable.

                                                                            characterize this sort of news as spam

                                                                            Again, all news is spam on a site with historically more of a bent towards information and non-news submissions. Further, it’s not like this hasn’t been covered extensively elsewhere, on Slashdot and HN and Gizmodo and elsewhere. It’s not like it isn’t being shown on many fronts.

                                                                            Please understand that while you might have an interest in this specific case, if all lobsters follow this idea, it trashes the site.

                                                                            1. 2

                                                                              With respect, a good deal of your post is kremlinology.

                                                                              I’m not allowed to infer basic information about the internal state of an organization based on its public actions?

                                                                              That they are resorting to FUD tactics shows that ARM execs know this.

                                                                              The ARM execs cannot be guaranteed to “know” anything of the sort–it’s more likely that there is a standard playbook to be run to talk about any competing technology, RISC-V, OSS, or otherwise. Claiming that “oh ho obviously they feel the heat!” is speculation, and without links and evidence, baseless speculation at that.

                                                                              Do you understand why I might feel frustrated when someone mocks arguments defending a topic but then demands others provide extensive context to the conversation s/he inserted themselves into?

                                                                              It’s not like ARM hasn’t spoken out on this subject before; a high level ARM technology fellow debated RISC-V foundation members a couple of years ago. The debate sounds a lot like an early draft of the arguments presented on the FUD website: RISC-V can’t possibly replicate ARM’s ecosystem and design services.

                                                                              If you go look at the RISC-V foundation membership list, you will find a lot of ARM licensors and competitors including Qualcomm, Samsung, NVidia, IBM, Huawei, and Google. They are using RISC-V as a vehicle to jointly fund high-quality replacements of ARM’s IP, much of which consists of ISA patents and tooling. RISC-V has a very thorough patent review process, making it difficult to sue RISC-V manufacturers based on the ISA. There is a lot I don’t understand about the value ARM adds in terms of chip design and industry collaborations, but NVidia alone is worth 3x what SoftBank paid for ARM just two years ago.

                                                                              If ARM execs aren’t worried about RISC-V taking market share, they should be. ARM creating a FUD website is very strong, direct evidence that this is the case.

                                                                              The people who “follow” big industry players are quite usually just people that want to feel informed, and are quite unlikely to be anybody with any actual actions available given this information. Thus, just because something is interesting to them doesn’t make it necessarily useful or actionable.

                                                                              It feels like you are talking down to me and other interested readers. Are kernel hackers the only people allowed to be interested in kernel development news? I don’t get a lot of actionable information based on the latest scheduler drama, but (as a UX engineer) I am interested in the outcome of these debates.

                                                                              I came to Lobste.rs for a deeper understanding of the underlying technical and political factors at play here.

                                                                              Again, all news is spam on a site with historically more of a bent towards information and non-news submissions.

                                                                              I am open to this argument and I probably wouldn’t have perceived your comments so negatively had I not started from the standard definition of spam. Of course, I also understand that it is hard to justify the time to fit such nuance into a comment on an article : )

                                                                              You clearly have thought a lot about this and discussed it with others, but new and casual readers haven’t. Perhaps you could use less incendiary language? Just say that Lobste.rs focuses on non-news submissions and that you feel industry news is offtopic.

                                                                              Further, it’s not like this hasn’t been covered extensively elsewhere, on Slashdot and HN and Gizmodo and elsewhere. It’s not like it isn’t being shown on many fronts.

                                                                              The technical analysis on HN and other sites is … non-existent. I would love to hear more from experts with informed opinions on chip design and manufacture and that’s what I expected of the comments here.

                                                                              Please understand that while you might have an interest in this specific case, if all lobsters follow this idea, it trashes the site.

                                                                              Well, I’m kinda peeved that the comments section of both stories turned into a slow-burn flamewar : /

                                                                            2. 2

                                                                              ARM will be the first player to fall to RISC-V because they have a monopoly on lower end chips.

                                                                              They actually don’t. A good chunk of the chip market is 8-16 bitters. Billions of dollars worth. In the 32-bit category, there’s a lot of players licensing IP and selling chips. ARM has stuff from low end all the way up to smartphones with piles of proven I.P. plus great brand, ecosystem, and market share. They’re not going anywhere any time soon. MIPS is still selling lots of stuff in low-end devices including 32-bit versions of MCU’s. Cavium used them for Octeon I-III’s for high-performance networking with offload engines.

                                                                              With most of these, you’d get working hardware, all the peripherals you need, toolchain, books/training on it, lots of existing libraries/code, big company to support you, and maybe someone to sue if the I.P. didn’t work. RISC-V doesn’t have all that yet. Most big companies who aren’t backers… which are most big companies in this space… won’t use it without a larger subset of that, or all of it, depending on the company. I’m keeping my hopes up for SiFive’s new I.P., but even it probably has to be licensed for big money. If paying an arm and a leg, many will choose to pay the company known to deliver.

                                                                              From what I see, ARM’s marketing people or whatever are just reacting to a new development that’s in the news a lot. There’s some threat to their revenues given some big companies are showing interest in RISC-V. So, they’re slamming the competition and protecting their own brand. Just business news and ops as usual.

                                                                              1. 3

                                                                                The 16-bit category has been almost totally annihilated by small 32-bit designs. The 8-bit category still stands.

                                                                                (I’m also deeply doubtful of RISC-V while hardware beyond SiFive suffers critical existence failure, but that remains to be seen…)

                                                                                1. 2

                                                                                  ARM will be the first player [large monopoly] to fall [lose lots of market-share] to RISC-V because they have a monopoly on lower end chips.

                                                                                  Argh, I thought “fall” was too strong a choice of words while writing this, I should’ve listened to myself.

                                                                                  My line of thought was that it’s really hard to create a competitive server platform, as evidenced by the niche market SPARC, OpenPOWER, and ARM occupy in the server space. However, there are plenty of low-power, low-complexity ARM cores out there that are up for grabs. I’m hoping that Samsung, Qualcomm, and other RISC-V backers are supporting RISC-V in hopes that they can take their CPU designs in-house and cut ARM out of the equation.

                                                                                  I am largely ignorant of the (actual) lower-end chip market, thanks for the insight.

                                                                                  With most of these, you’d get working hardware, all the peripherals you need, toolchain, books/training on it, lots of existing libraries/code, big company to support you, and maybe someone to sue if the I.P. didn’t work. RISC-V doesn’t have all that yet.

                                                                                  The RISC-V foundation was very intentional in their licensing and wanted to ensure that designers and manufacturers would have plenty of secret sauce they could layer on top of the core spec. This is one of the reasons OpenSPARC failed and why so many different frenemies are collaborating on RISC-V.

                                                                                  From what I see, ARM’s marketing people or whatever are just reacting to a new development that’s in the news a lot.

                                                                                  Their marketing people made the site, but an ARM technology fellow pitched similarly bad arguments in a debate ~2 years ago. Or maybe I’ve just drunk too much Kool Aid.

                                                                              2. 3

                                                                                I upvoted both submissions. I consciously bought a Lobsters frontpage spot for RISC-V advertising and paid for it with a loss of technical content. I acknowledge other negative externalities, but I think they are small. Sorry about that.

                                                                                I think RISC-V advertising is as legitimate as BSD advertising, Rust advertising, etc. here. Yes, technical advertising would have been better. I have a small suspicion that RISC-V (or hardware) is being gatekept in favor of established topics, which you can dismiss by simply stating so in your reply.

                                                                                1. 4

                                                                                  Thanks for keeping up the effort to steer the submissions in a more cerebral direction, away from news. I totally agree with you and appreciate it.

                                                                                  1. 2

                                                                                    I almost never upvote this kind of submission, but seeing as it can be hard to get these off the main page, maybe it would be interesting for Lobsters to have some kind of merging feature that could group stories that are simply different stages of the same news into one story, thus only blocking one spot.

                                                                                    1. 3

                                                                                      Now that is interesting. It could be some sort of chaining or hyperlinks that go in the text field. If not done manually, the system could add it automatically in a way that is clearly attributed to the system. I say the text field so the actions next to stories or comments stay uncluttered.

                                                                                      1. 3

                                                                                        It’s been done before for huge and nasty submissions; usually for hot takes.

                                                                                        1. 2

                                                                                          It would also allow it to act as a timeline of sorts. Done correctly, it could even apply quasi-automatically to tech release posts as well, making it easier to read prior discussions.

                                                                                          The main question right now would be how to handle the comments UI for those grouped stories.

                                                                                      2. 1

                                                                                        “All publicity is good publicity” is actually totally false. The actual saying should be something like “Not all bad publicity is bad for you if it aligns with your identity.” Fighting OSS definitely doesn’t align with the ARM identity/ethos.

                                                                                      3. 5

                                                                                        It’s so easy to just react and click that upvote button without thinking; the score is a reflection of emotional appeal, not of this submission’s relevance. “But it’s on the front page” is also a tired argument that comes up in every discussion like this one. @friendlysock makes excellent points in his reply to you, I totally agree with him and appreciate that he takes the time to try to steer the submissions away from news. There are plenty of news sites, I don’t want another one.

                                                                                      4. 9

                                                                                        or maybe n-gate is a worthless sneer masquerading as a website that doesn’t need to be used as a referent on topical material? Especially given that literally anything posted to HN is going to be skewered there? I’m not the go-to guy on HN cheerleading (at all, in any way) but n-gate is smirky petulant crap and doesn’t exactly contribute to enlightenment on tech topics.

                                                                                        1. 11

                                                                                          worthless sneer masquerading as a website that doesn’t need to be used as a referent on topical material

                                                                                          El Reg could be described the exact same way!

                                                                                          1. 2

                                                                                            that’s… actually a good point.

                                                                                      1. 8

                                                                                        Someone who controls your network will simply drop the DNSKEY/DS records, so DNSSEC would not have provided any protection for “MyEtherWallet”. People who have already visited it were (hypothetically) protected by TLS, and people who hadn’t, would have received bogus records anyway.

                                                                                        So DNSSEC could, in an ideal setting, provide a benefit similar to HPKP, but why wait? HPKP is here now.

                                                                                        Furthermore, “DNSSEC wasn’t easy to implement” is a massive understatement.

                                                                                        1. 7

                                                                                          No, they can’t drop your DS records, because those reside in the parent TLD. They would have to also hack your domain registrar to do that.

                                                                                          1. 1

                                                                                            That’s completely wrong.

                                                                                            If someone controls your network, they don’t need to hack anyone else: They can feed you whatever they want.

                                                                                            1. 1

                                                                                              If someone controls your network, they don’t need to hack anyone else: They can feed you whatever they want.

                                                                                              That’s only true of non-DNSSEC-signed records. DNSSEC is a PKI that allows one to cryptographically delegate authority over a zone. In practice, this means that the root zone signs the public keys of authoritative top-level domains, and TLDs then sign the public keys of owners of regular domain names. These keys can then be used to sign any arbitrary DNS record. So, if you have a validating local resolver, it can use the public key of the root zone to cryptographically validate the chain of trust from ICANN down to the authoritative nameserver for a domain.
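
                                                                                              To make that concrete, here is a minimal sketch of one link in the chain of trust, written with the third-party dnspython library (my illustration, not anything from this thread): fetch a zone’s DNSKEY RRset and use it to validate the RRSIG covering its A records. The zone name and resolver address are placeholders, and a real validator would also check the DNSKEY against the parent’s DS record, all the way up to the root trust anchor.

                                                                                                import dns.dnssec
                                                                                                import dns.message
                                                                                                import dns.name
                                                                                                import dns.query
                                                                                                import dns.rdataclass
                                                                                                import dns.rdatatype

                                                                                                ZONE = dns.name.from_text("example.com.")  # placeholder zone
                                                                                                RESOLVER = "8.8.8.8"                       # placeholder resolver

                                                                                                def fetch(rdtype):
                                                                                                    # want_dnssec=True sets the DO bit so RRSIGs come back too.
                                                                                                    q = dns.message.make_query(ZONE, rdtype, want_dnssec=True)
                                                                                                    return dns.query.udp(q, RESOLVER, timeout=5)

                                                                                                # The zone's public keys, and the signed record we want to check.
                                                                                                keys = fetch(dns.rdatatype.DNSKEY)
                                                                                                dnskey = keys.find_rrset(keys.answer, ZONE, dns.rdataclass.IN, dns.rdatatype.DNSKEY)
                                                                                                ans = fetch(dns.rdatatype.A)
                                                                                                a_rrset = ans.find_rrset(ans.answer, ZONE, dns.rdataclass.IN, dns.rdatatype.A)
                                                                                                rrsig = ans.find_rrset(ans.answer, ZONE, dns.rdataclass.IN, dns.rdatatype.RRSIG, dns.rdatatype.A)

                                                                                                try:
                                                                                                    # Raises ValidationFailure if the signature does not verify.
                                                                                                    dns.dnssec.validate(a_rrset, rrsig, {ZONE: dnskey})
                                                                                                    print("A records verified against the zone's DNSKEY")
                                                                                                except dns.dnssec.ValidationFailure:
                                                                                                    print("bogus: signature did not verify")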

                                                                                              DNSCurve isn’t a bad idea; I think lookup privacy is a good thing, and I would much prefer to trust Google or Cloudflare than my local ISP for unsigned domain names. That being said, it doesn’t fix the massive problem of computers trusting the DNS cache of a 10-year-old router controlled by malware. It’s also really unhelpful when people claim that DNSCurve is some sort of alternative to DNSSEC.

                                                                                              Seriously, go read Dan Kaminsky’s critique of the DNSCurve proposal.

                                                                                              1. 1

                                                                                                DNSSEC is a PKI that allows one to cryptographically delegate authority over a zone

                                                                                                Which an attacker guarantees you’ll never see.

                                                                                                This isn’t a hypothetical attack: Your computer asks your ISP’s nameservers, and it strips out all the DNSSEC records. Unless your computer expects those records, it won’t ever be able to tell you anything is wrong.

                                                                                                if you have a validating local resolver, it can use the public key of the root zone to cryptographically validate the chain of trust from ICANN down to the authoritative nameserver for a domain.

                                                                                                If you don’t, and for some reason use a “validating local resolver” on another machine, you have nothing.

                                                                                                Even if you have a validating-capable resolver, and you never see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766, then you might never learn that you should expect keys for cloudflare.com.

                                                                                                Even if you do have a validating-capable resolver, and you see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766 you still can’t visit google.com safely.

                                                                                                And what about .ae? Or other roots?

                                                                                                DNSSEC supporters are happy enough to ignore the problem of deploying DNSSEC, like it’s somehow someone else’s problem.

                                                                                                That being said, it doesn’t fix the massive problem of computers trusting the DNS cache of a 10-year-old router controlled by malware.

                                                                                                What are you talking about?

                                                                                                It’s also really unhelpful when people claim that DNSCurve is some sort of alternative to DNSSEC.

                                                                                                It’s annoying that DNSSEC “supporters” hand-wave the fact that DNSSEC has no security, and doesn’t have a deployment plan except “do it”.

                                                                                                IPv6 is at 23% deployment. After more than twenty years. DNSSEC is something like 0.5% of the dot-com. After more than twenty years (although admittedly they completely changed what DNSSEC was several times in that time). DNSSEC isn’t a real thing. It’s not even a spec for a real thing. How can I possibly take it seriously?

                                                                                                Seriously, go read Dan Kaminsky’s critique of the DNSCurve proposal.

                                                                                                Have you read it?

                                                                                                It’s bonkers. It admits DNSSEC is a moving target that hasn’t yet been implemented “in all its glory” and puts this future fantasy version of DNSSEC that has been fully deployed and had all operating systems, routers and applications rewritten, against DNSCurve.

                                                                                                Kaminsky is as brain-damaged as those IPv6 nutters, waiting for some magic moment for over twenty years that simply never came – and the only way his “critique” would have any value at all is if it were printed on bog roll.

                                                                                                For what it’s worth: I think DNSCurve solves a problem I don’t have, but it attracts no ire from me.

                                                                                                1. 3

                                                                                                  This isn’t a hypothetical attack: Your computer asks your ISP’s nameservers, and it strips out all the DNSSEC records. Unless your computer expects those records, it won’t ever be able to tell you anything is wrong.

                                                                                                  A resolver can refuse to perform DNSSEC validation or even strip out records, but a local resolver can detect this and even work around it.
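
                                                                                                  As a sketch of that detection step (my illustration with dnspython; the upstream address is a placeholder): query the upstream with the DO bit set for a zone known to be signed, and see whether signatures come back. If they don’t, the upstream is stripping records, and a local resolver can fall back to recursing on its own.

                                                                                                    import dns.message
                                                                                                    import dns.query
                                                                                                    import dns.rdatatype

                                                                                                    def looks_stripped(upstream, domain="ietf.org."):
                                                                                                        # ietf.org is a signed zone; an honest upstream returns RRSIGs when DO is set.
                                                                                                        q = dns.message.make_query(domain, dns.rdatatype.A, want_dnssec=True)
                                                                                                        resp = dns.query.udp(q, upstream, timeout=5)
                                                                                                        return not any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in resp.answer)

                                                                                                    if looks_stripped("192.0.2.53"):  # placeholder upstream resolver
                                                                                                        print("upstream strips DNSSEC records; fall back to full recursion")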

                                                                                                  if you have a validating local resolver, it can use the public key of the root zone to cryptographically validate the chain of trust from ICANN down to the authoritative nameserver for a domain.

                                                                                                  If you don’t, and for some reason use a “validating local resolver” on another machine, you have nothing.

                                                                                                  What do you mean by using a validating local resolver on another machine? It’s local; there is no other machine.

                                                                                                  If you are saying that most clients rely on their router (or whatever) to do DNSSEC validation then yes, that router can perform a MITM attack. It’s still more secure than trusting every single upstream DNS resolver, but we need to move to local validation. The caching layer provided by DNS is a byproduct of the limited computing resources of 1985.

                                                                                                  Even if you have a validating-capable resolver, and you never see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766, then you might never learn that you should expect keys for cloudflare.com.

                                                                                                  It sounds like you are describing a broken resolver.

                                                                                                  Even if you do have a validating-capable resolver, and you see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766 you still can’t visit google.com safely.

                                                                                                  I believe the local resolver would just ask com for a DS record for google.com and receive either a DS record or an NSEC record. If it doesn’t receive one of those two records, then you are correct: you can’t visit google.com safely. It’s no different than an HTTPS downgrade attack.
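
                                                                                                  A sketch of that check with dnspython (my example; google.com is just the name from the thread, and the library’s default resolver is used for brevity): ask for the child’s DS record, and treat an answer as a signed delegation and an authenticated denial as a provably unsigned one.

                                                                                                    import dns.resolver

                                                                                                    def delegation_status(child="google.com."):
                                                                                                        try:
                                                                                                            dns.resolver.resolve(child, "DS")
                                                                                                            return "signed delegation: the child zone must be signed"
                                                                                                        except dns.resolver.NoAnswer:
                                                                                                            # With a validating resolver this denial is itself proven by NSEC/NSEC3.
                                                                                                            return "provably unsigned delegation"

                                                                                                    print(delegation_status())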

                                                                                                  And what about .ae? Or other roots?

                                                                                                  1. We can deploy DNSSEC incrementally.
                                                                                                  2. ~96% of all domains are registered on a TLD that supports DNSSEC. The stats would probably be even better if they were based on traffic instead of total domains.
                                                                                                  3. All registrars are required to support DNSSEC.

                                                                                                  If we can get people to stop claiming that “DNSSEC does nothing for security” and make use of the cool stuff you can do with DNSSEC, then the market will force the last 10% of ccTLDs to adopt it.

                                                                                                  DNSSEC supporters are happy enough to ignore the problem of deploying DNSSEC, like it’s somehow someone else’s problem.

                                                                                                  I personally am working very, very hard on addressing every pain point there is. There are a lot of moving pieces and the standards left some holes until recently. I believe captive portals and VPN domains are thorny issues, but these issues can be addressed in an incremental fashion.

                                                                                                  It doesn’t help when people make erroneous claims about DNSSEC based on an incorrect understanding of DNS, DNSSEC, DNSCurve, and decentralized naming systems.

                                                                                                  That being said, it doesn’t fix the massive problem of computers trusting the DNS cache of a 10-year-old router controlled by malware.

                                                                                                  What are you talking about?

                                                                                                  DNSCurve relies on trusting the DNS resolver above you. For most people that is a 10-year-old router which has never gotten a security update. The best-case scenario is someone switching to Google DNS or Cloudflare, but with proper encryption, no upstream resolver would be capable of performing MITM attacks.

                                                                                                  It’s annoying that DNSSEC “supporters” hand-wave the fact that DNSSEC has no security

                                                                                                  I have patiently responded to every single claim you have made about DNSSEC’s security model. Please refrain from repeating this claim until you have figured out how a MITM attacker can force a local validating resolver to accept forged DS or NSEC records.

                                                                                                  IPv6 is at 23% deployment. After more than twenty years. DNSSEC is something like 0.5% of the dot-com. After more than twenty years (although admittedly they completely changed what DNSSEC was several times in that time). DNSSEC isn’t a real thing. It’s not even a spec for a real thing. How can I possibly take it seriously?

                                                                                                  So was IPv6 until ~6 years ago; now there is exponential growth. DNSSEC is at a similar tipping point: the basic security model was worked out a long time ago, but there were plenty of sharp corners until recently (large key sizes, NSEC3, etc.). If we can stop people from claiming that the security model is broken, then Cloudflare and other big providers will pour money into taking business away from the HTTPS certificate authorities.

                                                                                                  It’s also a necessity for decentralized DNS, which gives us an environment where we can implement everything without having to wait for legacy infrastructure to catch up.

                                                                                                  Seriously, go read Dan Kaminsky’s critique of the DNSCurve proposal.

                                                                                                  It’s bonkers. It admits DNSSEC is a moving target that hasn’t yet been implemented “in all its glory” and puts this future fantasy version of DNSSEC that has been fully deployed and had all operating systems, routers and applications rewritten, against DNSCurve.

                                                                                                  The post is mainly useful for explaining how DNSSEC and DNSCurve relate to one another. While the grand vision is the eventual goal, there are incremental benefits, and huge gains can be had by simply making the application DNSSEC-aware. For example, browsers are already switching to doing DNS resolution themselves, so the work required isn’t much more involved than that of upgrading to TLS.

                                                                                                  For what it’s worth: I think DNSCurve solves a problem I don’t have, but it attracts no ire from me.

                                                                                                  Then why hate on DNSSEC but evangelize DNSCurve? You can happily ignore DNSSEC as an end user or even as a system admin. If you care about security, well, that’s a different story.

                                                                                              2. 0

                                                                                                If they hack your primary nameserver and keep the zone signed, then maybe, as long as you’re running the primary. But as for the original commenter’s claim that “they can drop your DNSKEYs and DS”: no. They can drop the DNSKEY, but the DS resides in the parent zone, and as long as it’s there, resolvers will look for DNSSEC-validated responses, which they won’t get.

                                                                                                1. 1

                                                                                                  If someone controls your network, whenever you request a DNSSEC “protected” domain, you will never know because the attacker can drop whatever records they want. DNSSEC clearly offers nothing.

                                                                                                  If someone controls the network of a website, they don’t need to interfere with the nameservers. They can simply MITM the traffic. Since they can request a TLS certificate from anyone who does HTTP or mail validation, DNSSEC still offers nothing. This is true whether they control the network by broadcasting “invalid” BGP routes, or whether they attack the physical infrastructure.

                                                                                                  Why are you defending this snake oil? “Hack[ing] your primary nameserver” is a pointless strawman that nobody cares about: “your primary nameserver” is likely controlled by Amazon or someone else competent. Your webserver is controlled by you, who lack the experience to identify the (complex) services at risk and properly secure them.

                                                                                                  1. 2

                                                                                                    If someone controls your network, whenever you request a DNSSEC “protected” domain, you will never know because the attacker can drop whatever records they want. DNSSEC clearly offers nothing.

                                                                                                    I don’t know what you mean by this. For starters, are you assuming “your network” includes all the nameservers? Let’s assume so. If DNSSEC is enabled, they can’t alter any of the DNS responses because they will break DNSSEC validation for aware resolvers. Sure, they can drop queries but what does that buy them other than a DDOS? They can’t stand up a fake site.

                                                                                                    Why are you defending this snake oil? “Hack[ing] your primary nameserver” is a pointless strawman that nobody cares about: “your primary nameserver” is likely controlled by Amazon or someone else competent. Your webserver is controlled by you, who lack the experience to identify the (complex) services at risk and properly secure them.

                                                                                                    Clearly you haven’t been paying attention. The entire chain of events that started my posts about this was a BGP hijack that was used to impersonate Route53 nameservers by hijacking Amazon IP space; those were the nameservers that MyEtherWallet was using. From there they stood up fake nameservers which directed victims to a fake MyEtherWallet site. That’s exactly what happened. Why don’t you go tell the people who had their wallets drained not to worry about it because it is all a pointless strawman.

                                                                                            2. 2

                                                                                              So DNSSEC could, in an ideal setting, provide a benefit similar to HPKP, but why wait?

                                                                                              Doing PKI at the DNS level means that we can leverage it for every network and application protocol … including public key pinning. It would also enable us to do cool stuff like encrypt at both the network and application layers.

                                                                                              It’s just that sysadmins didn’t want the headache of key management, so everyone engaged in bikeshedding. It didn’t help that a few (well intentioned!) security researchers threw shade on DNSSEC for not solving Zooko’s triangle or offering encryption for DNS lookups.

                                                                                              So now we are stuck with Mozilla funding Let’s Encrypt to the tune of $2 million/year, and non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI. Which, in practice, means that it’s either non-existent (SSH) or barely functioning (GPG).

                                                                                              HPKP is here now.

                                                                                              Sadly, HPKP has been deprecated by Chrome. But, FWIW, these standards existed long before HPKP.

                                                                                              1. 2

                                                                                                It didn’t help that a few (well intentioned!) security researchers threw shade on DNSSEC for not solving Zooko’s triangle or offering encryption for DNS lookups.

                                                                                                The current system (CAs) is human meaningful, secure, and federated (if not quite decentralized). It’s not perfect, but there are ways to improve the last point, so that we have more control over badly behaving CAs. But even as implemented, that’s better than human meaningful, secure, and a single point of failure (DNSSEC).

                                                                                                So now we are stuck with Mozilla funding Let’s Encrypt to the tune of $2 million/year and non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI.

                                                                                                You can use x509 certificates from Let’s Encrypt to secure any IP connection. What’s the problem?

                                                                                                1. 1

                                                                                                  It’s not perfect, but there are ways to improve the last point, so that we have more control over badly behaving CAs.

                                                                                                  For non-decentralized naming systems, the (abstract) DNSSEC chain of trust looks (roughly) like this:

                                                                                                  Government -> ICANN -> Registrar -> DNS Provider -> Local Validating Resolver -> Browser
                                                                                                  

                                                                                                  HTTPS certificate authorities “validate” control over a domain by checking DNS records (either TXT or via an email). Their chain of trust looks like this:

                                                                                                  Government -> ICANN -> Registrar -> DNS Provider -> ~650 CAs [1] -> Browser
                                                                                                  

                                                                                                  The best way to exercise more control over them is to cut them out of the trust chain entirely. Or switch to a decentralized naming system … which also relies on DNS (and thus DNSSEC) for compatibility reasons:

                                                                                                  Blockchain -> Lightclient w/ DNSSEC auto-signer -> Browser
                                                                                                  

                                                                                                  But even as implemented, that’s better than human meaningful, secure, and a single point of failure (DNSSEC).

                                                                                                  In terms of the security model, DNS is still a single point of failure. If you don’t like managing PKI you can always outsource it to someone … just like you do with HTTPS certificates.

                                                                                                  1. 1

                                                                                                    If I want to compromise you, attacking your DNS resolver doesn’t mean I’ve also attacked PayPal’s CA even if they used their DNS resolver to verify ownership of paypal.com

                                                                                                    1. 1

                                                                                                      My point is that one can trick one of the ~650 CAs into generating an X509 certificate by hacking their upstream DNS client or performing a MitM attack. This would be pretty easy for any large network operator to pull off.

                                                                                                2. 1

                                                                                                  Doing PKI at the DNS level means that we can leverage it for every network and application protocol … including public key pinning. It would also enable us to do cool stuff like encrypt at both the network and application layers.

                                                                                                  What exactly are you referring to: DNSCurve?

                                                                                                  DNSSEC doesn’t offer anything like this.

                                                                                                  It’s just that sysadmins didn’t want the headache of key management, so everyone engaged in bikeshedding.

                                                                                                  Paul Vixie, June 1995: “This sounds simple but it has deep reaching consequences in both the protocol and the implementation – which is why it’s taken more than a year to choose a security model and design a solution. We expect it to be another year before DNSSEC is in wide use on the leading edge, and at least a year after that before its use is commonplace on the Internet”

                                                                                                  Paul Vixie, November 2002: “We are still doing basic research on what kind of data model will work for DNS security. After three or four times of saying NOW we’ve got it THIS TIME for sure there’s finally some humility in the picture … Wonder if THIS’ll work? … It’s impossible to know how many more flag days we’ll have before it’s safe to burn ROMs … It sure isn’t plain old SIG+KEY, and it sure isn’t DS as currently specified. When will it be? We don’t know… There is no installed base. We’re starting from scratch.”

                                                                                                  It didn’t help that a few (well intentioned!) security researchers threw shade on DNSSEC for not solving Zooko’s triangle or offering encryption for DNS lookups.

                                                                                                  Or the fact DNSSEC creates DDOS opportunities, introduced lots of bugs in the already buggy BIND, and still offers no real security.

                                                                                                  No thanks.

                                                                                                  So now we are stuck with Mozilla funding Let’s Encrypt to the tune of $2 million/year

                                                                                                  DNSSEC has received millions of US tax dollars and offers nothing, while Let’s Encrypt actually provides some transport security. Hrm…

                                                                                                  non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI. Which, in practice, means that it’s either non-existent (SSH) or barely functioning (GPG).

                                                                                                  I don’t see how DNSSEC even begins to solve these problems.

                                                                                                  FWIW: Almost everything is HTTPS anyway.

                                                                                                  Sadly, HPKP has been deprecated by Chrome

                                                                                                  It is sad. Firefox and others still support it, and HSTS + Certificate Transparency is probably good enough anyway.

                                                                                                  1. 1

                                                                                                    Doing PKI at the DNS level means that we can leverage it for every network and application protocol … including public key pinning. It would also enable us to do cool stuff like encrypt at both the network and application layers.

                                                                                                    What exactly are you referring to: DNSCurve?

                                                                                                    No, using DNSSEC to bootstrap the public keys for … any cryptographic protocol. Just as DANE can be used to distribute the TLS keys for an HTTPS server, SSHFP records can be used to publish the public keys for a given SSH server. AWS, for example, could just publish SSHFP records when they provision a new instance and you would have end-to-end verification for your SSH connection. No need for Amazon to partner with Let’s Encrypt or force SSH clients to switch to X509 certificates.
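
                                                                                                    A sketch of the SSHFP half of that (my example with dnspython; the hostname is a placeholder): fetch the host-key fingerprints the server’s owner published in DNS. OpenSSH can already consume these itself via its VerifyHostKeyDNS option; this just shows the lookup.

                                                                                                      import dns.resolver

                                                                                                      for rr in dns.resolver.resolve("host.example.com.", "SSHFP"):
                                                                                                          # algorithm: 1=RSA, 2=DSA, 3=ECDSA, 4=Ed25519; fp_type: 1=SHA-1, 2=SHA-256
                                                                                                          print(rr.algorithm, rr.fp_type, rr.fingerprint.hex())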

                                                                                                    Since DNSSEC makes it simple to publish arbitrary public keys for a domain, you can use something like TCPCrypt to encrypt connections at the transport level. Transport level encryption reduces information leakage (SNI headers for HTTPS, what application you are using, network level “domain” fronting, etc) and mitigates flaws in any application layer encryption.

                                                                                                    WRT your Paul Vixie quotes: they are 16 years old. I’ve tried really hard to find showstopper issues, but when you dig into criticisms of DNSSEC they boil down to complaints about DNS, problems that have already been fixed, or gripes about the complexity of managing PKI.

                                                                                                    Or the fact DNSSEC creates DDOS opportunities

                                                                                                    DNS reflection attacks are a thing because there are tens of thousands of public DNS resolvers willing to send DNS record requests to anyone. The worst offenders here are ANY requests, which return all records associated with a domain. The public key and signature used to verify a DNS response do not incur that much overhead 1 2.

                                                                                                    The response from DNS providers hasn’t been to rip out DNSSEC, but to rate limit requests that produce large responses. More fundamental changes include switching to TCP, ingress filtering of spoofed UDP packets, supporting edns_client_subnet, and shutting down public DNS servers.

                                                                                                    introduced lots of bugs in the already buggy BIND

                                                                                                    Please do not blame DNSSEC for BIND being a buggy POS.

                                                                                                    and still offers no real security.

                                                                                                    DNSSEC prevents a wide range of attacks. Are you seriously arguing that removing trust in every DNS server between yourself and the registrar doesn’t materially improve security? What about removing trust in the ~650 CAs capable of producing an HTTPS certificate? Wouldn’t you like to live in a world where TCP, SSH, email, IRC, etc. can take advantage of PKI instead of opportunistic crypto?

                                                                                                    non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI. Which, in practice, means that it’s either non-existent (SSH) or barely functioning (GPG).

                                                                                                    I don’t see how DNSSEC even begins to solve these problems.

                                                                                                    Publish a DNS record with the public key for the encryption protocol you would like to use (see: SSHFP, DANE, PGP).
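
                                                                                                    The DANE case looks the same (a sketch; the hostname is a placeholder): the TLS key material for an HTTPS server lives in a TLSA record at _443._tcp.<host>, and a DANE-aware client compares it against the certificate offered in the handshake.

                                                                                                      import dns.resolver

                                                                                                      for rr in dns.resolver.resolve("_443._tcp.www.example.com.", "TLSA"):
                                                                                                          # usage/selector/mtype say how to match; cert is the data to match against.
                                                                                                          print(rr.usage, rr.selector, rr.mtype, rr.cert.hex())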

                                                                                                    It is sad. Firefox and others still support it, and HSTS + Certificate Transparency is probably good enough anyway.

                                                                                                    As a decentralized domain name nerd, I strongly disagree. We need a standard way for naming systems to declare the public keys for their services. Seriously, we have to sign Tor domains with HTTPS certificates from DigiCert because the browser doesn’t support DANE.

                                                                                                    1. 1

                                                                                                      No, using DNSSEC to bootstrap the public keys for … any cryptographic protocol.

                                                                                                      This is some fantasy version of DNSSEC that doesn’t exist yet and likely never will: browsers don’t do DANE because it’d piss people off.

                                                                                                      Are you seriously arguing that removing trust in every DNS server between yourself and the registrar doesn’t materially improve security?

                                                                                                      Yes.

                                                                                                      Until you know that you’re supposed to be seeing a DS/DNSKEY chain, every recursive resolver (and every stub resolver) gains nothing, and risks tricking people into thinking they have some security because they installed something called DNSSEC.

                                                                                                      As a decentralized domain name nerd, I strongly disagree.

                                                                                                      Well, you’re wrong. Decentralising trust just creates multiple single-points of failure unless you’re willing to wait for consensus, in which case you might as well use HSTS+Certificate Transparency (and your favourite mirror).

                                                                                                      1. 1

                                                                                                        This is some fantasy version of DNSSEC that doesn’t exist yet and likely never will: browsers don’t do DANE because it’d piss people off.

                                                                                                        It would only piss off people who think DNSSEC is a bad thing. Chrome actually implemented it but it was removed due to lack of critical mass. I’m thinking of pitching Cloudflare on pushing for DANE.

                                                                                                        Until you know that you’re supposed to be seeing a DS/DNSKEY chain, every recursive resolver (and every stub resolver) gains nothing, and risks tricking people into thinking they have some security because they installed something called DNSSEC.

                                                                                                        If the parent zone is signed and has a DS key for the child zone, then your local resolver would know that the child zone is supposed to be signed.

                                                                                                        As a decentralized domain name nerd, I strongly disagree.

                                                                                                        Well, you’re wrong.

                                                                                                        No, I’m not. This was a major issue with Namecoin: we had to MITM every HTTPS connection to check the certificate against the blockchain records then replace it with a local certificate. There was no uniform way of making this work: the hack required tweaking for every OS and application and prevented users from selecting their own SOCKS5 proxy. The entire team agreed that DANE was the only way forward and we even got DigiCert to ensure that they used DANE when minting their .onion certs.

                                                                                                        Decentralising trust just creates multiple single-points of failure

                                                                                                        Um, what?

                                                                                                        unless you’re willing to wait for consensus

                                                                                                        Consensus from the Blockchain?

                                                                                              1. 4

                                                                                                Most of the large resolver services such as Google, Quad9, OpenDNS and Cloudflare are all DNSSEC enabled.

                                                                                                OpenDNS does not support DNSSEC at this time.

                                                                                                1. 1

                                                                                                  Whoah. I could have sworn they did.

                                                                                                  1. 6

                                                                                                    OpenDNS does support DNSCurve, a protocol that is faster, simpler, offers real incremental benefits (unlike DNSSEC, which is all-or-nothing), and is easier to deploy than DNSSEC.

                                                                                                    DNSCurve however is unlikely to be implemented by ICANN for political reasons.

                                                                                                    1. 12

                                                                                                      I’m getting really sick of people talking about DNSCurve as if it was an alternative to DNSSEC, the two do completely different things. DNSCurve secures the traffic between the authoritative nameserver and a DNSCurve enabled resolver (the only one I know of being OpenDNS). DNSSEC authenticates the validity of the DNS responses themselves.

                                                                                                      1. 2

                                                                                                        You’re right. DNSCurve protects all of your DNS traffic from tampering between you and the resolver and the resolver can implement their own security infrastructure for detecting and protecting from tampering and cache poisoning, while DNSSEC validates the DNS traffic of far less than 1% of all domains on the internet AND requires each client to not use a caching resolver if they want to be able to trust the results.

                                                                                                        So DNSCurve has a real world impact on protecting users, and DNSSEC is still vaporware.

                                                                                                        1. 0

                                                                                                          You’re right. DNSCurve protects all of your DNS traffic from tampering between you and the resolver and the resolver can implement their own security infrastructure for detecting and protecting from tampering and cache poisoning.

                                                                                                          The only way to reliably detect against tampering with DNS records is for the owner of the domain to sign them cryptographically.

                                                                                                          while DNSSEC validates the DNS traffic of far less than 1% of all domains on the internet

                                                                                                          IPv6 had pretty low adoption too, until things started to get bad enough.

                                                                                                          AND requires each client to not use a caching resolver if they want to be able to trust the results.

                                                                                                          They don’t have to use a caching resolver; that’s the whole point of the owner of the domain signing the DNS records: you can verify it for yourself! A client can also query the root name servers directly; it would increase load on the authoritative nameservers and has a different privacy profile … but there is nothing wrong with that.
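
                                                                                                          A sketch of that direct query with dnspython (my illustration; 192.0.2.1 stands in for one of the zone’s authoritative nameservers): skip the caching resolver entirely and ask the authoritative server, with DO set so the signed records come back for local validation.

                                                                                                            import dns.message
                                                                                                            import dns.query
                                                                                                            import dns.rdatatype

                                                                                                            q = dns.message.make_query("example.com.", dns.rdatatype.A, want_dnssec=True)
                                                                                                            resp = dns.query.udp(q, "192.0.2.1", timeout=5)  # placeholder authoritative NS
                                                                                                            for rrset in resp.answer:
                                                                                                                print(rrset)  # A records plus their RRSIGs, ready for local validation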

                                                                                                          1. 2
                                                                                                            • DNSCrypt for privacy
                                                                                                            • TLS for validation (and of course the other security benefits that come with)

                                                                                                            That’s the stack everyone should be using. It works and is reliable; duplicate certificates forged by shady CAs are a thing of the past with CAA, certificate transparency, and the ability to do stapling. Pushing down the validation to DNS is the wrong approach.

                                                                                                            1. 0
                                                                                                              • DNSCrypt for privacy

                                                                                                              DNSCrypt is dead. TLS adopted DJB’s curves and some IETF working groups wrote a few standards which can pass through firewalls.

                                                                                                              • TLS for validation (and of course the other security benefits that come with)

                                                                                                              TLS does not validate that a record came from a domain’s nameserver; it only validates that the response came from a third-party resolver.

                                                                                                              That’s the stack everyone should be using. It works and is reliable; duplicate certificates forged by shady CAs are a thing of the past with CAA, certificate transparency, and the ability to do stapling.

                                                                                                              We still have to sign .onion domains using DigiCert’s certificates. Certificate transparency doesn’t protect domains which aren’t using “high assurance” certificates. Nor does this protect any other protocol outside of TLS.

                                                                                                              Pushing down the validation to DNS is the wrong approach.

                                                                                                              Why are you so against cryptographic verification of DNS records?

                                                                                                      2. 2

                                                                                                        DJB threw a bunch of shade on DNSSEC when he announced DNSCurve, but he was (at best) misguided 1.

                                                                                                        DNSCurve however is unlikely to be implemented by ICANN for political reasons.

                                                                                                        DNSCurve doesn’t have anything for ICANN to implement as there is no signing of DNS records. It will validate that a response came from a specific DNS cache, but not that the records were produced by the owner of the domain.

                                                                                                        He’s right that we need privacy for DNS lookups, and the adults in the room created DNS over TLS.

                                                                                                      3. 0

                                                                                                        OpenDNS doesn’t support DNSSEC, and prevents you from doing the validation yourself, should you want to, by stripping required records before forwarding a response to you. 1

                                                                                                        Their business model used to rely on NXDOMAIN hijacking, which DNSSEC prevents. They stopped doing that a while ago, but I just checked and they are still stripping out DNSSEC records 🤯!

                                                                                                        I really wish I hadn’t gotten sick, I was going to help work on a standard for DNS filtering. At any rate, these are bad actors in the DNS ecosystem.

                                                                                                    1. 2

                                                                                                      DNSSEC as a standard is just really badly done. The closest comparison is OpenPGP (as it relates to email): technically both standards want to increase security, but due to design issues, both spectacularly failed to improve the status quo.

                                                                                                      It’s good that DNSSEC hasn’t been deployed widely enough for it to “fail closed”.

                                                                                                      1. 1

                                                                                                        What would you like to change?

                                                                                                      1. 5

                                                                                                        DNSSEC would be cool if it allowed multiple CA hierarchies. I really like the idea of a global key/value store that you bootstrap other secure protocols on top of. DNS could be the basis for that in theory.

                                                                                                        But DNSSEC as standardized bubbles up to a single government-run CA per TLD, and that’s much less cool.

                                                                                                        1. 3

                                                                                                          Not liking government control of the domain name system is not a valid reason to dislike cryptographic verification of DNS records … especially since decentralized naming systems also need a protocol for signing their DNS records :D