1. 7

    lobste.rs is the right place for fantastic writeups like this. http://cve.mitre.org/ is the right place for CVEs.

    1. 33

      I don’t really think that you should be allowed to ask the users to sign a new EULA for security patches. You fucked up. People are being damaged by your fuck up, and you should not use that as leverage to make the users do what you want before they can stop your fuck up from damaging them further.

      Patches only count if they come with the same EULA as the original hardware/software/product.

      1. 9

        Sure - you’re welcome to refuse the EULA and take your processor back to the retailer, claiming it is faulty. When they refuse, file a claim in court.

        Freedom!

        1. 6

          This suggestion reminds me of the historical floating point division bug. See https://en.m.wikipedia.org/wiki/Pentium_FDIV_bug

          There was a debate about the mishandling by Intel. There was also debate over the “real-world impact”; estimates were all over the map.

          Here, it seems that the impact is SO big that almost any user of the chip can demonstrate significant performance loss. This might become even bigger than the FDIV bug.

          1. 4

            They are being sued by over 30 groups (find “Litigation related to Security Vulnerabilities”). It already is.

            As of February 15, 2018, 30 customer class action lawsuits and two securities class action lawsuits have been filed. The customer class action plaintiffs, who purport to represent various classes of end users of our products, generally claim to have been harmed by Intel’s actions and/or omissions in connection with the security vulnerabilities and assert a variety of common law and statutory claims seeking monetary damages and equitable relief. The securities class action plaintiffs, who purport to represent classes of acquirers of Intel stock between July 27, 2017 and January 4, 2018, generally allege that Intel and certain officers violated securities laws by making statements about Intel’s products and internal controls that were revealed to be false or misleading by the disclosure of the security vulnerabilities […]

            As for replacing defective processors, I’d be shocked. They can handwave enough away with their microcode updates because the source is not publicly auditable.

            1. 1

              The defense could try to get the people who are discovering these vulnerabilities in on the process to review the fixes. They’d probably have to do it under some kind of NDA which itself might be negotiable given a court is involved. Otherwise, someone who is not actively doing CPU breaks but did before can look at it. If it’s crap, they can say so citing independent evidence of why. If it’s not, they can say that, too. Best case is they even have an exploit for it to go with their claim.

        2. 4

          I don’t really think that you should be allowed to ask the users to sign a new EULA for security patches.

          A variation of this argument goes that security issues should be backported or patched without also including new features. It is not a new or resolved issue.

          Patches only count if they come with the same EULA as the original hardware/software/product.

          What is different here is that this microcode update also requires operating system patches and possibly firmware updates. Further, not everyone considers the performance trade-off worth it: there is a class of users for whom this is not a security issue. Aggravating matters, there are OEMs that must be involved in order to patch, or to explicitly decline to patch, this issue. Intel had to coordinate all of this under embargo.

          1. 2

            This reminds me of HP issuing a “security” update for printers that actually caused the printer to reject any third-party ink. Disgusting.

            1. 2

              I had not considered the case where manufacturers and end-users have different and divergent security needs.

              1. 2

                It’s worth thinking about more broadly, since it’s the second-largest driver of insecurity, demand being the first.

                The easiest example is mobile phones. The revenue stream almost entirely comes from sales of new phones. So, they want to put their value proposition and efforts into the newest phones. They also want to keep costs as low as they can legally get away with. Securing older phones, even patching them, is an extra expense or just activity that doesn’t drive new phone sales. It might even slow them. So, they stop doing security updates on phones fairly quickly as extra incentive for people to buy new phones which helps CEO’s hit their goalposts in sales.

                The earliest form I know of was software companies intentionally making broken software when they could spend a little more to make it better. Although I thought CTO’s were being suckers, Roger Schell (co-founder of INFOSEC) found out otherwise when meeting a diverse array of them under the Black Forest Group. When he evangelized high-assurance systems, the CTO’s told him they believed they’d never be able to buy them from the private sector even though they were interested in them. They elaborated that they believed computer manufacturers and software suppliers were intentionally keeping quality low to force them to buy support and future product releases. Put/leave bugs in on purpose now, get paid again later to take them out, and force new features in for lock-in.

                They hit the nail on the head, the biggest examples being IBM, Microsoft, and Oracle. Companies are keeping defects in products in every unregulated sub-field of IT to this day. It should be the default assumption, with the default mitigation being open APIs and data formats so one can switch vendors after encountering a malicious one.

                EDIT: Come to think of it, the hosting industry does the same stuff. The sites, VPS’s, and dedi’s cost money to operate in a highly-competitive space. Assuming they aren’t loss-leaders, I bet profitability on the $5-10 VM’s might get down to nickels or quarters rather than dollars. There have been products on the market touting strong security, like LynxSecure with Linux VM’s. The last time I saw the price of separation kernels w/ networking and filesystems, it was maybe $50,000. A supplier might take that per organization per year just to get more business. They all heavily promote the stuff. Yet, almost all hosts use KVM or Xen. Aside from features, I bet the fact that they’re free, with commoditized support and training, factors into that a lot. Every dollar in initial profit you make on your VM’s or servers can further feed into the business’s growth or workers’ pay. Most hosts won’t pay even a few grand for a VMM with open solutions available, much less $50,000. They’ll also trade features against security, like management advantages and the ecosystem of popular solutions. I’m not saying any of these are bad choices given how the demand side works: just that the business model incentivizes against security-focused solutions that currently exist.

          2. 1

            I think you have to be presented with the EULA before purchase for it to be valid anyway.

          1. 3

            Nice hack! One has to wonder, though, how it would work with Postgres.

            We had a similar case, albeit years ago on an old version of MySQL, where a slightly more complex JOIN with aggregation, over less data than in the post, would not be “web-scale”.

            I tested it on the side with Postgres, for which it was a piece of piss, but because we were stuck with MySQL in production, we had to implement the denormalization in the app :(

            1. 2

              It’s bad there too, last I tried. Postgres has a hard time lifting the JOIN above the DISTINCT. I had to use subqueries with PG10 when I tried a similar “select distinct and join on the distinct rows” at work. Took the join down from a few hundred thousand tuples to less than 100. :-(
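
              The rewrite being described — pushing the DISTINCT into a subquery so the join only ever sees deduplicated rows — can be sketched like this. The schema and data are made up for illustration, and sqlite3 stands in for Postgres here just to show the two query shapes, not PG10’s planner behavior:

              ```shell
              sqlite3 :memory: <<'SQL'
              CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);
              CREATE TABLE orders(user_id INTEGER);
              INSERT INTO users VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
              INSERT INTO orders VALUES (1), (1), (1), (2);

              -- naive form: join first, deduplicate the (large) join result afterwards
              SELECT DISTINCT u.name FROM orders o JOIN users u ON u.id = o.user_id;

              -- rewritten form: deduplicate in a subquery, then join the small result
              SELECT u.name
              FROM (SELECT DISTINCT user_id FROM orders) d
              JOIN users u ON u.id = d.user_id;
              SQL
              ```

              Both forms return the same rows; the difference is whether the join or the deduplication sees the few-hundred-thousand-tuple input.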

            1. 13

              Oh boy. Here, have a hopefully helpful anecdote…

              I spent a lot of time, sweat, tears, and pain deciding to “properly” learn Nix to set up armokweb, aka lobste.rs plays Dwarf Fortress. I’ve yet to do a full writeup/postmortem (although I intend to), but I finished the first iteration of that project with a mostly-positive view of NixOS from no experience whatsoever.

              First, let me get it out of the way: the documentation often sucks, and to create custom packages at all you have to read other packages that do something similar to the one you want to write. There’s not really a “Nix package cookbook” like there should be. Nix Pills gets you part of the way there, but my perception is that the whole nixpkgs contribution process operates on screwing up and learning from maintainers who have more experience than you.

              nixpkgs is beholden to how well the community maintains it, but this is similar to Arch’s situation with the AUR. If there were fewer maintainers, the AUR would not be as useful. Some Nix packages (including the dwarf-fortress one, which I ended up contributing to) had breaking bugs. I’m not sure I’m qualified to argue for how “mature” nixpkgs already is, but it’s definitely getting better.

              On a more positive note, I like Nix’s immutable packages and declarative approach to configuration. I tested armokweb in a Nix VM, and it was near-trivial to refactor it so it ran in a container that could be included in a Nix config with two lines of code. Nix’s strong point, IMO, is that it encourages you to produce artifacts (Nix code) that can be used to reproduce a system configuration on another machine, or in a container, or in some other context within the system. Coming from Debian, CentOS, and Arch, it’s a godsend that this is included in the OS and you don’t have to separate the steps of configuring a system and reproducing that configuration for others as much as you would with something like Ansible. It’s also great that you can mix rolling and non-rolling release models all you want. Your entire system doesn’t have to be “unstable” - just the packages that need to be (and maybe their dependencies) will be, if you decide to install an “unstable” package. There are no dependency diamonds that cause apt to complain that the wrong version of libc is installed if you accidentally add the wrong mirror.
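
              The declarative container mentioned above really is about that small. For one-off experiments there is also the imperative `nixos-container` tool; a minimal sketch, where the container name and the options inside `--config` are illustrative:

              ```shell
              # create a container from an inline NixOS config fragment
              sudo nixos-container create armokweb --config '
                services.openssh.enable = true;
                networking.firewall.allowedTCPPorts = [ 22 ];
              '

              # start it and check that it came up
              sudo nixos-container start armokweb
              sudo nixos-container status armokweb
              ```

              The same config fragment can later be moved into `containers.<name>` in configuration.nix to make the container fully declarative.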

              Nix the language is sort of like a weird Python/OCaml hybrid, but the lazy evaluation approach works really well for package management. Derivations stringify to their Nix store path, which is the content address of that package (and, when they’re depended on, they get loaded either from a local build, verified by cryptographic content hash, or from a cache). It’s why the NixOS cache can exist as effectively just that: a cache that makes installing packages faster because they’re on a nearby server instead of needing to be rebuilt locally.

              Overall, I’m a fan of NixOS. The entirety of system configuration is filling in the blanks in a playbook that you can share with others. Packages have strong guarantees about their integrity. Nix the language is a little weird but workable once you’ve seen enough of it. Containers just use lxc and lots of hardlinks to function. I’d like to see deeper integration with ZFS for managing the Nix store. Currently, garbage collection is done at a user-facing FS level, when Sun Microsystems solved this problem years ago at the block level.

              I think there’s plenty to like about what NixOS provides, with the caveat that they can do all this better down the road. I feel like it would be really informative for them to refactor the Nix store so it’s able to take advantage of the features of ZFS. OpenIndiana already uses ZFS to manage system “generations” in a similar way to how NixOS uses symlinks and hardlinks. They would be able to make their OS better if they learned more from IllumOS, but I have no doubt that they could tackle that if they wanted to.

              1. 2

                I’ve heard lots of good things about ZFS, but it smells a bit too monolithic in my (inexperienced) opinion. I can certainly imagine the benefits of setting it up on a server, for example; but I don’t like the sound of a general purpose, publicly available, userspace application like Nix relying on its functionality (i.e. forcing users to adopt ZFS just to run that particular program).

                Maybe NixOS, as a full-blown distro, could encourage use of ZFS in its installer; I think there’s still debate raging about the legalities of bundling things like that. Also, I’m perhaps a little out of touch with the NixOS installation process: I installed 16.03 in 2014 and have been upgrading rather than re-installing since then. Also my experience was a little unorthodox: I used the CD ISO image, but since I don’t have an optical drive I ran the installer from within qemu (potentially dangerous, since it was installing to the same /dev/sda drive as the host, and very slow since my CPU doesn’t have virtualisation extensions :P )

                1. 2

                  it smells a bit too monolithic in my (inexperienced) opinion

                  It definitely is, but I also wouldn’t have as many reservations about putting a NixOS root on ZFS if it could be implemented that way. At present, it’s a file level GC on top of a block level GC. It doesn’t even have to be a hard dependency, especially if it’s implemented with ZFS channel programs, which are a new feature in ZoL that lets root run Lua scripts that implement custom ZFS behavior in the kernel. Read-only mounts of snapshots are exactly how the Nix store currently behaves, and Nix might as well take advantage of the filesystem’s features if they are available.
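
                  The block-level generation management being suggested can be sketched with stock ZFS commands; the dataset and snapshot names here are made up, and this assumes the Nix store lives on its own dataset:

                  ```shell
                  # snapshot the store dataset when a new system generation is built
                  zfs snapshot rpool/nix@gen-42

                  # older generations stay reachable as read-only trees,
                  # with no symlink/hardlink farm to maintain
                  ls /nix/.zfs/snapshot/gen-42/store

                  # "garbage collecting" everything newer than a kept generation
                  # becomes a single block-level operation
                  zfs rollback -r rpool/nix@gen-42
                  ```

                  This is roughly the snapshot-per-generation model OpenIndiana uses for boot environments, applied to the store.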

                1. 1

                  It cost $25 to host for a week on AWS. Not the cheapest thing. :(

                1. 7

                  One of the things I liked about Orange Book and Common Criteria ratings was that you could represent this with the higher ratings. Each one required fewer defects, more mitigations, better recovery, and so on. If not bulletproof, you were at least assured you would be cleaning up fewer messes that should be smaller most of the time.

                  Then again, I argued for this exact dichotomy in the next variant of security certification: the whole system was “Insecure and Unevaluated” until each component was analyzed and rated. The overall system rating is that of the weakest component in the TCB. The attackers go for the low-hanging fruit. They throw large numbers of people over time at stuff that’s harder to get. They consistently get exploits on the popular tech. A kit with those gets them read or write access to the system. Preventing that is the main point of system-level security. So, calling a system insecure if it can’t block that is reasonable.

                  So, let’s have a quick look at your claims in terms of that:

                  “privilege separation (such as sandboxing), which either reduces the assets available to an attacker or forces them to chain additional vulnerabilities together to achieve their goals”

                  Bugs are put into mainstream software so fast that “chain additional vulnerabilities” is about meaningless except in the few cases where that’s actually hard. In one example, I read about a high school hacker chaining 5-6 bugs together in Chrome to create an exploit. It didn’t take long, either. CompSci people regularly throw their tools at code, always finding more bugs. The more mainstream version of that is fuzzers. In high-assurance security, you had to take measures to ensure each root cause you cared about was provably impossible. Quick examples are full validation of input, safety checks on anything that can overflow, checking for temporal errors, and source-to-object validation if worried about the compiler screwing stuff up.

                  “asset depreciation (such as hashing passwords), which reduces the value of the assets an attacker can access”

                  This is an example of effective security. Specifically, it’s data security that makes the data itself less valuable when stored on an insecure system. The security dependency is whatever is putting the data in that form. It can be subverted to either disable that protection or make it look like the protection is there when it actually isn’t. Alternatively, the attacker can hope to get something out of what they stole in smaller numbers. In practice, the sophistication and risk involved are high enough that they do the subversion rarely to never. They almost always go for the lower-value data itself. And this working assumes the defender is doing it correctly rather than rolling their own protection, using an obsolete version, or some other screwup on the crypto side.

                  “exploit mitigations (such as Control Flow Integrity or Content Security Policies), which make exploiting vulnerabilities harder, or impossible”

                  Which themselves get bypassed a lot. The actual result is that your system is insecure against a smaller number of attackers, or in a different time window. This can have value. It’s still insecure, where you have to assume that data is going bye-bye.

                  “detection and incident response, which allow identifying successful attackers, limiting the window of compromise”

                  The system is still insecure in this case. Detection and response are a separate topic in security with their own methods, cat and mouse game, people issues, and so on. Definitely important. It doesn’t change the fact that their insecure system got compromised, with data possibly working its way outward, or attacks inward, until they notice the breach. Then, they have to do something about it without disrupting operations while the attack is ongoing. In many cases, they’d have been better off with a system that was secure to begin with, or the closest one can get to it.

                  So, none of these counter the System is Insecure mindset if you’re using systems highly likely to have exploitable vulnerabilities (most). Two can mitigate some or all of the damage in some contexts. They should definitely be done. They should also be prominently noted in any description of overall security posture to highlight the benefits. You’re still building on insecure systems instead of highly-secure alternatives. Presumably, you get benefits out of that justifying it. It doesn’t change what’s happening, though.

                  Side note: studying this stuff so long has made me want to do away with the word “secure” entirely. The impression is total protection. The reality is always highly context-dependent. Taxonomies of threats, with a solution’s responses and their strength, make more sense. I’ve even seen it made easy enough for lay people to follow.

                  1. 2

                    studying this stuff so long has made me want to do away with the word “secure” entirely. The impression is total protection

                    From an actual software development standpoint, security engineering is sort of a Maslow’s (Phrack’s?) Hierarchy of Needs for me. It’s all about building more moats in networked applications these days. My list looks something like this:

                    • First, make sure that your network communication is secure. If your users are sending their credentials over anything insecure, or even TLS without certificate path validation, why bother? This is basic sysadmin stuff.
                    • Next, ensure your application’s API footprint (yes, the whole thing, web server and all) doesn’t misuse user input. This is super basic OWASP Top 10 stuff and probably where most people screw up. Adopt development processes where your developers RTFM.
                    • Do a threat model for at least one part of your application. Seriously. They’re not that hard.

                    (interesting stuff is below this line)

                    • Next, assuming you have Apache indexes disabled, you aren’t SQL injecting yourself, and you’re at least hashing passwords, make sure you’re using the right crypto primitives correctly. This requires some knowledge of crypto.
                    • Look into audit logging to ensure that important user actions are being logged. This requires knowledge of your user or customer.
                    • Bake more paranoia into your application. Depending on what you’re making, this may be anti-cheat or attestation.
                    • … etc

                    The basic stuff is what everyone screws up. If your development team isn’t competent enough to get most of the basics under control (or even understand what the relevant problems are), they won’t be able to even reason about the more interesting stuff.
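
                    On the “at least hashing passwords” rung, the difference between a bare digest and a salted, iterated scheme is visible even at the shell. This uses openssl’s crypt-style SHA-512 (`passwd -6`, OpenSSL 1.1.1+) only as a stand-in for a real password KDF like bcrypt, scrypt, or argon2; the password and salt are illustrative:

                    ```shell
                    # BAD: fast, unsalted digest -- identical passwords always
                    # hash identically, so leaked hashes are cheap to crack
                    printf '%s' 'hunter2' | openssl dgst -sha256

                    # BETTER: salted, iterated crypt-SHA512; a random salt is
                    # generated on each run, so equal passwords hash differently
                    openssl passwd -6 'hunter2'

                    # fixed salt shown only to make the scheme's shape visible
                    openssl passwd -6 -salt examplesalt 'hunter2'
                    ```

                    The point is the asset-depreciation one from upthread: a stolen table of salted, slow hashes is worth far less to an attacker than a table of raw SHA-256 digests.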

                  1. 6

                    OK, but the tag line is asinine. As a regular user of a Linux distribution, it is actually impossible for me to take the time to do a full analysis of every package I install and still get work done.

                    SOME level of trust has to be there or else the whole idea of a Linux distro can’t work.

                    1. 10

                      Well, AUR specifically isn’t part of the actual Arch distro. It’s no safer than the curl | bash invocations on github.

                      1. 4

                        But it makes you wonder if there is no middle ground between the AUR and the community repository. Have a Git{Hub,Lab,ea} repository where the community can open pull requests for new packages and updates, but the pull requests are reviewed by trusted users or developers. Then build the packages on trusted infrastructure.

                        1. 9

                          This is how the OpenBSD ports tree works. Anyone can send a new port or an update to the ports@ mailing list. It then gets tested & committed by developers.

                          In this specific instance, I think what hurt Arch here is tooling that is too good. The community developed a lot of automation tools that boil down third-party package installs to pressing enter a bunch of times; even with all the warnings present, people stopped reviewing the packages. If I recall correctly, the main point of the AUR was to gather community packages and then promote the popular ones (by votes) to trusted repositories. Essentially, the promotion to trusted repos lost meaning, as everyone can install yaourt/pacaur or the $pkgmgr du jour and just go on with their life.

                        2. 2

                          It’s no safer than the curl | bash invocations on github.

                          Highly disagree. Using the AUR without any supporting tools like pacaur, you’re cloning into a git repository to retrieve the PKGBUILD and supporting files, so you have the opportunity to view them. With pacaur, you’re shown the PKGBUILD at first install so you can make sure nothing’s malicious, and then you’re shown diffs when the package version updates. That’s MUCH better than curl | bash already.
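
                          The manual flow being described is short enough to show; the package name is illustrative:

                          ```shell
                          # manual AUR install: fetch, *read*, then build --
                          # nothing executes until makepkg runs
                          git clone https://aur.archlinux.org/somepackage.git
                          cd somepackage

                          # review for anything malicious: source= URLs, checksums,
                          # and the build()/package() functions
                          less PKGBUILD

                          # any .install hooks run as root at install time, so read those too
                          cat -- *.install 2>/dev/null

                          # build and install only after review
                          makepkg -si
                          ```

                          With `curl | bash` there is no equivalent pause between fetching and executing unless you deliberately split the pipeline yourself.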

                          1. 1

                            Also, while you shouldn’t rely on others to spot malicious code, the fact that the malicious modifications were spotted and reverted after about 9 hours shows that the AUR is subject to at least slightly more scrutiny than random scripts from github or elsewhere are.

                            Admittedly, it doesn’t sound like this particular attack was very sophisticated or well hidden.

                      1. 10

                        Community projects + fun like this are one of the most rewarding features of an online community. When this ends, please do post a writeup and I’ll add it to the site trivia.

                        1. 3

                          I’ll make sure to upload the world somewhere too, if it’s intact once the money runs out.

                          1. 5

                            upload the world somewhere too, if it’s intact once the money runs out.

                            Out of context, that quote feels eerily descriptive of the current state of affairs in meatspace.

                            1. 1

                              We’re currently at a pretty reasonable burn rate, so my AWS budget should last a few days.

                              I’m doing 30 minute interval snapshots in case things get borked beyond repair, so someone let me know if that happens.

                          1. 1

                            Doesn’t seem to work for me; it feels like I have a CTRL key pressed down, because hitting “l” in the Lua console clears the screen.

                            In the DF window, only my arrow keys work, and “i” for some reason - I appear to be able to designate zones, but not view units.

                            1. 2

                              There are a ton of keyboard issues that xpra is probably having a hard time handling. It might be holding a key that someone else pressed, so you might have to send both a keydown and keyup event to cancel it.
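
                              A stuck modifier can sometimes be cleared by sending an explicit press/release pair so the server-side key state ends up released. This sketch assumes xdotool is available and that the xpra session is on display :100 (both illustrative):

                              ```shell
                              # pair a keydown with a keyup to force the modifier released
                              DISPLAY=:100 xdotool keydown ctrl
                              DISPLAY=:100 xdotool keyup ctrl

                              # or defensively release every common modifier
                              for k in ctrl alt shift super; do
                                DISPLAY=:100 xdotool keyup "$k"
                              done
                              ```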

                              In any case, just restarted it to fix a DPI scaling issue with Therapist that’s been on the backlog but I haven’t been able to figure out yet.

                            1. 2

                              Moving the viewport works, including in the z axis, but various menu keys like k and a don’t seem to work for me. Mouse input worked, so I was able to inspect things without k, but the mouse can’t cover every function. Is this me doing something wrong, or is there an issue with the keyboard input?

                              Also, I was not clear on whether each player has their own viewport rendered, or whether it is simply a standard DF client rendered in the browser. I.e., if I change z levels, does that change it for every user or just for me?

                              1. 3

                                It’s shared between everyone. As for keyboard input… try toggling caps lock. That’s one area that needs some work.

                                1. 1

                                  I believe it’s literally just shared windows.

                                1. 2

                                  Oh, wow that’s pretty neat. I might install an instance somewhere on my server…

                                  Super awesome work!

                                  1. 3

                                    Thanks! In case it’s useful, here’s the full /etc/nixos/configuration.nix for the AWS instance.

                                  1. 13

                                    Creating this unholy abomination. That is, Dwarf Fortress streaming over X11 forwarded through websockets in a browser. Yes, it’s horrifying, and works better than it has any right to…

                                    It’s built with xpra, which I’ve gradually been fixing bugs in. Multiple people can spectate (or even control) the windows at once, which makes it great for continuation games (i.e. where multiple people do rotations on controlling a fortress).

                                    Input and display latency are shockingly small. Dwarf Fortress is especially well suited to xpra’s lossless compression modes, because most of the screen is the same color.

                                    Currently working on embedding it in a NixOS container so people can’t find anything too useful by poking around the filesystem with the Qt file browser embedded in Dwarf Therapist. From a security standpoint, I’m fully expecting that arbitrary code execution is possible in the container and am just trying to contain the damage if it does occur. This is a game people have built computers in, apparently.

                                    Also, I’m trying to figure out a good name for the project. “Armok Web Services” maybe? ;)

                                    [edit] here’s a video of it working: https://streamable.com/uax2f - most of it is just me showing different parts of Dwarf Fortress (not very interesting) so skip around. Point is that it’s working with pretty high fidelity and low latency over the internet.

                                    1. 1

                                      Update: Wow, Nix containers are easy to get working! I’d probably still be nervous if random people from the internet had access to it, but now the limited group that’s testing it can’t read my /etc/passwd with Dwarf Therapist.

                                      The project is officially called Armok Web Services, and has a home now: https://github.com/numinit/armokweb

                                    1. 2

                                      If the concern is leaks, defense contractors like Galois might be able to get DARPA to change that policy. My idea would be a split between stuff that will definitely be FOSS and stuff that might be sensitive. Like Compartmented Mode Workstations or with Qubes, they could even isolate them in VM’s whose border color, labels, and firewall policies reflect the difference. The sensitive one would be used in a new, custom project or derivative that pulls in the open one. Whereas the security policy wouldn’t let sensitive stuff interact with the Internet at all.

                                      Quite a few products are on the market for this, on top of free stuff like Qubes or Muen. One can also use multiple boxes with a KVM switch if worried about breaks. Does anyone with DARPA or Defense experience think this proposal has a good chance of working?

                                      1. 3

                                        I think there’s often a fair amount of misinformation regarding the creation of OSS by government contractors.

                                        Yes, contractors like Galois have to play by the rules the government sets for them, including classification. The concern of “leaks” is there, but it’s subtle: tight coupling of components that probably should be open sourced with other components that probably should be classified.

                                        So there’s a tendency to “overclassify” and lump everything together. However, companies can still make a case for why things should be open sourced - after all, they worked on it, and if the development was separate enough, it won’t be “polluted.”

                                        Galois has a history of trying to push things into the open source domain, and their reputation precedes them enough that they would probably not classify a C to Rust translator.

                                        As for your idea, they’d much prefer to just use two separate computers.

                                        1. 5

                                          As for your idea, they’d much prefer to just use two separate computers.

                                          As much as I love OSS, I would do the same thing. The risk is that a person who works on both classified and non-classified projects could make a mistake and do a git push containing classified info. The solution to problems like this is better structure, not being more careful.

                                          1. 1

                                            The CMW’s I mentioned specifically try to prevent that with labelled data and interfaces. You can’t mix them without a manual approval, reclassification, etc. One could do the same kind of thing with VM’s where a transfer required internal, human review.

                                      1. 12

                                        As someone who uses Arch on all my developer machines: Arch is a horrible developer OS, and I only use it because I know it better than other distros.

                                        It was good 5-10 years ago (or I was just less sensitive back then), but now pacman -Syu is almost guaranteed to break or change something for the worse, so I never update, which means I can never install any new software, because everything is dynamically linked against the newest library versions. And since the Arch way is to be bleeding edge all the time, asking things like “is there an easy way to roll back an update, because it broke a bunch of stuff and brought no improvements” gets you laughed out the door.

                                        I’m actually finding myself using windows more now, because I can easily update individual pieces of software without risking anything else breaking.

                                        @Nix people: does NixOS solve this? I believe it does but I haven’t had a good look at it yet.

                                        1. 14

                                          Yes, Nix solves the “rollback” problem, and it does it for your entire OS not just packages installed (config files and all).

                                          With Nix you can also have different versions of tools installed at the same time, without the usual python3.6/python2.7 binary-name dance most places do: just drop into a new nix-shell, install the one you want, and in that shell that’s what you have. There is so much more. I use FreeBSD now because I just like it more overall, but I really miss Nix.
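
                                          A rough sketch of what that looks like in practice (the package attribute names here are from memory and may differ in current nixpkgs):

                                          ```shell
                                          # Ephemeral shell with Python 2.7, without touching the system profile
                                          nix-shell -p python27

                                          # A second terminal can use Python 3.6 at the same time
                                          nix-shell -p python36

                                          # On NixOS, rolling the whole system back to the previous generation:
                                          sudo nixos-rebuild switch --rollback
                                          ```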

                                          EDIT: Note that FreeBSD solves the rollback problem as well, just differently. In FreeBSD, if you’re using ZFS, just create a boot environment before the upgrade; if the upgrade fails, roll back to the pre-upgrade boot environment.
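
                                          On recent FreeBSD that workflow is roughly the following (bectl ships with FreeBSD 12 and later; older systems use the beadm port, and the environment name is arbitrary):

                                          ```shell
                                          # Snapshot the current root as a boot environment before upgrading
                                          bectl create pre-upgrade

                                          # ... do the upgrade; if the new system is broken, boot the old one:
                                          bectl activate pre-upgrade
                                          shutdown -r now
                                          ```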

                                          1. 9

                                            As a (biased) Arch developer, I rarely have Arch break when updating. Sometimes I have to recompile our own C++ stack due to soname bumps, but for the rest it’s stable for me.

                                            For Arch there is indeed no rollback mechanism, although we do provide an archive repository with old versions of packages. Another option would be Btrfs/ZFS snapshots. I believe the general Arch opinion is that fixing the actual issue at hand is more important than rolling back.

                                            1. 8

                                              I believe the general Arch opinion is that fixing the actual issue at hand is more important than rolling back.

                                              I can see how some people might value that perspective. For me, I like the ability to plan when I will solve a problem. For example, I upgraded to the latest CURRENT in FreeBSD the other day and it broke. But I was about to start my work day, so I just rolled back, and I’ll figure it out when I have time. As with all things, it depends on one’s personality.

                                              1. 2

                                                For me, I like the ability to plan when I will solve a problem.

                                                But on stable distros you don’t even have that choice. Ubuntu 16.04 (and 18.04 as well, I believe) ships an ncurses version that only supports up to 3 mouse buttons, for ABI stability or something. So now if I want to use the scroll wheel, I have to rebuild everything myself and maintain a makeshift local software repository.

                                                And that’s not an isolated case: from a quick glance at my $dayjob workstation, I’ve had to build the following locally: cquery, gdb, ncurses, kakoune, ninja, git, clang and various other utilities, just because the packaged versions are ancient and missing useful features.

                                                On the other hand, I’ve never had to do any of this on my Arch box, because the packaged software is much closer to upstream. And if an update breaks things, I can also roll back from that update until I have time to fix things.

                                                1. 2

                                                  I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.

                                                  And if an update breaks things, I can also roll back from that update until I have time to fix things.

                                                  Several people here said that Arch doesn’t really support rollback which is what I was responding to. If it supports rollback, great. That means you can choose when to solve a problem.

                                                  1. 1

                                                    I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.

                                                    Ok, but that’s a problem inherent to stable distros, and it gets worse the more stable they are.

                                                    Several people here said that Arch doesn’t really support rollback

                                                    It does: pacman keeps local copies of previous versions of each package installed. If things break, you can look at the log and just have pacman install the local package.

                                                    1. 1

                                                      It does: pacman keeps local copies of previous versions of each package installed. If things break, you can look at the log and just have pacman install the local package.

                                                      Your description makes it sound like pacman doesn’t support rollbacks per se, but that you can get that behaviour if you have to and are clever enough. Those seem like very different things to me.

                                                      Also, what you said about stable distros doesn’t match my experience with FreeBSD. FreeBSD is ‘stable’, yet ports/packages tend to be fairly up to date (at least, I rarely run into outdated ones).

                                                      1. 1

                                                        I’m almost certain any kind of “rollback” functionality in pacman is going to be less powerful than what’s in Nix, but it is very simple to rollback packages. An example transcript:

                                                        $ sudo pacman -Syu
                                                        ... some time passes, after a reboot perhaps, and PostgreSQL doesn't start
                                                        ... oops, I didn't notice that PostgreSQL got a major version bump, I don't want to deal with that right now.
                                                        $ ls /var/cache/pacman/pkg | rg postgres
                                                        ... ah, postgresql-x.(y-1) is sitting right there
                                                        $ sudo pacman -U /var/cache/pacman/pkg/postgresql-x.(y-1)-x86_64.pkg.tar.xz
                                                        $ sudo systemctl start postgresql
                                                        ... it's alive!
                                                        

                                                        This is all super standard, and it’s something you learn pretty quickly, and it’s documented in the wiki: https://wiki.archlinux.org/index.php/Downgrading_packages

                                                        My guess is that this is “just downgrading packages”, whereas “rollback” probably implies something more powerful, e.g., “roll back my system to exactly how it was before I ran the last pacman -Syu.” AFAIK, pacman does not support that, and it would be pretty tedious to do manually, though it seems scriptable in limited circumstances. I’ve never wanted/needed to do that, though.

                                                        (Take my claims with a grain of salt. I am a mere pacman user, not an expert.)

                                                        EDIT: Hah. That wiki page describes exactly how to do rollbacks based on date. Doesn’t seem too bad to me at all, but I didn’t know about it: https://wiki.archlinux.org/index.php/Arch_Linux_Archive#How_to_restore_all_packages_to_a_specific_date
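
                                                        For reference, the date-based rollback described on that wiki page boils down to something like this (the date here is purely illustrative, and you’d want to back up your mirrorlist first):

                                                        ```shell
                                                        # Point pacman at an Arch Linux Archive snapshot for a given date
                                                        echo 'Server=https://archive.archlinux.org/repos/2018/06/01/$repo/os/$arch' |
                                                            sudo tee /etc/pacman.d/mirrorlist

                                                        # Force-refresh the databases and allow downgrades
                                                        sudo pacman -Syyuu
                                                        ```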

                                          2. 12

                                            now pacman -Syu is almost guaranteed to break or change something for the worse

                                            I have the opposite experience. I’ve been an Arch user since 2006, and updates were a bit trickier back then; they broke stuff from time to time. Now nothing ever breaks (I run Arch on three different desktop machines and two servers, plus a bunch of VMs).

                                            I like the idea of NixOS, and I have used Nix for specific software, but I have never made the jump because, well, Arch works. Also, with Linux, package management has never been the worst problem; hardware support is, and the Arch folks have become pretty good at it.

                                            1. 3

                                              I have the opposite experience

                                              I wonder if the difference in experience comes down to some behaviour you’ve picked up that others haven’t. For example, I’ve found that friends’ children end up breaking things in ways I never would, just because I know enough about computers never to try.

                                              1. 2

                                                I think it’s a matter of performing -Syu updates often (every few days or even daily) instead of once per month. Rare updates indeed sometimes break things, but when done often, it’s pretty much “update and that’s it.”

                                                I’ve been an Arch user for 6 years, and there were maybe 3 times during those years when something broke badly (I was unable to boot). Once it was my fault; the second and third were related to an nvidia driver and Xorg incompatibility.

                                                1. 3

                                                  Rare updates indeed sometimes break things, but when done often, it’s pretty much “update and that’s it.”

                                                  It’s sometimes also a matter of bad timing. Now, every time before doing a pacman -Syu, I check /r/archlinux and the forums to see if anyone is complaining. If so, I tend to wait a day or two for the devs to push out updates to the broken packages.

                                                2. 1

                                                  That’s entirely possible.

                                              2. 4

                                                I have quite a contrary experience: I have pacman run automated in the background every 60 minutes, and all the breakage I suffer is from human-induced configuration errors (such as a misconfigured boot loader or fstab).

                                                1. 1

                                                  Things like Nix even allow rolling back from almost all user configuration errors.

                                                  1. 3

                                                    Would be nice, yeah, though I never really understood or got into Nix. It’s a bit complicated and daunting to get started with, and I found the documentation to be lacking.

                                                2. 3

                                                  How often were you updating? Arch tends to work best when it’s updated often. I update daily and can’t remember the last time I had something break. If you’re using Windows, and coming back to Arch very occasionally and trying to do a huge update you may run into conflicts, but that’s just because Arch is meant to be kept rolling along.

                                                  I find Arch to be a fantastic developer system. It lets me have access to all the tools I need, and allows me to keep up the latest technology. It also has the bonus of helping me understand what my system is doing, since I have configured everything.

                                                  As for rollbacks, I use ZFS boot environments. I create one prior to every significant change, such as a kernel upgrade; that way, if something did go wrong and it wasn’t convenient to fix right away, I know I can always boot back into the last environment and everything will be working.

                                                  1. 2

                                                    How do you configure ZFS boot environments with Arch? Or do you just mean snapshots?

                                                    1. 3

                                                      I wrote a boot environment manager zedenv. It functions similarly to beadm. You can install it from the AUR as zedenv or zedenv-git.

                                                      It integrates with a bootloader if it has a “plugin” to create boot entries and keep multiple kernels at the same time. Right now there’s a plugin for systemd-boot, and one is in the works for GRUB; it just needs some testing.
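
                                                      If its commands work like beadm’s, basic usage would look roughly like this (subcommand names from memory, and the environment name is arbitrary; check zedenv --help for the real interface):

                                                      ```shell
                                                      zedenv create pre-upgrade    # snapshot the current boot environment
                                                      zedenv list                  # show available environments
                                                      zedenv activate pre-upgrade  # boot into it on the next reboot
                                                      ```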

                                                      1. 2

                                                        Looks really useful. Might contribute a plugin for rEFInd at some point :-)

                                                        1. 1

                                                          Awesome! If you do, let me know if you need any help getting started, or if you have any feedback.

                                                          It can be used as is with any bootloader, it just means you’ll have to write the boot config by hand.

                                                1. 3

                                                  Can’t say I like Arch that much. The wiki is indeed a damn good resource, but using the OS involves too much config wank and “what broke now?” after updating a pile of packages you didn’t want to update but had to, because you needed to install one new package. I’d also prefer not to have to hunt down all the packages I need to get a usable system up and running. And I never liked hunting down AUR PKGBUILDs. Why not just a repo?

                                                  Pacman also never made that much sense to me. Why “sync”? Why do I “sync” to search? And why do I also “sync” to install? apt install, dnf install, and pkg_add in particular make sense; pacman -Sy, -Ss, and -S, not so much.
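
                                                  For what it’s worth, the mapping is that -S selects the “sync” operation (act on the sync repositories), and the other letters are sub-options of it:

                                                  ```shell
                                                  pacman -S  <pkg>   # install <pkg> from the sync repositories
                                                  pacman -Ss <term>  # search the sync databases
                                                  pacman -Sy         # refresh the sync databases
                                                  pacman -Syu        # refresh, then upgrade the whole system
                                                  pacman -Q  <pkg>   # query the local database instead
                                                  pacman -U  <file>  # install a local package file
                                                  ```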

                                                  I use it and I hate it.

                                                  1. 2

                                                    Yeah, I’m experiencing a similar thing. Arch works the best of the distros I’ve tried for what I’m doing (running a VM server with video passthrough) and I use it for daily development aside from that, but I would really like something more declarative like Nix. I don’t feel like I can stand using rolling for all my packages anymore.

                                                  1. 18

                                                    I’ll add to the chorus of folks with good experiences using Archlinux. I’ve been using it for almost 10 years now, and it’s been extremely solid. I use it at home, at work, on my laptop and on my various media PCs. My wife uses it now too. There is nothing in particular that has really made me want to switch to something else. NixOS has piqued my curiosity since I read their paper many years ago, but I’ve never given it a fair shake because Arch has been so good to me, and because I tend to be a “worse is better” kind of a guy, but only when necessary. ;-) I love me some PKGBUILD!

                                                    I’m not sure I’d sing praises for the Archlinux community. I do like their trial by fire approach to things. But my personal interactions with them haven’t been a shining beacon of friendliness, I can tell you that. I never really fit in with the Archlinux community, so I tend to avoid it. But I think that’s OK.

                                                    1. 3

                                                      Arch Linux is (mostly) friendly towards experienced users who also strive to keep things simple. How can a distro communicate that it’s not for beginners, nor for maximalists (excluding large groups of users on purpose), while also communicating that it is friendly?

                                                      1. 6

                                                        I don’t know how to answer your question, but I will clarify my statement. I said that my personal interactions with people in the Archlinux community have not been what I’d call “friendly.” I’m talking about conversing with people, not the official documentation materials, which I’ve found do a great job of being friendly while simultaneously being a great example of the trial-by-fire approach to learning that really works.

                                                        To be fair, I think it is very hard to mix trial-by-fire learning (which I think is one of many great ways to learn) with friendliness in an online community. It’s too easy to slip. But that’s just my belief based on what I think I understand about humans.

                                                        Like I said, I just don’t fit in with that style of communication. But there are a lot of places that I don’t fit into. And I really do think that’s OK.

                                                        EDIT: Also to add a positive note, not all of my interactions with Archlinux community members have been bad. There have been (very) good ones too. I guess it’s just much easier to focus/remember the bad. :-/

                                                        1. 3

                                                          I guess it’s just much easier to focus/remember the bad

                                                          This is a known psychological effect called negativity bias. In theory it protects you from forgetting things that could harm you, and helps you notice potentially dangerous situations.

                                                          1. 2

                                                            Yeah, this happens. Mainly because too many people forget how rolling release works, and the forums are tired of answering the same questions :/

                                                      1. 6

                                                        This is frankly amazing:

                                                        [lots of stuff] could have allowed access to all of Furbo’s customers video feeds, home private photos, voice messages and even toss food to their pets.

                                                        3/20/2018 - Bounty: Pet food basket (I rejected it)

                                                        We’re lucky that some people have a conscience.

                                                        1. 2

                                                          Personally I think these small language are much more exciting than big oil tankers like Rust or Swift.

                                                          I’m not familiar with either of those languages, but any idea what the author means by this? I thought Rust has been picking up quite a bit recently.

                                                          1. 10

                                                            I understood the author to be talking about the “size” of the language, not the degree of adoption.

                                                            I’m not sure that I personally agree that C is a small language, but many do believe that.

                                                            1. 3

                                                              Your involvement with Rust will bias your opinion - a Rust team hat would be appropriate here :)

                                                              1. 11

                                                                He is right, though. C’s execution model may be conceptually simple, but you may need to sweat its implementation details, depending on what you’re doing. This doesn’t make C bad, it just raises the bar.

                                                                1. 9

                                                                  I had that opinion before Rust, and I’m certainly not speaking on behalf of the Rust team, so in my understanding, the hat is very inappropriate.

                                                                  (I’m also not making any claims about Rust’s size, in absolute terms nor relative to C)

                                                                  1. 4

                                                                    Or you can just test his claim with numbers. A full C semantics is huge compared to something like Oberon, whose grammar fits on a page or two. Forth is simpler, too. Whereas Ada and Rust are as complicated as can be.

                                                                    1. 5

                                                                      I agree that there are languages considerably smaller than C. In my view, there is a small and simple core to C that is unfortunately complicated by some gnarly details and feature creep. I’ve expressed a desire for a “better C” that does all we want from C without all the crap, and I sincerely believe we could make such a thing by taking C, stripping stuff and fixing some unfortunate design choices. The result should be the small and simple core I see in C.

                                                                      When comparing the complexity of languages, I prefer to ignore syntax (focusing on that is kinda like bickering about style; yeah, I have my own style too, and I generally prefer simpler syntax). I also prefer to ignore the standard library. What I would focus on is the language semantics as well as the burden they place on implementation. I would also weigh languages against the features they provide; otherwise we’re comparing apples to oranges, where one language simply makes a thing impossible, or you have to “invent” that thing outside the language spec. It may look simpler to only provide a 64-bit floating-point numeric type, but that only increases complexity when people actually need to deal with 64-bit integers and hardware registers.

                                                                      That brings us to Oberon. Yes, the spec is short. I suspect that’s mostly not because it has simple semantics, but because it lacks semantics. What is the range of integer types? Are they bignums, and if so, what happens when you run out of memory trying to perform a multiplication? Perhaps they have a fixed range. If so, what happens when you overflow? What happens if you divide by zero? And what happens when you dereference nil? No focking idea.

                                                                      The “spec” is one for a toy language. That is why it is so short. How long would it grow if it were properly specified? Of course you could decide that everything the spec doesn’t cover is undefined and maybe results in program termination. That would make it impossible to write robust programs that can deal with implementation limitations in varying environments (unless you have perfect static analysis). See my point about apples vs oranges.

                                                                      So the deeper question I have is: how small can you make a language with

                                                                      1. a spec that isn’t a toy spec
                                                                      2. not simply shifting complexity to the user
                                                                      3. enough of the same facilities we have in C so that we can interface with the hardware as well as write robust programs in the face of limited & changing system resources

                                                                      Scheme, Oberon, PostScript, Brainfuck, etc. don’t really give us any data points in that direction.

                                                                      1. 5

                                                                        So the deeper question I have is: how small can you make a language with

                                                                        1. a spec that isn’t a toy spec
                                                                        2. not simply shifting complexity to the user
                                                                        3. enough of the same facilities we have in C so that we can interface with the hardware as well as write robust programs in the face of limited & changing system resources

                                                                        Scheme, Oberon, PostScript, Brainfuck, etc. don’t really give us any data points in that direction.

                                                                        Good question. There are a few languages with official standards (sorted by page count) that are also used in practice (well… maybe not Scheme ;>):

                                                                        1. Scheme R7RS - 88 pages - seems to be the only language here without a useful standard library
                                                                        2. Ruby 1.8 - 341 pages
                                                                        3. Ada 95 - 582 pages
                                                                        4. Fortran 2008 - 621 pages
                                                                        5. C11 - 701 pages
                                                                        6. EcmaScript - 885 pages
                                                                        7. Common Lisp - 1356 pages
                                                                        8. C++17 - 1623 pages

                                                                        I know that page count is poor metric, but it looks like ~600 pages should be enough :)

                                                                        1. 3

                                                                          Here are the page counts for a few other programming language standards:

                                                                          1. PL/I General purpose subset 443 pages
                                                                          2. Modula-2 - 800 pages (base 707, generics 45, objects 48)
                                                                          3. Ada 2012 832 pages
                                                                          4. Eiffel 172 pages
                                                                          5. ISO Pascal 78 pages
                                                                          6. Jovial J73 168 pages
                                                                          1. 2

                                                                            I know that page count is poor metric, but it looks like ~600 pages should be enough :)

                                                                            Given that N1256 is 552 pages, yeah, without a doubt.. :-)

                                                                            The language proper, if we cut it off at “future language directions” (which is followed by the standard library, appendices, index, etc.), is only some 170 pages. It’s not big, but I’m sure it could be made smaller.

                                                                          2. 2

                                                                            I’ve expressed a desire for a “better C” that does all we want from C without all the crap, and I sincerely believe we could make such a thing by taking C, stripping stuff and fixing some unfortunate design choices. The result should be the small and simple core I see in C.

                                                                            That might be worth writing up as a hypothetical design. I was exploring that space as part of bootstrapping for C compilers. My design idea actually started with x86 assembler, trying to design a few high-level operations that map over it and also work on RISC CPUs: expressions, a 64-bit scalar type, a 64-bit array type, variables, stack ops, heap ops, conditionals, goto, and Scheme-like macros. Everything else should be expressible in terms of the basics via the macros or compiler extensions. The common stuff gets a custom, optimized implementation to avoid macro overhead.

                                                                            “What I would focus on is the language semantics as well as the burden they place on implementation.”

                                                                            Interesting that you arrived at that, since some others and I who discuss verification are convinced a language design should evolve alongside a formal spec for that reason. It could be as simple as Abstract State Machines or as complex as Isabelle/HOL. The point is that each feature is described precisely, in terms of what it does and its interactions with other features. If one can’t describe that precisely, how the hell is a complicated program using those same features going to be easy to understand or predict? As an additional example, adding a “simple, local change” can reveal unexpected interactions or state explosion once you run the model somehow. Maybe not so simple or local after all, but that isn’t always evident if you’re just talking about the language in vague English. I was going to prototype the concept with Oberon, too, since it’s so small and easy to understand.

                                                                            “but because it lacks semantics.”

                                                                            I didn’t think about that. You have a good point. Might be worth formalizing some of the details to see what happens. Might get messier as we formalize. Hmm.

                                                                            “So the deeper question I have is: how small can you make a language with”

                                                                            I think we have answers to some of that but they’re in pieces across projects. They haven’t been integrated into the view you’re looking for. You’ve definitely given me something to think about if I attempt a C-like design. :)

                                                                    2. 4

                                                                      He also says that the issues with memory-safety in C are overrated, so take it with a grain of salt.

                                                                      1. 13

                                                                        He is not claiming that memory safety in general is not an issue in C. What he is saying is that in his own projects he was able to limit or completely eliminate dynamic memory allocation:

                                                                        In the 32 kloc of C code I’ve written since last August, there are only 13 calls to malloc overall, all in the sokol_gfx.h header, and 10 of those calls happen in the sokol-gfx initialization function

                                                                        The entire 8-bit emulator code (chip headers, tests and examples, about 12 kloc) doesn’t have a single call to malloc or free.

                                                                        That actually sounds like someone who understands that memory safety is very hard and important.
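                                                                        A minimal C sketch of that style, for illustration only (the names here are hypothetical, not the actual sokol code): instead of calling malloc per object, you carve slots out of a fixed array with static storage duration, so allocation can never fail unpredictably or leak.

                                                                        ```c
                                                                        #include <assert.h>
                                                                        #include <stddef.h>

                                                                        /* Fixed-capacity pool with static storage -- no malloc/free anywhere. */
                                                                        #define MAX_SPRITES 64

                                                                        typedef struct { float x, y; } sprite_t;

                                                                        static sprite_t sprite_pool[MAX_SPRITES]; /* zeroed before main() runs */
                                                                        static size_t sprite_count;

                                                                        /* Hand out the next free slot, or NULL when the pool is exhausted. */
                                                                        static sprite_t* sprite_alloc(void) {
                                                                            if (sprite_count >= MAX_SPRITES) return NULL;
                                                                            return &sprite_pool[sprite_count++];
                                                                        }

                                                                        int main(void) {
                                                                            sprite_t* s = sprite_alloc();
                                                                            assert(s != NULL);
                                                                            s->x = 1.0f;
                                                                            s->y = 2.0f;
                                                                            while (sprite_alloc()) { }          /* drain the pool */
                                                                            assert(sprite_count == MAX_SPRITES);
                                                                            assert(sprite_alloc() == NULL);     /* fails predictably, no UB */
                                                                            return 0;
                                                                        }
                                                                        ```

                                                                        The memory-safety win is that the failure mode is a visible NULL check at a known capacity, rather than a heap corruption or use-after-free somewhere else.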

                                                                        1. 3

                                                                          Not at all the vibe I got from it.

                                                                        2. 4

                                                                          I’m not familiar with either of those languages, but any idea what the author means by this?

                                                                          I’m also way more interested in Zig than I am in Rust.

                                                                          What I think he’s saying is that the two “big” languages are overhyped and have gained disproportionate attention for what they offer, compared to some of the smaller projects that don’t hit HN/Lobsters headlines regularly.

                                                                          Or maybe it’s a statement w.r.t. size and scope. I don’t know Swift well enough to say whether it counts as big. But Rust looks like “Rubyists reinvented C++ and claim it to be a replacement for C.” I feel that people who prefer C are into things that are small and simple. C++ is a behemoth. When your ideal replacement for C would also be small and simple, perhaps even more so than C itself, Rust starts to seem more and more like an oil tanker as it goes the C++ way.

                                                                          1. 3

                                                                            I agree with your point on attention. I just wanted to say maybe we should get a bit more credit here:

                                                                            “compared to some of the smaller projects that don’t hit HN/Lobsters headlines regularly.”

                                                                            Maybe HN, but Lobsters covers plenty of oddball languages. Sometimes with good discussions, too. We’ve had the authors of a few of them here. I’ve kept digging them up to keep fresh ideas on the site.

                                                                            So, we’re doing better here than most forums on that. :)

                                                                            1. 2

                                                                              Sure! Lobsters is where I first learned about Zig. :-)

                                                                        1. 1

                                                                          I think a BGAN is your best bet, even if it’s expensive. Alternatively, something from Iridium, like the RockBlock. Keep in mind that you won’t be pushing a whole lot of bandwidth out of it.

                                                                          1. 10

                                                                            Cool, we get some Python slicing tricks now. All you have to do is rotate the colon 90 degrees :-)

                                                                            Ruby:

                                                                            "abcd"[1..]
                                                                            => "bcd" 
                                                                            

                                                                            Python:

                                                                            "abcd"[1:]
                                                                            => 'bcd'