1. 2

    I think the part I’m missing is why use TCP if you don’t want its features?

    1. 3

      I’m imagining this in the context of websockets, where you are stuck with TCP only. But in that context, you can’t control the browser’s TCP stack making retransmit requests, and usually streaming is from a server to a browser… So I’m not quite sure how this can be applied. It is clever, though.

      1. 2

        Because that’s how the Internet is. TCP works pretty much every time, but UDP doesn’t.

        Relevant presentation: http://dedis.cs.yale.edu/2009/tng/papers/pfldnet10-slides.pdf

      1. 15

        It’s only dead if you follow Apple blindly into the abyss. On other phones it’s not dead yet.

        1. 13

          Not yet… Remember when you could get a smartphone with a keyboard?

          1. 10

            Those are only dead if you’re not following Blackberry blindly into the abyss.

          2. 11

            I’ll agree there, I want my phone to have a 3.5mm jack. I can’t imagine how putting the DAC at the cheap end of the equation (the earbuds) can improve quality over a simple and sturdy analog cable with a magnet on one end.

            1. 7

              Or Google… I imagine it must be hard at a third party Android device manufacturer to avoid the temptation of following the lead of the two big players.

              1. 9

                Google’s move with the Pixel was particularly shit because they made fun of Apple for getting rid of the jack, then got rid of it themselves.

                1. 2

                  I thought you were going to say something about search … I miss Yahoo/Lycos/Hotbot/Dogpile and getting different results that lead to different places. Fuck the search monoculture.

              1. 6

                Last week I finished up implementing live-streaming logs for the OfBorg build system used by Nixpkgs: https://github.com/NixOS/nixpkgs/pull/34386#issuecomment-361258104 / https://github.com/nixos/ofborg. (The self-inflated name of the bot started out as a joke / silly hacky thing and turned into something serious.) The frontend was written by another member of the NixOS community, samueldr. The backend was nicely simple to implement from RabbitMQ: STOMP.js is a delight!

                This week I’m expanding its test runner to run the VM integration tests on aarch64 (https://github.com/NixOS/ofborg/issues/36) and hopefully work on coalescing build result comments (https://github.com/NixOS/ofborg/issues/47). This requires rewriting one of the last PHP components in Rust, which I’ve been wanting to do for some time now anyway. Combining the comments should open up new, interesting opportunities like automatically sampling pieces to build per PR, and perhaps separating build logs per attribute requested, which could also open up interesting future options…

                1. 1

                  This is pretty neat! I thought, how cool, it’ll be easier to discern similarly named variables… but then:

                  Variable names with similar prefixes will be assigned similar colors.

                  Hmm… maybe not. I wonder if it would be more helpful to make similarly prefixed variables use widely different colors? This mechanism isn’t a beauty contest: I think it would be helpful for tools to highlight their differences, instead of grouping them together.

                  1. 1

                    One mechanism of sharding that I think is much simpler and easier to scale is range-based sharding. In this scenario, you’d have the shards:

                    • customers-1-100
                    • customers-101-200
                    • customers-201-300
                    • customers-301-infinity

                    Here, when you start, you can simply have customers-1-infinity, and as the database begins to reach, say, 50% capacity, cap it at customers-1-100 and then create customers-101-infinity.

                    Once the customers-1-100 shard grows to, say, 90% capacity, you can go further and simply split it into customers-1-50 and customers-51-100, using fairly simple replication topologies to do this with little to no downtime.

                    This range-based mechanism means you don’t have to preemptively guess how many shards you want to hash by.

                    Another way to do this would be to simply provision customers-1-100 and no -infinity shard, and monitor the highest customer ID you have, and preemptively provision another shard when you get “close” to the customer ID cap of the existing shard.
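
                    A sketch of the routing side of this scheme (the shard names and boundaries are hypothetical, not from any particular database):

```python
# Range-based shard routing sketch; shard names and boundaries are
# hypothetical. Shard i covers IDs in [SHARD_BOUNDS[i], SHARD_BOUNDS[i+1]);
# the last shard is open-ended.
import bisect

SHARD_BOUNDS = [1, 101, 201, 301]
SHARD_NAMES = [
    "customers-1-100",
    "customers-101-200",
    "customers-201-300",
    "customers-301-infinity",
]

def shard_for(customer_id: int) -> str:
    """Route a customer ID to its shard with a binary search."""
    i = bisect.bisect_right(SHARD_BOUNDS, customer_id) - 1
    return SHARD_NAMES[i]
```

                    Splitting a shard then just means inserting a new boundary and name, plus the replication work to move the rows.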

                    1. 9

                      Things I don’t like about Nix:

                      • People find it scary, and I would like to fix that.
                      • It has no per-project incremental build support. Chromium fails to build in the last step? Sorry, you have to start over.
                      • The command line interface is obtuse and hard to understand. Some commands work very differently from others, which leads to extremely confusing behavior. Hopefully this will be fixed in 1.12.
                      • The evaluator is a bit slow and memory inefficient, so corner cases like checking every package description across every architecture require too much RAM and CPU time.

                      Almost every other build tool:

                      • Undeclared. Dependencies.
                      • Improperly pinned dependencies without hashes, making it hard to know if the 1.0.0 you got today is the same 1.0.0 you got yesterday (hint: it isn’t always!)
                      1. 4

                        The language badly needs a type system, and the cli tools are terrible. But it’s by far the best build/configuration management system I’ve ever seen.

                        1. 2

                          Undeclared. Dependencies.

                          Could you elaborate? The problem that you easily forget dependencies in e.g. Makefiles? The problem that transitive dependencies are not specified properly in e.g. npm?

                          1. 9

                            Given he’s contrasting to Nix, I assume he’s talking about (eg) ‘you need libxml2 installed systemwide for this to build’ not being specified in a machine-readable way.

                          2. 2

                            I found it really hard to understand even though I spent many, many days reading docs/community discussions and contributed many PRs and fixes. I still don’t really understand how nix works very well lol.

                          1. 30

                            All of them:

                            The fact that they exist at all. The build spec should be part of the language, so you get a real programming language and anyone with a compiler can build any library.

                            All of them:

                            The fact that they waste so much effort on incremental builds when the compilers should really be so fast that you don’t need them. You should never have to make clean because it miscompiled, and the easiest way to achieve that is to build everything every time. But our compilers are way too slow for that.

                            Virtually all of them:

                            The build systems that do incremental builds almost universally get them wrong.

                            If I start on branch A, check out branch B, then switch back to branch A, none of my files have changed, so none of them should be rebuilt. Most build systems look at file modified times and rebuild half the codebase at this point.

                            Codebases easily fit in RAM and we have hash functions that can saturate memory bandwidth, so just hash everything and use that to figure out what needs rebuilding. Hash all the headers and source files, all the command line arguments, compiler binaries, everything. It takes less than 1 second.
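
                            The idea can be sketched in a few lines (the file names and contents below are made up; a real tool would also hash compiler binaries and flags, as described):

```python
# Sketch: decide what to rebuild from content hashes, not mtimes.
# File names and contents here are hypothetical.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def changed_inputs(current: dict, previous: dict) -> set:
    """Inputs whose hash differs from the last build's manifest."""
    return {name for name, data in current.items()
            if previous.get(name) != digest(data)}

previous = {"main.c": digest(b"int main(void) { return 0; }")}
current = {"main.c": b"int main(void) { return 1; }",
           "util.c": b"/* new file */"}
# main.c changed and util.c is new, so both need rebuilding; switching
# branches back and forth leaves the hashes (and so the rebuild set)
# unchanged, unlike mtime-based checks.
```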

                            Virtually all of them:

                            Making me write a build spec in something that isn’t a normal good programming language. The build logic for my game looks like this:

                            if we're on Windows, build the server and all the libraries it needs
                            if we're on OpenBSD, don't build anything else
                            build the game and all the libraries it needs
                            if this is a release build, exit
                            build experimental binaries and the asset compiler
                            if this PC has the release signing key, build the sign tool
                            

                            with debug/asan/optdebug/release builds all going in separate folders. Most build systems need insane contortions to express something like that, if they can do it at all.

                            My build system is a Lua script that outputs a Makefile (and could easily output a ninja/vcxproj/etc). The control flow looks exactly like what I just described.

                            1. 15

                              The fact that they exist at all. The build spec should be part of the language, so you get a real programming language and anyone with a compiler can build any library.

                              I disagree. Making the build system part of the language takes away too much flexibility. Consider the build systems in Xcode, plain Makefiles, CMake, MSVC++, etc. Which one is the correct one to standardize on? None of them, because they’re all targeting different use cases.

                              Keeping the build system separate also decouples it from the language, and allows projects using multiple languages to be built with a single build system. It also allows the build system to be swapped out for a better one.

                              Codebases easily fit in RAM …

                              Yours might, but many don’t and even if most do now, there’s a very good chance they didn’t when the projects started years and years ago.

                              Making me write a build spec in something that isn’t a normal good programming language.

                              It depends on what you mean by “normal good programming language”. Scons uses Python, and there’s nothing stopping you from using it. I personally don’t mind the syntax of Makefiles, but it really boils down to personal preference.

                              1. 2

                                Minor comment: the codebase doesn’t need to fit into RAM for you to hash it. You only need to store the current state of the hash function and can process files X bytes at a time.

                              2. 14

                                When I looked at this thread, I promised myself “don’t talk about Nix” but here I am, talking about Nix.

                                Nix puts no effort in to incremental builds. In fact, it doesn’t support them at all! Nix uses the hashing mechanism you described and a not terrible language to describe build steps.

                                1. 11

                                  The build spec should be part of the language, so you get a real programming language and anyone with a compiler can build any library.

                                  I’m not sure if I would agree with this. Wouldn’t it just make compilers more complex, bigger and error prone (“anti-unix”, if one may)? I mean, in some cases I do appreciate it, like with go’s model of go build, go get, go fmt, … but I wouldn’t mind if I had to use a build system either. My main issue is the apparent nonstandard-ness between for example go’s build system and rust’s via cargo (it might be similar, I haven’t really ever used rust). I would want to be able to expect similar, if not the same structure, for the same commands, but this isn’t necessarily given if every compiler reimplements the same stuff all over again.

                                  Who knows, maybe you’re right and the actual goal should be to create a common compiler system that interfaces with particular language definitions (isn’t LLVM something like this?), so that one can type compile prog.go, compile prog.c and compile prog.rs and know to expect the same structure. Would certainly make it easier to create new languages…

                                  1. 2

                                    I can’t say what the parent meant, but my thought is that a blessed way to lay things out and build should ship with the primary tooling for the language, but should be implemented and designed with extensibility/reusability in mind, so that you can build new tools on top of it.

                                    The idea that compilation shouldn’t be a special snowflake process for each language is also good. It’s a big problem space, and there may well not be one solution that works for every language (compare javascript to just about anything else out there), but the amount of duplication is staggering.

                                    1. 1

                                      Considering how big compilers/stdlibs are already, adding a build system on top would not make that much of a difference.

                                      The big win is that you can download any piece of software and build it, or download a library and just add it to your codebase. Compare with C/C++, where adding a library is often more difficult than writing the code yourself, because you have to figure out their (often insane) build system and integrate it with your own, or figure it out, then ditch it and replace it with yours.

                                    2. 8

                                      +1 to all of these, but especially the point about the annoyance of having to learn and use another, usually ad-hoc programming language, to define the build system. That’s the thing I dislike the most about things like CMake: anything even mildly complex ends up becoming a disaster of having to deal with the messy, poorly-documented CMake language.

                                      1. 3

                                        Incremental build support goes hand in hand with things like caching type information, extremely useful for IDE support.

                                        I still think we can get way better at speeding up compilation times (even if there are always edge cases), but incremental builds are a decent target for making compilation a bit more bearable, in my opinion.

                                        Function hashing is also just part of the story: C has things like inlining, and languages like Python allow order-dependent behavior that goes beyond code equality. Though I really think we can do way better on this point.

                                        A bit ironically, a sort of unified incremental build protocol would let compilers avoid incremental builds and allow for build systems to handle it instead.

                                        1. 2

                                          I have been compiling Chromium a lot lately. That’s 77,000 mostly C++ (and a few C) files. I can’t imagine going through all those files and hashing them would be fast. Recompiling everything any time anything changes would probably also be way too slow, even if Clang were fast and didn’t average three files per second.

                                          1. 4

                                            Hashing file contents should be disk-io-bound; a couple of seconds, at most.

                                            1. 3

                                              You could always do a hybrid approach: do the hash check only for files that have a more-recent modified timestamp.
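
                                              A sketch of that hybrid check (the manifest format here is hypothetical): trust the stored hash when the mtime is unchanged, and re-hash only files touched since the last build.

```python
# Hybrid mtime + hash staleness check; the manifest format is
# hypothetical. manifest maps path -> (mtime, content_hash).
import hashlib

def stale_files(path_mtime: dict, path_data: dict, manifest: dict) -> set:
    stale = set()
    for path, mtime in path_mtime.items():
        entry = manifest.get(path)
        if entry is None:
            stale.add(path)            # new file: must build
            continue
        old_mtime, old_hash = entry
        if mtime == old_mtime:
            continue                   # mtime unchanged: trust stored hash
        # mtime changed: re-hash to see if the content really differs
        if hashlib.sha256(path_data[path]).hexdigest() != old_hash:
            stale.add(path)
    return stale
```

                                              A branch switch that touches mtimes but not content then costs only a re-hash, not a rebuild.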

                                            2. 1

                                              Do you use xmake or something else? It definitely has a lot of these if cascades.

                                              1. 1

                                                It’s a plain Lua script that does host detection and converts lines like bin( "asdf", { "obj1", "obj2", ... }, { "lib1", "lib2", ... } ) into make rules.
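
                                                The expansion step might look something like this (a hypothetical Python analogue; the commenter’s actual script is Lua, and the link flags here are made up):

```python
# Expand a bin(name, objects, libs) declaration into a make rule.
# Hypothetical analogue of the commenter's Lua script; the recipe
# and flags are made up for illustration.
def bin_rule(name: str, objects: list, libs: list) -> str:
    objs = " ".join(o + ".o" for o in objects)
    ldlibs = " ".join("-l" + lib for lib in libs)
    # make rules are "target: prerequisites" plus a tab-indented recipe
    return f"{name}: {objs}\n\t$(CC) -o {name} {objs} {ldlibs}\n"

print(bin_rule("asdf", ["obj1", "obj2"], ["lib1", "lib2"]))
```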

                                              2. 1

                                                Codebases easily fit in RAM and we have hash functions that can saturate memory bandwidth, just hash everything and use that to figure out what needs rebuilding. Hash all the headers and source files, all the command line arguments, compiler binaries, everything. It takes less than 1 second.

                                                Unless your build system is a daemon, it’d have to traverse the entire tree and hash every relevant file on every build. Coming back to a non-trivial codebase after the kernel has stopped caching its files will mean a lot of file reads, which are typically slow on an HDD. Assuming everything is on an SSD is questionable.

                                              1. 3

                                                I’ve had great luck with PC Engines’ APU2 platform: http://pcengines.ch/apu2.htm. It may not tick all the boxes you need, but it’s perhaps worth looking into.

                                                1. 12

                                                  Sounds like they spent their bug bounty budget paying hush money on their data leak.

                                                  1. 9

                                                    The blog post is one-sided, and the comment from @andybons seems like every upvote is more about anti-Uber signaling than anything substantial.

                                                    But maybe they did spend their bounty budget on paying hush money, yeah. Sure.

                                                  1. 1

                                                    FCC “discusses” Net Neutrality

                                                    1. 1

                                                      My original title was

                                                      FCC “Debates” Net Neutrality

                                                      but didn’t want to get downvoted for editorializing in the title :)

                                                    1. 6

                                                      Congratulations, crew. This is a ton of work! NixOS has been working toward this for the last year as well. Nicely done.

                                                      1. 9

                                                        Using Nix to distribute RhodeCode was the best decision we made for how we distribute our software. It removed all the problems we had with Python packaging when we were just using virtualenv. I think the key benefit is that our installation survives system upgrades, whereas we had almost a 40% failure rate in that case with the previous virtualenv-based installer.

                                                        1. 5

                                                          Would you be interested in doing a whitepaper?

                                                          1. 3

                                                            Do you instruct end users to use Nix, or do you have some custom installer that somehow hides Nix from the users?

                                                            1. 5

                                                              We abstract it. End users just run a single binary that creates a full Nix env and unpacks all dependencies into a custom Nix store.

                                                              A nice thing is that on upgrades we still have the full tree of old dependencies, so, for example, rollback to the previous release is just swapping a symlink.

                                                              1. 1

                                                                This reminds me of the guix pack command: https://www.gnu.org/software/guix/blog/2017/creating-bundles-with-guix-pack/

                                                                Would something like that (e.g. ported to Nix rather than Guix) make life easier? I imagine that including Nix itself in the artefacts you distribute would make things quite bloated. (I’ve never used Guix, but I’m a big Nix fan :) )

                                                          1. 2

                                                            I am still a bit confused by the Nix vs. Guix thing. Not that I am against having two similar projects per se, but I don’t know.

                                                            1. 11

                                                              Guix took parts of the Nix package manager and retooled them to use Scheme instead of the Nix language. This works because the Nix build process is based off a representation of the build instructions (“derivations”) and has nothing to do with the Nix language. Guix is also committed to being fully FOSS, whereas the Nix ecosystem will accept packages for any software we can legally package.

                                                              1. 9

                                                                Also there is of course accumulated divergence as people with some particular idea happen to be in the one community and not the other.

                                                                Nix has NixOps and Disnix, but there still is no GuixOps.

                                                                On the other hand I believe the guix service definitions are richer, and guix has more importers from programming-language-specific package managers, but then on the third hand the tooling for Haskell and Node in particular is better in nix.

                                                                Nix supports OSX, FreeBSD and cygwin, guix supports the Hurd.

                                                            1. 10

                                                              I love Rust, and I know this is gonna get the whole “Python is boring. STFU” crowd down on me, but I’m honestly not sure that Rust’s level of abstraction is ideal for the vast majority of devops tasks. Sure, there are plenty of performance-intensive cases where Rust could really shine, but I think languages like Python and Go (which I have the same abstraction issue with, FWIW, but it’s at least a bit higher up the stack AFAICT) may retain the advantage for a while, at least until a set of very solid libraries to perform common tasks becomes mature and stable.

                                                              1. 7

                                                                I definitely agree that often, working at a higher level of abstraction can be more useful. One of the things that I love about Rust is that it doesn’t have to be either/or; for example, the new Conduit tool uses Rust at its core, but with Go layered on top.

                                                                https://www.reddit.com/r/programming/comments/7hx3lk/the_rise_of_rust_in_devops/dqut2cl/ is an interesting thread developing where some people are talking about why they would or wouldn’t choose Rust here; I think there’s many viable answers!

                                                                1. 3

                                                                  Surely there’s some middle ground between Rust’s “we’re going to use a language designed to minimize runtime costs for a task that is inherently IO-bound” and Python/Go’s “we’re going to basically throw out types”.

                                                                  1. 2

                                                                    What about something like mypy?

                                                                  2. 2

                                                                    Can you say more about in what ways you feel rust is too low in the stack compared to Python? One reason I like Rust a lot is I can easily make great higher level abstractions in my programs, and still retain the safety around types and borrowing. I’ve been bitten too many times with bad concurrency implemented in “simple devops scripts” to want to return to that world.

                                                                    1. 5

                                                                      I think if you’re doing concurrency then traditional very high level procedural languages like Ruby and Python are a very poor choice.

                                                                      In concurrent applications, any of the abstraction complaints I might have with Rust fade away because you MUST think about things like memory allocation and structure in the concurrent problem space.

                                                                      This is the danger in (my) speaking in generalities. In my 25 years of doing infrastructure work, I have yet to encounter a problem that truly demands a solution involving concurrency. I recognize that this is merely anecdotal evidence, but be that as it may, I prefer to work in languages that handle the details of memory allocation for me, because in every use case I’ve thus far encountered, that level of performance is Good Enough.

                                                                      That said, a couple of examples of aspects of Rust I would feel are in the way for the kind of work I mostly need to do:

                                                                      • Pointers and dereferencing / destructuring
                                                                      • The ownership and borrowing memory model

                                                                      I am not a Rust expert, but I read and worked through the entire Rust book 1.0 a couple of years back, which left me with a deep abiding respect for the power of Rust. I just don’t feel that it’s suited to the kinds of work I do on a day to day basis.

                                                                      1. 1

                                                                        In my 25 years of doing infrastructure work, I have yet to encounter a problem that truly demands a solution involving concurrency.

                                                                        I have, though I’ve generally still used python to handle it (the multiprocessing standard library is super-handy). My use cases have been simple, though. All of them can boil down to: process a big list of things individually, but in parallel, and make sure the result gets logged. No need for memory sharing, just use a queue (or even split the list into X input files for X processes).
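
                                                                        That pattern is short enough to sketch; process_item and the inputs are stand-ins for the real per-item work:

```python
# "Big list, in parallel, log every result" with the stdlib's
# multiprocessing.Pool; process_item is a stand-in for real work.
from multiprocessing import Pool

def process_item(item: int) -> int:
    return item * item                # placeholder per-item work

def run(items):
    # Pool.map fans the list out across worker processes and
    # gathers results in input order.
    with Pool(processes=4) as pool:
        results = pool.map(process_item, items)
    for item, result in zip(items, results):
        print(f"{item} -> {result}")  # stand-in for real logging
    return results
```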

                                                                        That said, I don’t think rust would be a bad language to handle those problems. In fact, python might have been a terrible one. Just wanted to say that even workloads that require concurrency often end up very simple.

                                                                  1. 16

                                                                    I’m not a devops guy, but I use Nix for this sort of thing in my personal projects.

                                                                    A less invasive approach might be Dhall, which I’ve not used but AFAIK you can write your config with functions, etc. and run a command to “compile” it into e.g. a JSON file.

                                                                    1. 7

                                                                      I can vouch for Nix as well; it can do exactly what OP needs. Nix is a lazy, functional language pretty much made for writing configuration. Specifically, Nix can solve their problem because:

                                                                      • It can build output paths, which can, for example, contain a config generated from Nix along with the parameters used.
                                                                      • It can fetch Nix expressions from a repository and evaluate them, which lets you parameterize your repos.
                                                                      • Nix has lists and attrsets (records), which represent structured data.
                                                                      • It makes it very easy to deal with environment variables.

                                                                      Edit: As an interesting example, check out my ssh keys NixOS configuration, which sets up an nginx virtual host for all my public keys (declared in another Nix file), including an automatically renewing Let’s Encrypt certificate. The function I wrote to convert an attrset to a folder structure is here (I should probably make a PR to add it to nixpkgs).

                                                                      Nix has so much more to it though, there’s NixOS built on Nix with an insanely powerful module system, there’s NixOps for deploying NixOS machines. And most importantly nixpkgs and the Nix package manager which builds packages in a reproducible fashion and enables a lot of stuff you wouldn’t even have thought of. For the interested I can recommend dropping in #nixos on Freenode.

                                                                      1. 4

                                                                        Thumbs up. Dhall looks esoteric, but actually seems like an amazing solution for making statically typed config generators.

                                                                        1. 2

                                                                          Both Nix and Dhall look neat, but for ops the last thing we want is more esoteric things. Life is hard enough using normal tools!

                                                                          1. 3

                                                                            I don’t think you understand the point of it, sometimes you need to step outside of the normal things to make life simple again.

                                                                            1. 3

                                                                              I’d recommend considering that the reason life is so hard with normal tools is because they’re all trying to solve the same problem in the same fundamental way. Trying new tools that work the same way is going to ultimately end up in the same painful position all the other ones have lead us to. This is why I recommend trying Nix.

                                                                              1. 1

                                                                                I totally agree…

                                                                                I recently shifted my desktop to nix, it is fundamentally different, and solves all these problems better. I was experimenting with broken configs, and could revert the whole OS atomically via grub. That is power I haven’t seen anywhere else, and yet I didn’t even need to know how it works, it was just an option presented to me when I booted.

                                                                                My next set of servers is going to be nixops, or something directly inspired by it.

                                                                              2. 1

                                                                                I think it’s relative: I think if you’re using Chef/Puppet/Ansible/Docker/etc. then Nix is just another alternative, which IMHO is cleaner. It’s got a decent community, commercial users, etc. so if you’re thinking about adopting one of these technologies, Nix is definitely worth a shot and I wouldn’t consider it esoteric at all.

                                                                                The problem with Nix for a situation like this is that it’s quite heavyweight; just like switching a server over to Puppet or something is also a heavyweight solution to the problem of “put different values in these config files”. I would say Dhall is certainly esoteric, but I think it’s worth a look for solving this problem since it’s so lightweight: it’s just a standalone command which will spit out JSON. In that sense, I think it’s a more reliable alternative to hand-rolled scripts, which wouldn’t require any extensive changes to implement (call it from bash, like anything else; use the JSON, like anything else).
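To make the lightweight workflow concrete: one standalone step renders the config to JSON, and any script can consume it from there. A minimal sketch of the consumption side, where the JSON string stands in for the output of a rendering step like `dhall-to-json` (the file keys and values here are made up for illustration):

```python
import json

# Stand-in for the output of a config-rendering step, e.g. the JSON that
# `dhall-to-json < config.dhall` would emit (hypothetical file and keys).
rendered = '{"host": "db.internal", "port": 5432, "pool_size": 10}'
config = json.loads(rendered)

# "Put different values in these config files" with no heavyweight tooling:
conf_file = "\n".join(f"{key} = {value}" for key, value in config.items())
print(conf_file)
```

No agent, no daemon: the JSON is produced once and then used from bash, Python, or anything else, like any other file.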

                                                                            1. 4

                                                                              As a Comcast customer, I feel this project is /perfectly/ named!

                                                                            1. 1

                                                                              Even better, the “password” will be stored by regular autocomplete!

                                                                              Of course, it’d be much more of a pain to think of and implement this hack instead of, you know, applying TLS. I don’t think you’d be judged for using even Let’s Encrypt if your budget simply doesn’t allow for certificates.

                                                                              1. 2

                                                                                Is there something about LE that makes it “less than”, something a business would be judged for?

                                                                              1. 2

                                                                                Why would I want to go back to the dark ages of managing my own dependencies and updates and updates to my dependencies? Sounds like a nightmare.

                                                                                1. 2

                                                                                  First, there were the dark ages of managing your own dependencies. Then came the new age, where people began relying on package managers and upstreams to handle dependencies for them. This was revealed as a false dawn: package managers and upstreams are no better, and frequently far worse, at managing your dependencies than you are. Now we are tentatively exploring other alternatives, in which it is possible to lock down dependencies. This is probably not working either. Simple solutions may not be forthcoming.

                                                                                  1. 4

                                                                                    Nix gets this right by allowing multiple versions of a package to coexist. In retrospect, it’s kind of unfortunate that the Nix community put their energy into making a Linux distribution based on it rather than trying to get it adopted as a base package manager for more use cases.
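The coexistence trick is easy to sketch: every package lives under a path derived from a hash of its name, version, and build inputs, so two versions can never collide on disk. A toy illustration of the idea (not Nix’s actual hashing scheme; the names below are made up):

```python
import hashlib

def store_path(name: str, version: str, inputs: tuple) -> str:
    """Toy store path in the spirit of /nix/store/<hash>-<name>-<version>."""
    digest = hashlib.sha256(repr((name, version, inputs)).encode()).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}-{version}"

# Two versions of the same library get distinct, non-conflicting paths,
# so anything depending on 1.1.1 keeps working after 3.0.8 is installed.
old = store_path("openssl", "1.1.1", ("gcc-12",))
new = store_path("openssl", "3.0.8", ("gcc-12",))
assert old != new
```

Because the hash also covers build inputs, the same inputs always map back to the same path, which is what makes installs reproducible.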

                                                                                    1. 1

                                                                                      What sort of use cases are you imagining?

                                                                                      1. 5

                                                                                        Every language has its own package manager and, IMO, they all mostly suck. If I can’t get my one big universal package manager based on Nix, I wish at least pip, opam, maven, etc. were based on Nix.

                                                                                        1. 1

                                                                                          Yeah. I try as much as I can to stick to packages in my OS unless I really want to play with bleeding-edge libraries for something :)

                                                                                        2. 2

                                                                                          To be clear, I’m saying I wish Nix was the base technology these language-specific package managers were implemented on, even if they have their own repositories and UIs and all that.

                                                                                        3. 1

                                                                                          The way Nix solves this is great, but most good OSes also allow multiple major versions to coexist when necessary; it’s just not treated as the major, core feature that Nix has made it.

                                                                                          1. 1

                                                                                            This is based on the assumption that the different minor and patch versions are compatible, or at least backwards-compatible.

                                                                                            Nix is built on the observation that they can’t be guaranteed to be.
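The two philosophies can be contrasted in a few lines (a toy resolver, not any real tool’s API): a range-based resolver treats every patch release as interchangeable, while a pin-based one only accepts the exact version that was tested.

```python
def resolve_range(available, major, minor):
    """Traditional assumption: any 1.2.x is compatible, so take the newest."""
    return max(v for v in available if v[:2] == (major, minor))

def resolve_pinned(available, exact):
    """Nix-style assumption: only the exact tested version is known-good."""
    return exact if exact in available else None

available = [(1, 2, 3), (1, 2, 9), (1, 3, 0)]
assert resolve_range(available, 1, 2) == (1, 2, 9)       # silently upgraded
assert resolve_pinned(available, (1, 2, 3)) == (1, 2, 3) # exactly what was tested
```

The range resolver is convenient until a “compatible” patch release isn’t; the pinned resolver never surprises you, at the cost of someone having to bump versions explicitly.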

                                                                                        4. 2

                                                                                          At least when an OS handles the dependencies you get a single set installed that works for all apps instead of whatever I as the developer thought worked on my machine.

                                                                                          But I agree, nothing is simple and someone always has to do the work.

                                                                                          1. 1

                                                                                            you, the developer, are more right than everyone else is about what version of dependencies are appropriate for your software. This is further true for each and every developer of each and every software. The best an OS can do is try to make things safe for everyone generically, which constantly causes dependency-version-fighting between bundles, forced upgrades, and the decoupling of the dependency from its original use.

                                                                                            1. 0

                                                                                              you, the developer, are more right than everyone else is about what version of dependencies are appropriate for your software.

                                                                                              Except when you’re not. Lots of developers publish dozens of packages and can’t really be bothered to keep track of all dependencies and any security updates.

                                                                                        5. 1

                                                                                          There is an AppImage daemon that is supposed to update AppImages if they embed a special manifest.

                                                                                          What I personally like about AppImage is that it’s much simpler than using the package manager: download a file and run it.

                                                                                          On the other hand, yes, it’s true that with AppImage you don’t get all those nice updates of dependencies being shoved into everything you run on your computer. But Docker has stopped that trend nicely anyway, at least on the server and when heavily relied on.

                                                                                        1. 4

                                                                                          I hope he is going to stop talking about it soon.

                                                                                          Love these testimonials!

                                                                                          But seriously, it looks good. SVN is great for this sort of thing.

                                                                                          1. 17

                                                                                            Denial of service seems like a better description than privilege escalation. In the taxonomy of bad stuff, the latter usually implies getting to do something more interesting than halt.

                                                                                            1. 4

                                                                                              the fact that we found this accidentally and that the behavior is exactly what you’d expect if there were no permissions check for the kill call at all leads us to believe that there is likely more that can be done to exploit this issue

                                                                                              There’s nothing found yet, but it does give cause for some concern that the means of denying service is what appears to be escalation.

                                                                                              1. 4

                                                                                                I was part of the team helping Shea to research and disclose the issue. One key finding was in the logs we saw <unprivileged user> killed <privileged process>, indicating that we hadn’t tripped just a crashing bug, but actually escalated beyond the normal access control protections of kill.

                                                                                                1. 9

                                                                                                  Privilege escalation is when you increase the abilities of the attack code to do what a higher-privileged account or process can do in arbitrary ways. This includes opening, modifying, and/or destroying resources. Merely terminating a resource is a Denial of Service (DoS) attack on that resource. The title is wrong.
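For context, the access control being debated is kill(2)’s permission check: an unprivileged process normally gets EPERM when it tries to signal a privileged one. A small probe, using the convention that signal 0 performs only the permission check and delivers nothing:

```python
import os

def can_signal(pid: int) -> str:
    """Probe kill(2) permissions with signal 0, which delivers no signal."""
    try:
        os.kill(pid, 0)
        return "allowed"             # kernel would let us signal this process
    except PermissionError:
        return "eperm"               # the permission check refused us
    except ProcessLookupError:
        return "no-such-process"

# A process can always signal itself:
assert can_signal(os.getpid()) == "allowed"
# Run as a non-root user, probing PID 1 should return "eperm"; a log line
# like "<unprivileged user> killed <privileged process>" means that check
# was bypassed, which is why it smells like more than a plain DoS.
```

Whether bypassing that check generalizes beyond SIGKILL is exactly the open question in the disclosure above.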

                                                                                                  1. 3

                                                                                                    Using Privilege Escalation instead of DoS in the title is still misleading. Most people assume that something marketed as Privilege Escalation leads to at the very least reading or writing resources owned by root. I can already kill privileged processes by running shutdown (I know that’s not the point, but killing all of the system’s processes is still far from running code as root).