1.  

    Or even unikernel on bare metal!

    I think that what we often ask for when we use a hypervisor is an array of similarly manageable machines that we can spin up or stop.

    It would be interesting to get unikernels spawned via PXE boot on a couple of managed hosts. Bare metal.

    Spawn a unikernel on hardware like one would spawn a process. More, smaller machines.

    No vertical scaling though…

    1.  

      What’s the point of doing this? The best I could come up with is some extremely weak argument about how replication == good for backup purposes. But I doubt the stable trees are at any risk of disappearing if kernel.org were to be victim of ransomcrap.

      1.  

        Oh no, no, no, it’s the other “weightier” stuff. It’s things that make sense in engineering in general. Like, a function that does one thing and doesn’t affect surroundings in unexpected ways is as good a thing in Java, as it is in Scheme. Or, say, the idea that you need a queue in between producers and consumers. Or various implications of forks vs. threads vs. polling loops. Or understanding why you can’t parse HTML with a regex.

        Knowledge like this weighs more than knowing how to sort an array in your current language/framework.
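
        One of those weightier ideas, the queue between producers and consumers, can be sketched in a few lines of Python. This is a toy sketch using the standard library's bounded `queue.Queue`, not any particular framework; the doubling step just stands in for "work":

        ```python
        import queue
        import threading

        def producer(q, items):
            for item in items:
                q.put(item)       # blocks when the queue is full, throttling the producer
            q.put(None)           # sentinel: tell the consumer we are done

        def consumer(q, results):
            while True:
                item = q.get()
                if item is None:  # sentinel seen: stop consuming
                    break
                results.append(item * 2)

        q = queue.Queue(maxsize=8)  # a bounded queue decouples the two rates
        results = []
        t1 = threading.Thread(target=producer, args=(q, range(5)))
        t2 = threading.Thread(target=consumer, args=(q, results))
        t1.start(); t2.start()
        t1.join(); t2.join()
        print(results)  # [0, 2, 4, 6, 8]
        ```

        The bounded `maxsize` is the point: a fast producer cannot flood a slow consumer, and neither side needs to know anything about the other's timing.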

        1.  

          Threads! Maybe there is some way to limit privileges per thread (I’ll read the docs). Then, while it is still the same memory space, one thread would have no network access, another no storage access. Whoops, am I baking thread-ish processes? Not the right approach for unikernels…

          I’ll think again…

          1.  

            Is it not easier to debug when the source is fully available? *hum hum* ;)

            1.  

              Nice to know there is some privilege structure that remains. The article linked talks for itself.

              But I get the idea: no one will ever log (or break) into the kernel, and an attacker will have to think again (I still don’t consider SSH with a strong key and no password a large threat).

              It looks like a reversed approach: instead of dropping privileges as the program goes (changing to a different user than root, chroots, pledge, capabilities, read-only filesystems…), do not expose the vulnerable resources in the first place: filesystem privileges can be implemented by a fileserver into which the unikernel logs in, or that only exposes (or holds at all) the data the application is allowed to access.

              But there are many cases where there is a need to parse complex strings (risky), then do something, then send the result back.

              To mitigate the risk, it is possible to split the job into multiple processes: one that parses, one that does the work and sends it back, so that the risky job of parsing the user input gets done in a separate memory space and process from the one that has full access to the protected data.
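
              The pipe()/fork() split described above might look like this on a Unix system. A minimal Python sketch only: the JSON payload and field names are made up for illustration, and a real version would also drop privileges (setuid, chroot, pledge…) in the child before parsing:

              ```python
              import json
              import os

              def risky_parse(raw):
                  # Parsing the untrusted input happens only in the child process.
                  return json.loads(raw)

              untrusted = b'{"user": "alice", "action": "read"}'

              r, w = os.pipe()            # the pipe carries the parsed result back
              pid = os.fork()
              if pid == 0:                # child: parse, write the result, exit
                  os.close(r)
                  parsed = risky_parse(untrusted)
                  os.write(w, json.dumps(parsed).encode())
                  os.close(w)
                  os._exit(0)
              else:                       # parent: the process with access to protected data
                  os.close(w)
                  chunks = []
                  while chunk := os.read(r, 4096):
                      chunks.append(chunk)
                  os.close(r)
                  os.waitpid(pid, 0)
                  result = json.loads(b"".join(chunks))
                  print(result["user"])   # alice
              ```

              If the parser is compromised by a malicious input, it can only corrupt its own copy of the data and the bytes it writes into the pipe, which the parent still has to treat as untrusted-ish but structurally constrained.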

              How to do this with unikernels? Many tiny unikernels communicating over the network rather than a pipe()/fork()? Then there is a need for strong firewalling between the two, or for implementing authentication and crypto for every pipe replaced. This adds complexity and TCP overhead, and requires more configuration so that the two can point at each other.

              Yet it permits a much more distributed system where it is possible to have a lot of distributed parsers and a few workers or the opposite.

              1.  

                I wonder what this will do to Valve/Steam, which includes a LOT of ubuntu-built 32-bit libraries in the Steam runtime to support games.

                Luckily, for me at least on Arch Linux, forgoing the Steam runtime entirely and using 32-bit distro libraries seems to work quite well.

                1.  

                  That’s clever. I was so pleased when I went into readability mode and it actually worked!

                  1.  

                    I’m seeing a lot of blind-leading-the-blind posts on here in the last couple of days. Flagging these as spam is taking far too long to have them removed.

                    1.  

                      Oh great, another currency. I’m so excited. The world really needed facebux to solve its problems.

                      At least it’s not PoW I guess.

                      1.  

                        This is something different again - you can’t learn “the weightier shit” unless you choose a language or two and stick there. Humans, even the most intelligent ones, can only learn so many things at once.

                        So, my point stands. Choose a subset of tools and go deep. Going deep means, in addition to mastery of that particular tool or language, that you learn “the weighty shit” :)

                        1.  

                          And how might one attain the level of depth needed to do a given job if one spends one’s time chasing bright shiny new languages and tools?

                          You’re right, we’re speaking in generalities, and I’m sorry my use of the word MUST seems to have triggered you, but my general point still holds even if you downgrade the word in question to, say, a lowercase ‘need to’.

                          1.  

                            That’s an intentional part of the CSS. I wanted it to look hackery.

                            1.  

                              Anyone used dynamic programming in the real world or know of libraries that have this kind of algos implemented?

                              Dynamic programming algorithms are at the core of many, many bioinformatics and computational biology solutions (DNA sequencing, the human genome, etc.). One of the earliest was Needleman-Wunsch (circa 1970). Local alignments, global alignments, and different cost structures all took off from the same basic trick.
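
                              As a concrete illustration (a textbook sketch, not a production aligner), the Needleman-Wunsch global alignment score can be computed with a simple DP table:

                              ```python
                              def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
                                  """Global alignment score via the classic DP table."""
                                  n, m = len(a), len(b)
                                  # score[i][j] = best score aligning a[:i] with b[:j]
                                  score = [[0] * (m + 1) for _ in range(n + 1)]
                                  for i in range(1, n + 1):
                                      score[i][0] = i * gap           # align a[:i] against nothing
                                  for j in range(1, m + 1):
                                      score[0][j] = j * gap           # align b[:j] against nothing
                                  for i in range(1, n + 1):
                                      for j in range(1, m + 1):
                                          diag = score[i - 1][j - 1] + (
                                              match if a[i - 1] == b[j - 1] else mismatch)
                                          score[i][j] = max(diag,
                                                            score[i - 1][j] + gap,   # gap in b
                                                            score[i][j - 1] + gap)   # gap in a
                                  return score[n][m]

                              print(needleman_wunsch("GATTACA", "GCATGCU"))  # 0
                              ```

                              The whole family of alignment algorithms (Smith-Waterman for local alignments, affine gap penalties, and so on) is this same table with different boundary conditions and cost terms.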

                              1.  

                                Microkernels. Micro services. Micro front ends. I have concluded that micro things are all hype and best avoided. Reasonable rule?

                                1.  

                                  This is wrong. All of my libraries work properly and will continue to work properly. Don’t assume, merely because some libraries aren’t written well, that a significant number of them are broken. I’m inclined to believe most of the libraries are written by competent programmers and according to the standard.

                                  The last library you shared (the alternative to uiop:quit) is most definitely not written in portable Common Lisp, so, as /u/cms points out, the implementations may change their APIs and the code would need to be updated.

                                  1.  

                                    if using the old sbcl that comes with debian

                                    The problem is more likely due to the fact that you are using the version packaged by Debian than to your SBCL being old. You should avoid all Lisp software packaged by Linux distributions; it tends to give you nothing but trouble.

                                    However, it is true that not all Lisp code is portable, especially with the implementation-compatibility shims that are becoming more common. And while one is likely to encounter code that uses implementation-specific extensions, there tends to be a fallback for when the feature is not available. As a data point, I’ve loaded software from before ASDF (that used MK:DEFSYSTEM) with few modifications.

                                    1.  

                                      Thanks very much. Super interesting, I’ll need to read that in depth.

                                      Kind of a pity he didn’t also leak a query log; it would be interesting to see just how narrow-band the average XKeyscore search was :)

                                      1.  

                                        I still remember the promise bitcoin made: “no more goobermint stealing your money to bail out the banks!!!!”.

                                        Megacorps will soon be real. And to think this one got started from a bunch of teenagers trying to find out who’s single so they can smash.

                                        What a great timeline this is.

                                        1.  

                                          Just an FYI for those who might not know: functools.lru_cache in the standard Python library provides easy memoization in Python. Caching can be messy, but if you’re working on simple scripts or something like analytics, this is an easy win (passing maxsize=None to the decorator makes the cache unbounded).
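
                                          For example, with the usual naive Fibonacci as the toy workload:

                                          ```python
                                          from functools import lru_cache

                                          @lru_cache(maxsize=None)  # unbounded cache: plain memoization
                                          def fib(n):
                                              return n if n < 2 else fib(n - 1) + fib(n - 2)

                                          print(fib(100))           # returns instantly; exponential time without the cache
                                          print(fib.cache_info())   # hit/miss stats for the decorated function
                                          ```

                                          fib.cache_clear() resets the cache if the underlying data can change between runs.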