1.  

    Dunno how I feel about this. I don’t run NextCloud yet but I’ve been considering it, and my experiences running a Mastodon instance for a few months left me unwilling to try again. Maybe they’ve scaled the requirements, footprint and admin surfaces back to a sane level.

    1. 7

      Actually this is an ActivityPub implementation, not a Mastodon one, which makes this a little more interesting.

      By using the popular ActivityPub standard, Nextcloud users can subscribe to and share status updates with users in the so-called ‘fediverse’, an interconnected and decentralized network of independently operated servers!

      Mastodon is probably the best-known implementation of the ActivityPub protocol, but there are actually a bunch of federated applications built around ActivityPub. For example, there are also:

      • PeerTube (YouTube-ish)
      • PixelFed (Flickr/imgur-ish)
      • Diaspora (Facebook-ish)

      One of the cool things is that because all of these applications use the same federated publishing protocol, they can federate with each other. I can reply to a thread on PeerTube from Mastodon, and PeerTube will understand it as a reply and display it as such; or you can publish an album on PixelFed and I can see it as a posted album in my Diaspora feed.

      1.  

        I didn’t realize that Diaspora had joined the Fediverse! Good on them!

      2.  

        running nextcloud is pretty easy with docker

        1.  
          1.  

            Running it might be easy; administering it and keeping it running never is. This goes for pretty much every piece of server software out there.

          2.  

            The post doesn’t seem to say either way whether it’s a reskinned Mastodon server or an independent reimplementation, does it?

            1.  

              No, it doesn’t give much detail at all. If it’s a reskinned stock Mastodon server, that’s a hefty chunk of infrastructure required to run the thing (PostgreSQL, Redis for Sidekiq, etc.) and a lot of under-the-hood complexity to go wrong.

              I have mad respect for Eugen and the work they’re doing, but if it is in fact a stock Mastodon server, I’m out. I’m not a Ruby on Rails hacker and don’t have time to become one, and my installation hosed itself pretty hard.

              1.  

                There’s always Pleroma if you want lightweight ActivityPub.

                1.  

                  Pleroma is lightweight, but its upgrade story and, to a lesser extent, its installation story are… putting it kindly, lightly sketched out :)

                  You have to want to become an Elixir/Phoenix hacker if you really want to run a Pleroma instance with confidence. Not that that’s a bad thing at all, mind, but you should be aware of it before you sign up.

                  At least that was the case a few months ago when my Mastodon instance ate itself.

              2.  

                Looking at the source code, it looks like a PHP backend, like the rest of Nextcloud, with a Vue.js frontend.

                From a cursory inspection it doesn’t look like they’re running all the infra necessary to run a full Mastodon node.

                I suspect, but don’t know, that you’re actually just using their app to federate from one of the instances they’re running behind the curtain, but again I have no bulletproof evidence of that.

                1.  

                  ActivityPub is an open standard with some lightweight implementations (Mastodon is not one of them). From my cursory look at the source, I think this is a full ActivityPub-compatible server.

            1.  

              I saw this and got excited for a minute thinking that the editor is programmable in Rexx :)

              1.  

                Mostly working on an Exercism exercise involving leap year detection, in C++ since that’s what I’m working on learning. I’ve implemented it the simple-minded way using 3 conditionals just fine, but my mentor is guiding me towards a more elegant solution involving a single complex Boolean conditional instead.

                It’s exactly the kind of hard work I need to be doing to up my game, and while I’m kind of embarrassed to admit how hard I’m finding it to come to a working solution, there’s only one way to build up a skill or a muscle: just keep working it, intelligently, guided by someone who’s an expert.

                I’m almost there. I think I’m misunderstanding the stated formula; right now the only test that’s failing is the year 1900.

                Also slowly moving through Advent of Code because most of my limited brain power is going to the above :)

                1. 6

                  This feels like a poster child for the knock-on effects of a sizable increase in complexity. I get why distros are switching to systemd, it offers some very real benefits, but exploits like this remind us that said benefits come with a price tag.

                  1. 5

                    This specifically doesn’t seem like a good argument about complexity. Even the simplest program can confuse signed and unsigned integers.

                    1. 3

                      But you are more likely to find it in a smaller program

                      1. 1

                        It’s a lot harder in languages that don’t let you transparently mix the two up. Rust, natch, but C# did it first.
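
                        To make that concrete, here’s a minimal Rust sketch (toy values, not from the exploit in the article) of what refusing to mix them silently looks like:

                        use std::convert::TryFrom; // needed on pre-2021 editions; in the 2021 prelude

                        fn main() {
                            let len: u32 = 10;
                            let offset: i32 = -3;

                            // let sum = len + offset; // compile error: mismatched types (u32 vs i32)
                            let sum = i64::from(len) + i64::from(offset); // explicit widening conversion
                            println!("{}", sum);

                            // A checked conversion surfaces the corner case instead of silently wrapping:
                            let as_unsigned = u32::try_from(offset); // Err(..) for negative values
                            println!("{:?}", as_unsigned);
                        }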

                    1. 7

                      Can someone explain to me, a Rust tourist at best, why async/await is desirable in Rust when awesome-sauce concurrency, thanks to the ownership/borrowing model, has been baked into Rust since its inception?

                      FWIW I also really like the idea of working groups, and I think focusing on the areas where Rust gets the widest usage is super smart.

                      1. 15

                        The current Futures implementation exerts a lot of “rightward pressure” when you’re trying to chain multiple future results together. It works, and works safely, but it’s a bit messy to work with and there’s a lot of nesting to deal with, which isn’t easily readable.

                        The async/await proposal is basically syntactic sugar to linearize logic like that into a straight-line set of reasoning that’s a lot easier to work with.
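
                        To make the contrast concrete, here’s a toy sketch in the futures 0.1 style of the time; the helper functions are made up:

                        extern crate futures; // futures = "0.1"

                        use futures::future::{self, Future};

                        fn fetch_id() -> impl Future<Item = u32, Error = ()> {
                            future::ok(7)
                        }

                        fn fetch_name(id: u32) -> impl Future<Item = String, Error = ()> {
                            future::ok(format!("user-{}", id))
                        }

                        // Combinator style: each dependent step nests inside the previous closure.
                        fn chained() -> impl Future<Item = String, Error = ()> {
                            fetch_id().and_then(|id| fetch_name(id).map(|name| format!("hello, {}", name)))
                        }

                        fn main() {
                            println!("{:?}", chained().wait());
                        }

                        // The proposed async/await form of the same logic reads top to bottom, roughly:
                        //
                        //   async fn straight() -> String {
                        //       let id = await!(fetch_id());
                        //       let name = await!(fetch_name(id));
                        //       format!("hello, {}", name)
                        //   }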

                        1. 15

                          The biggest problem with the current Futures, as far as my experience goes, is that the method-chaining style involves so much type inference that if you screw up a type somewhere the compiler has no prayer of figuring out what you meant it to be, or even really where the problem is. So you have to keep everything in your head in long chains of futures. I’m expecting async/await to help with this just by actually breaking the chains down to individual expressions that can be type-checked individually.

                          Edit: And it’s desirable in Rust because async I/O is almost always(?) going to be faster than blocking I/O, no matter whether it’s single threaded or multi-threaded. So it doesn’t necessarily have anything to do with threads, but rather is an orthogonal axis in the same problem space.

                          1. 5

                            I hope a lot of care is taken to make it easy to specify intermediate type signatures. I know that in other languages with type inference I’ll “assert” a signature halfway through some longer code mainly as docs but also to bisect type error issues.
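
                            For what it’s worth, the same habit already helps in Rust; a tiny made-up example of pinning an intermediate type:

                            fn main() {
                                let raw = "1,2,3,4";

                                // The annotation on `parsed` is redundant for inference, but it documents
                                // intent and pins the type, so a mistake is reported at this line instead
                                // of somewhere deep inside a later chain.
                                let parsed: Vec<u32> = raw
                                    .split(',')
                                    .map(|s| s.parse().unwrap())
                                    .collect();

                                let total: u32 = parsed.iter().sum();
                                println!("{}", total);
                            }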

                            1. 1

                              Totally agreed. As far as I understand (which is not much), saying await foo(); is similar to return foo(); in how the language treats it, so you should be able to get the compiler pointing quite specifically at that one line as the place the type mismatch occurs and what it is. If you have to do foo().and_then(bar).and_then(bop); then it just says “something went wrong in this expression, sorry, here’s the ten-generic-deep nested combinator that has an error somewhere”.

                              1. 1

                                Async is the easier part. async fn will be sugar:

                                async fn async_fun() -> String {
                                  // something
                                }
                                
                                // ...which is (roughly) sugar for:
                                fn async_fun() -> impl Future<Item=String> {
                                  // something
                                }
                                

                                Under the hood, this builds a Generator. await sets up the yield points of the generator.

                                async fn async_fun() -> String {
                                  let future = futures::future::ok(String::from("hello, i'm not really asynchronous, but i need quick example!"));
                                  let string: String = await!(future);
                                  string
                                }
                                

                                So yes, the type mismatch would occur at the binding of the await and the right hand side is much easier to grasp. Basically, “and_then” for chaining can now largely be replaced by “await”.

                            2. 1

                              Ah, you’re right. I SHOULD know this in fact from the bad old days of Java when “Non Blocking IO” came out :)

                              1. 1

                                This has pretty much been my only major negative with Rust up to this point. I’ve got three apps underway in Rust, all using Futures, and it just starts getting hairy at a certain level of complexity, to the point where you can be hammering out code and then your Futures chaining stops you dead in your tracks because it’s hard to read and hard to reason about quickly. So I’m on board with reserving async/await for sure.

                              2. 3

                                This sums it up very well. I can do everything I personally want to do with Futures as they exist now in Rust. That said, I feel like async/await will really clean things up when they land.

                                1. 2

                                  That’s interesting! I guess I’d mostly thought of async/await as coming into play in languages like Python or Javascript where real concurrency wasn’t possible, but I suppose using them as a conceptual interface like this with real concurrency underneath makes a lot of sense too.

                                2. 9

                                  I believe async/await are desirable in all languages that implement async I/O because the languages usually walk this path, motivated by code ergonomics:

                                  1. Async I/O and timing functions return immediately, and accept a function to call (“callback”) when they’re done. Code becomes a pile of deeply nested callbacks, resulting in the “ziggurat” or “callback hell” look.
                                  2. Futures (promises) are introduced to wrap the callback and success/failure result into an object that can be manipulated. Nested callbacks become chained calls to map/flatMap (sometimes called then).
                                  3. Generators/coroutines are introduced to allow a function to suspend itself when it’s waiting for more data. An event loop (“executor”, “engine”) allows generators to pause each time a future is pending, and resume when it’s ready.
                                  4. “async”/“await” keywords are added to simplify wiring up a promise-based generator.
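
                                  A toy, synchronous Rust sketch of the stage 1 shape (hypothetical helpers, no real I/O); stages 2 through 4 are essentially about flattening this nesting back out:

                                  // The callbacks fire immediately here; with real async I/O they would fire
                                  // later, which is exactly why each dependent step adds another nesting level.
                                  fn fetch_user(id: u32, done: impl FnOnce(String)) {
                                      done(format!("user-{}", id))
                                  }

                                  fn fetch_orders(user: &str, done: impl FnOnce(Vec<String>)) {
                                      done(vec![format!("{}: order-1", user)])
                                  }

                                  fn main() {
                                      fetch_user(7, |user| {
                                          fetch_orders(&user, |orders| {
                                              println!("{:?}", orders);
                                          });
                                      });
                                  }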

                                  In Rust’s case, I think it was “implement generators/coroutines” which hit snags with the existing borrow checker.

                                  There’s a cool and very in-depth series of articles about the difficulty of implementing async/await in rust starting here: https://boats.gitlab.io/blog/post/2018-01-25-async-i-self-referential-structs/ (I’m pretty sure this was posted to lobsters before, but search is still broken so I can’t find it.)

                                  1. 8

                                    “async”/“await” keywords are added to simplify wiring up a promise-based generator.

                                    Going further: it follows the very general algebraic pattern of monad. Haskell has “do-notation” syntax which works for Promises but also Maybe, Either, Parser, etc.

                                  2. 8

                                    In addition to the great explanations of others, here are a couple diffs where the Fuchsia team at Google was able to really clean up some code by switching to async/await:

                                    1. 1

                                      Interesting! That speaks to the Rust 2018 initiative’s focus on ‘embedded’ in the mobile sense.

                                      1. 3

                                        The initiative has been surprisingly successful. Most of my clients are currently on embedded Linux and smaller.

                                  1. 1

                                    The latest Linux Unplugged podcast has an interview with the author which I rather enjoyed. A bit light on detail but still interesting.

                                    The compatibility list is impressive when you think about what’s going on under the covers. I wish I understood more about low-level x86/iOS/ARM internals so I could help get ssh working.

                                    Also also, someone should invite the author to Lobsters! What a great addition to the community :)

                                    Being able to install packages for Vim, Python, curl, etc. on an iOS device is rather a thrill (Yes, I’m that kind of NRRD, and yes, I know iOS is unspeakable evil. However it drives like a Cadillac and has apps you can’t find elsewhere, which is why I live there for my super-mobile computing needs :)

                                    1. 1

                                      This is great, but I had an almost visceral negative reaction when I saw it was hosted on Medium.

                                      1. 8

                                        “A computer of one’s own; brought to you by a computer of someone else”

                                        1. 12

                                          OK, so, I totally get this reaction, but I’m gonna pick on technomancy despite my mad respect for a bunch of what he does :)

                                          This kind of reasoning is faulty. Tools are tools. Everyone has to make a cost/value judgement based on what makes sense for them, not you.

                                          I build and run infrastructure for a living. I’m pretty damn good at it. That said, I run my blog on WordPress.com.

                                          Why, you might ask? Because I ran my own WordPress instance for years, then a statically generated blog using Pelican (a fantastic Python-based static site generator I can’t recommend enough).

                                          And then, after a few years of that, I decided that running my own blogging infrastructure Just Wasn’t Interesting any more. I’d learned all I could learn from doing so; I’d even developed a nice set of Chef cookbooks to manage the whole thing for me.

                                          But ultimately, the static blogging experience didn’t work for me. So now I let someone else manage this particular problem for me for a pittance, and focus on actually doing what I want to do.

                                          So can we please consider taking a step back before we criticize others for making decisions that make sense for them?

                                          1. 1

                                            Thanks for your comment.

                                          2. 7

                                            I’m sorry, but this was the platform/blog we could set up on such short notice.

                                            1. 6

                                              Please don’t apologize. Thank you for writing these articles. Don’t take this kind of feedback too personally, it’s the price we pay for having a community of insanely smart, talented people read what we write :)

                                              1. 1

                                                Hey, it’s your site; I don’t know anything about the audience you’re trying to reach beyond lobsters, so I can’t speak to the trade-offs. It’s just that the irony in the title was just too much for me to resist. =)

                                                1. 3

                                                  Should I host my blog out of my own physical machine to resist the irony…

                                            2. 2

                                              It reads fine with JavaScript and even CSS turned off (or if entirely unsupported by your web browser).

                                              1. -2

                                                Me too. I’m glad I’m not the only one who has that reaction.

                                                1. 2

                                                  As I already replied to the other person, this was the platform/blog we could set up on such short notice. Thanks for understanding.

                                              1. 1

                                                This article cites some interesting trends. I need to read a bit more carefully to see where they’re getting their data.

                                                1. 3

                                                  These are great! I wish they were longer :)

                                                  1. 4

                                                    Thanks, we are happy you like them.

                                                    Due to time constraints we are keeping them this long, but our goal is to expand them in an upcoming book.

                                                    1. 2

                                                      Even better! Please post here when it’s published! I will buy several copies as gifts :)

                                                      1. 1

                                                        In the posts we left a subscribe form; there we will announce progress on the book project.

                                                  1. 2

                                                    Man, I loved REXX. Only used it in IBM VM a bit but did a whole bunch with ARexx back in the day :)

                                                    1. 12

                                                      Have you not looked at the 90 grazillion static website generators out there and all the websites they run?

                                                      What gets the most buzz is not necessarily reflective of reality on the ground. Look at 10 years of articles crooning “JAVA IS DEAD!” yet as recently as a handful of years ago HUGE companies have committed to a pretty much 100% Java platform.

                                                      1. 5

                                                        I was about to say something like this. There’s actually a lot of lightweight sites out there, made by people who are either sick of the modern bloated web, who don’t want to pay for a ton of bandwidth, or who just find the workflow nicer.

                                                        One amusing thing is that this site still pulls in a bunch of crap from the GOOG: fonts and analytics, it seems.

                                                      1. 10

                                                        Part of the problem that I don’t see many people talking about is that maintaining a web browser and the accompanying renderer, JavaScript core, etc. is becoming totally intractable due to the ballooning complexity and size of the platform.

                                                        I wonder if anyone is giving any thought to how to counter that particular problem.

                                                        1. 2

                                                          I always thought they should’ve tried charging for it. Some version enterprises pay for, with some benefits that would attract them.

                                                        1. 2

                                                          Really enjoyed this article! Am definitely going to give this year’s puzzles a whirl in C++ since I’m learning that now.

                                                          (Though I’m terribly stuck in Chapter 5 of Stroustrup’s Tour of C++ on copy/move :)

                                                          1. 3

                                                            If you can easily access a copy, I think that Scott Meyers does a better job covering the copy/move distinction in Effective Modern C++. You’d want to look around page 350 or so.

                                                            That section of that book, incidentally, also has one of my favorite examples of how complicated C++ has gotten:

                                                            std::move doesn’t move anything, for example […]. Move operations aren’t always cheaper than copying; when they are, they’re not always as cheap as you’d expect; and they’re not always called in a context where moving is valid. The construct type&& doesn’t always represent an rvalue reference.

                                                            It’s occasionally just nice to know that it’s not you, it’s the language.

                                                            1. 1

                                                              Thanks for that. I’ve been referred to that book all along, but I felt like I needed a gentler introduction to the language, which is why I turned to Tour. I will definitely check the Meyers book out for the copy/move stuff. I have a Safari membership, so all-you-can-eat books for one price. (I love it :)

                                                              That section of that book, incidentally, also has one of my favorite examples of how complicated C++ has gotten:

                                                              Interesting that you point this out. I’m actually finding that C++, at least at the beginner level I’m interfacing with it, feels considerably less complex and more abstract than the C++ I last touched in the early 90s.

                                                              I’m sure this is because I haven’t had to delve into the inner workings of STL and the like - I’m guessing the … uh… water? Gets MUCH deeper there :)

                                                              1. 2

                                                                The language doesn’t grow less complex for having more high level abstractions.

                                                          1. 2
                                                            • Doing this year’s Advent of Code puzzles in C++ since I’m teaching myself that language. GitHub repo here

                                                            • Getting past being horribly stuck in chapter 5 of Stroustrup’s “A Tour of C++” book. I suspect I’m just getting lost in the constructor / destructor and reference versus pointer syntax because it’s been so long since I’ve actually used it.

                                                            1. 2

                                                              “called microVMs, which provide enhanced security and workload isolation over traditional VMs”

                                                              “uses the Linux Kernel-based Virtual Machine (KVM) to create and manage microVMs”

                                                              That hasn’t been secure so far. At least they’re making smaller VM’s.

                                                              1. 1

                                                                The devil’s in the details. Do a little bit of detail diving here and I think you’ll see they’re at least thinking very hard about the common problems in this space.

                                                                1. 3

                                                                  OK, since I’m on my PC, I can dig up my links. I’ve been following secure virtualization going back to IBM’s Kernelized VM/370. Certain practices consistently got good results in NSA pentests. Others, esp. UNIX-based systems, got shredded, with more problems over time. Most important was a tiny foundation enforcing security, with rigorous assurance that it did exactly what it said with little to no leaks. Examples, focusing on architecture/rigor, include the VAX VMM Security Kernel (see the Layered Design and Assurance sections), the Nizza Architecture that Genode is based on, this work (which is either INTEGRITY-178B or VxWorks MILS), the NOVA microhypervisor, and the Muen SK. All of these were built by teams big companies could afford, with several open source for building on and/or commercially available to license. During evaluations, this design style had lower numbers of vulnerabilities and leaks along with better containment of user-mode problems. So, it’s the lens I look through when evaluating VMMs.

                                                                  This work starts out by saying it uses Linux and KVM. Unlike the above, Linux has a huge pile of code in kernel mode with a non-rigorous development process that high-assurance security predicted would lead to piles of bugs, a portion of them vulnerabilities in kernel mode (read: worst-case scenario). Here’s a recent look (slides) at the state of Linux security if you want to see how that panned out. Although Linux integration disqualifies KVM immediately, I did look for attempts by high-security folks at securing KVM or just reducing complexity (step 1). I found this, which rearchitected it to reduce the privilege of some components without negative impact on the Trusted Computing Base (TCB). So, it’s feasible.

                                                                  If you looked into the details already, you can quickly assess the security by just looking to see if they broke Linux, KVM, and their additions into deprivileged pieces running on a separation kernel or VMM; then applied full-coverage testing, a covert channel analysis, static analysis by good tools, and fuzzing; this on both individual components and models of interactions, with careful structuring. Each of these usually finds bugs or leaks, often different ones. If you don’t see these things, then it must be assumed insecure by default until proven otherwise with rigorous methods that were themselves repeatedly proven in the field. And even then, it’s not secure so much as having a lower probability of failure, or of severe failures, over time, maybe none if lucky.

                                                                  I hope someone surprises me with evidence Firecracker could pass an EAL6+ evaluation at High Robustness. Although usually full of jargon, these slides use more English than jargon describing evaluation requirements, with some examples of how NSA/labs rated different things. High Robustness adds a bunch more, but I couldn’t find a non-jargon link in the time I had. Just imagine a higher bar. :)

                                                                  1. 2

                                                                    Been thinking about your response, and I have some questions.

                                                                    I am not a security expert, and I am so clueless I don’t even know what an EAL6+ IS so I am in no way challenging you, but I am wondering:

                                                                    Do your comments take the intended use case for Firecracker into account? These aren’t traditional ‘heavyweight’ VMs that are intended to be long running. They’re intended to be used in serverless, where each VM spawns with a single ‘function’ or maybe application running, and the VM lasts for the lifetime of the function invocation or application run, and then evaporates.

                                                                    My extremely naive understanding is that a lot of the security problems around VMs stem from using inherent vulnerabilities in virtualization architectures to get privesc in a VM and then be able to control whatever resources are running in there, but how useful could that actually be if the VM, say, only lives for the lifetime of a single HTTP response?

                                                                    1. 1

                                                                      I know so little about container tech and other highly hyped products that I have to be careful commenting on them. There are levels of analysis to do for tech security: general patterns that seem to always lead to success or failure; attributes of the specific work, with a range of detail. I was using the lens of general patterns of what prevented and led to vulnerabilities when looking at the details of the work. I saw two risky components immediately. The TCB principle of security means the solution is only as secure as its dependencies and how it uses them. That’s before the solution itself. I don’t know much about the use case, the new stuff on top, etc. I just know the TCB poisoned it from the start if aiming for high security.

                                                                      “but how useful could that actually be if the VM say only lives for the lifetime of a single HTTP response?”

                                                                      Well, the concern was that the underlying primitives could be vulnerable. So, the enemy uses one or more sessions to exploit them, the exploit escapes the isolation mechanism, the malicious code now has read (side channel) or write (backdoor) access to all functions, and it can bypass their security. Bypassing that security might mean reading sensitive data, corrupting the data, or feeding malicious payloads to any clients trusting the service (it said HTTPS!). What risk there is for a given application, HTTP response, and so on varies considerably. It will be acceptable for many users, as cloud use in general shows.

                                                                      The number of people using these platforms means they either don’t know about the risk, don’t care, or find the current ones an acceptable cost-benefit tradeoff. That last part is why I bring up alternatives that were more secure: to better inform the cost-benefit analysis. An easy example is OpenBSD for firewalls or other sensitive stuff. Genua’s products use it if someone wants shrink-wrap. It rarely gets hacked. So, you can focus on your application security. Firecracker uses Rust plus careful design to reduce application security risks. End-to-end Signal instead of text messages is another easy one. These are examples of blocking entire classes of problems with high probability using general-purpose techniques that (a) are known to work and (b) don’t raise costs prohibitively.

                                                                      For separation kernels, they used to cost five to six digits to license depending on organization but this is Amazon, right? Could probably hire experts to build one like Microsoft, IBM, and some small firms did. Or just buy the owner’s company getting a start on a secure VMM plus a bunch of other IP they can keep licensing/using. ;)

                                                                      1. 2

                                                                        The number of people using these platforms means they either don’t know about the risk, don’t care, or find the current ones an acceptable cost-benefit tradeoff.

                                                                        Yes. I think this is the key. The risks are acceptable given most end users’ cost/benefit analysis. Security is on a sliding scale that balances out against usability/convenience. Firecracker would appear to solve a real problem people are having, in that prior VM implementations were too heavyweight to be used in any kind of serverless context.

                                                                        The fact that, comparatively speaking, these VMs are, if you’re correct, relatively insecure should definitely be kept in mind, but that doesn’t IMO lessen the perceived benefit of being able to have considerably more isolation than previous container technology provided without the traditional startup / shutdown time that made VMs a deal breaker in this particular context.

                                                                        So, to re-iterate, I’m not arguing with you, I’m asking that you consider whether or not your reservations about the security of this technology might be more or less useful in evaluating whether it makes sense to adopt this technology given the very particular context and use case it’s trying to support.

                                                                        1. 1

                                                                          So, to re-iterate, I’m not arguing with you, I’m asking that you consider whether or not your reservations about the security of this technology might be more or less useful in evaluating whether it makes sense to adopt this technology given the very particular context and use case it’s trying to support.

                                                                          I figured we were just having a discussion rather than arguing since your wording was kind. :)

                                                                          There are two angles to that, which stem from the fact that Amazon isn’t telling its customers that other tech exists that’s way more secure: they’re ignoring it on purpose to maximize every penny of profit, pushing insecure foundations for critical apps, and trying to grab markets’ worth of money out of that which, again, won’t improve the foundations. If customers heard that, would they (or those use cases):

                                                                          1. Be cool with that in general and still buy Amazon happily?

                                                                          2. Think that’s fucked up before:

                                                                          2.1. Buying something secure based on their recommendation after telling Amazon they lost business cuz of this. Update their offerings.

                                                                          2.2. Grudgingly buy Amazon due to attributes it has, esp price or a needed capability, that the more secure offerings don’t have.

                                                                          I’m very sure there’s a massive pile of people that will do No 1. I’m just also pretty sure there are people that will do either 2.1 or 2.2, based on the fact that companies and FOSS projects with security-focused software still have customers/users. I can’t tell you how many. I can tell you it’s a regrettably tiny number, so much so that some high-security vendors go out of business or withdraw secure offerings each year. Some stick around with undisclosed numbers. (shrugs)

                                                                          I thought it was especially important to bring up the possibility of No 2. The existence of high-security techniques that small, specialist teams can afford is something unfamiliar to most of the market. My feedback on HN and Lobsters, highly technical forums, corroborates that. You can bet the big companies did that on purpose, too, for their own profit maximization. So, that’s where folks like me come in, letting product managers, developers, and users know there are alternative methods. Then they get to do something they reasonably couldn’t before: make an informed choice based on the truth, not lies. An example of that happening, probably more for predictable performance, was the shift of some of the cloud market to SoftLayer and other bare-metal hosting despite good VMs being cheaper.

                                                                          They might make the same choice, esp if market doesn’t have better offering. At least they know, though, that maybe dedicated hosting w/ security-focused stacks makes more sense if they care about security. Many of the micro- and separation-kernels supported POSIX or Linux VM’s, too. So, they could even use the untrusted, but minimalist, stuff to reuse legacy code/apps even if it wasn’t good at stopping malicious neighbors. :)

                                                                2. 1

                                                                  I think they mean that their approach is more secure than containers? Or isn’t actually that the case?

                                                                  1. 2

                                                                    That’s what they’re asserting, and from where I sit it’s very likely. All you have to do is read anything written by Dan Walsh in the last few years to understand that securing workloads in container environments is entirely possible, but decidedly non-trivial, because of the relative lack of isolation you’re actually getting from containers when they run on your garden-variety Linux system.

                                                                    Dan has long been a proponent of container hosts running things like SELinux to increase the isolation potential, but as anyone who’s ever tried turning it on knows, administration of SELinux systems can be challenging for the uninitiated.

                                                                    1. 1

                                                                      That could be true.

                                                                  1. 5

                                                                    As Linux gets more and more corporate and less targeted at the desktop, having a lightweight and responsive OS is enough to make it unique.

                                                                    I do patch my Linux with the MuQSS scheduler, the best thing for Linux responsiveness, but I was recently told the Haiku one is essentially the same. This is awesome to me.

                                                                    There is a lot and more in apps and hardware support that Haiku would need for me to switch over, but it seems like a cool project.

                                                                    Does it do virtual desktops, btw?

                                                                    1. 4

                                                                      MuQSS scheduler

                                                                      I heard there was a better scheduler for desktop use. Didn’t know the name. Thanks for the tip.

                                                                      1. 3

                                                                        Does it do virtual desktops, btw?

                                                                        It does.

                                                                        The things that it’s missing that I would need to make it my daily driver:

                                                                        Minimum:

                                                                        • Support for multiple monitors (was in the works at one point, may be there now)
                                                                        • Support for videoconferencing and screen sharing in Google Meet (long shot because Google barely even supports Firefox there)
                                                                        • Full disk encryption (there’s an encrypted block device driver in the tree but last I checked it was moribund)

                                                                        Optimal:

                                                                        • The ability to run virtual machines at full speed (there’s qemu but without OS support it’s doing true emulation and is unusably slow for my purposes)
                                                                        • The ability to use Firefox Sync

                                                                        I’d say BeOS is my favorite operating system of all time, but I can’t quite bring myself to say it since AmigaOS existed.

                                                                        1. 3

                                                                          I do patch my Linux with the MuQSS scheduler, the best thing for Linux responsiveness, but I was recently told the Haiku one is essentially the same. This is awesome to me.

                                                                          I don’t know a lot about the MuQSS scheduler, but from reading over the introductory document, it indeed looks pretty similar to Haiku’s. (I wonder where you read this previously, though?)

                                                                          There is a lot and more in apps and hardware support that Haiku would need for me to switch over, but it seems like a cool project.

                                                                          What would those be? Most minor tools are easily ported at this point.

                                                                          1. 3

                                                                            IRC, on oftc.net; I can’t remember why I joined Con Kolivas’ channel #ck, but that’s where. I consider him a friend after all this time and tested some of his prototypes way back.

                                                                            The Godot engine would be one big thing.

                                                                          2. 3

                                                                            Virtual desktops: yes.

                                                                            Linux gets more and more corporate and less targeted for the desktop

                                                                            Let’s hope the competitors get better in quality. I doubt I will want to change to Haiku unless something really bad happens in the *nix world, but hopefully its presence will make everyone else better nonetheless.

                                                                            1. 3

                                                                              Disk encryption, does it have that? Password-protected screensaver?

                                                                              1. 4

                                                                                BeOS had a password-protected screensaver.

                                                                            2. 2

                                                                              How does mainstream GNU/Linux get worse?

                                                                              1. 10

                                                                                NB: this turned out to be a poettering rant.

                                                                                adding ever more complicated layers onto complicated layers to reinvent the wheel. most things should be done a few layers down, not by adding a few layers on top. this while having the same functionality 10 years ago, which most of the time worked as well as today, only less complicated and less prone to breaking. the sound stack is just horrible; the most sane thing would be to throw out alsa and pulseaudio and use oss4, which implements most of the features. session and login management is also insane, a mess of daemons connected via dbus of all things. systemd people constantly reinventing square wheels (resolved, really?). while i’m at it, ps found a new one i didn’t know about: “rtkit-daemon”, fixing problems i don’t have, running by default.

                                                                                i know, it’s open source, i can write a patch.

                                                                                1. 3

                                                                                  I’ve been geeking out on schedulers for a long, long time and every encounter with vanilla Linux on a heavily-loaded box has been awful. It might behave better now, but that would be by very complicated code and bizarre special-case settings.

                                                                                  As a simple user, I just use the -ck patch set and ignore the horrors of the sound stack, systemd, Linux Foundation’s corporate politics, cgroups and what have you.

                                                                                  I mean, it kinda still works, but sometimes it feels like the best desktop-experience parity with Windows was reached 20 years ago, if you exclude hardware support and games, and with gnome3-type shit everything got worse.

                                                                                  I’m not positive the desktop experience is as good as it gets but I am positive it’s no one’s priority.

                                                                                  1. 4

                                                                                    I actually like Gnome 3 UI-wise, but the Linux scheduler seems to be more horrific than it used to be, and I remember it being bad a decade ago. I’ve had systems where X11 chugged hard and took 30 minutes to get to a vt when Firefox was stressing the system, when Windows on even more decrepit hardware was slow, but at least felt usable due to seemingly better scheduling - and it didn’t matter what WM you were using.

                                                                                2. 1

                                                                                  I’m not seeing Linux move away from the desktop at all. In fact I’m seeing more investment in the Linux desktop than ever.

                                                                                  It’s just that they’re investing in the wrong (from my selfish stance :) desktop environment :)

                                                                                  1. 2

                                                                                    they’re moving away from the desktop and towards tablets, even though linux doesn’t run on any

                                                                                1. 8

                                                                                  The way that Amazon has come barging in to monopolize hosting solutions for every piece of open source infrastructure under the sun is more than a little unnerving. I wonder what the next generation of software licenses will look like as a result.

                                                                                  This looks like it only supports a subset of Kafka’s features ATM; it will be interesting to start playing around with.

                                                                                  1. 3

                                                                                    Why unnerving? Wanna run your own? Go do that!

                                                                                    I’ve been reading for Y-E-A-R-S about people who don’t take the inherent difficulties in running an HA service into account before setting such things up, and as a result doing it badly and feeling a lot of pain when they lose data.

                                                                                    We may be the evil empire but generally speaking we know how to do high availability and we don’t lose data.

                                                                                    1. 5

                                                                                      Why unnerving? Wanna run your own? Go do that!

                                                                                      Confluent did do that. AWS just undercut their business, and will probably never give back to the open source community in the way that Confluent has. That’s the business game of course, and I’m not begrudging that – but the only reason AWS can do it so easily is because they have eaten up so much of the cloud market already and customers will want to have an “Amazon native” solution.

                                                                                      I don’t even really see AWS as being an evil empire in the way that some might – my view of capital is a little more nuanced than that. I admire a lot of the engineers and technology that have come out of the business, and I’ve even thought about working there. But the trend of companies contributing nothing to open source projects and making millions off of selling it as a service is a bad one IMO. That’s the trend of this modern tech boom – I think the next generations of open source technologies may end up with governance models like that of Clojure as a result. Whether or not that’s a net good is still left to be seen.

                                                                                      EDIT: I just realized I may have interpreted your response incorrectly – you might have been saying “You want to run Kafka on your own? Go do that!”

                                                                                      I think you’re exactly spot on with saying that many companies don’t have the expertise to run these technologies at scale, and AWS provides value by hosting it for them – my comment is still evocative of what I meant, just wanted to acknowledge a possible misreading. :P

                                                                                      1. 2

                                                                                        inherent difficulties in running an HA service

                                                                                        Yes. The question “how will I run this in Production?” (which is really shorthand for “How will I run this so it’s available?”, “How will I backup/restore/recover?”, and “How will I handle upgrades?”) should come immediately after “does it do what I want it to do?” My experience is that if it’s not asked until the prototype is done, you can expect a lot of pain.

                                                                                      2. 3

                                                                                        I wonder what the next generation of software licenses will look like as a result.

                                                                                        Wouldn’t this be the exact use-case for Affero GPL? “The GNU Affero General Public License is a modified version of the ordinary GNU GPL version 3. It has one added requirement: if you run a modified program on a server and let other users communicate with it there, your server must also allow them to download the source code corresponding to the modified version running there.” https://www.gnu.org/licenses/why-affero-gpl.html

                                                                                        I totally agree with you that it’s a problem if big enterprise companies integrate open source software into their business and do not give their improvements back to the open source community.

                                                                                        1. 1

                                                                                          if you run a modified program on a server and let other users communicate with it there, your server must also allow them to download the source code corresponding to the modified version running there.

                                                                                          Honest question: what if AGPL software is bound to a local port and users are communicating with, e.g., nginx, which in turn communicates with the AGPL software?

                                                                                          1. 2

                                                                                            Interesting question, indeed. First, people seem to agree that communication with a REST endpoint does not lead to the two pieces of software being linked. This means that a client can be proprietary or MIT, even if the server is AGPL.

                                                                                            Thus, you could have an MIT-licensed/proprietary product on the public IP. I can see neither a clear yes nor a clear no here, but I tend towards “when it goes out, you have to publish it”. Let’s consider both extreme cases:

                                                                                            Clear “yes, AGPL on local port with nginx reverse-proxy does eliminate AGPL”:

                                                                                            This is a standard setup for most services. I run all my web services behind a common nginx which handles routing the requests to the right service based on host name.

                                                                                            Or you might have nginx as a load balancer (or haproxy or anything else) and they forward the requests.

                                                                                            Thus, I cannot believe that in this situation you could get around AGPL (or the AGPL would be very easily broken).

                                                                                            Clear “no, even when running behind other Software you have to follow AGPL”

                                                                                            The extreme case for a situation like this would be that you provide a software product and use “my-very-best-text-search” for the search functionality. However, 99% of your software is not about text search but something totally different (let’s say management of the financial value of furniture), but for some reason you definitely need “my-very-best-text-search” (AGPL licensed). In your search functionality users input only text, but this is only 1% of the feature set that “my-very-best-text-search” actually offers (i.e. you only expose 1% of the functionality of the AGPL software, and this in turn makes up only 1% of your product).

                                                                                            If we say that you always have to publish source code, you would have to do so even in this case.

                                                                                            I think - since we said above that AGPL does not extend through API boundaries - this might even be a realistic option. You’d only have to publish the source code for the “my-very-best-text-search” (there is even discussion whether you always have to publish it or only if you modified it yourself), not the source code of your own software.

                                                                                            1. 1

                                                                                              Just found out that MongoDB created the SSPL because, they say, major cloud companies are testing the limitations of the AGPL and how far they can go. Thus, the SSPL rewrites the whole section about code on servers and users interacting with servers.

                                                                                              https://www.mongodb.com/licensing/server-side-public-license/faq

                                                                                        1. 17

                                                                                          I used Mastodon for about six months … Then stopped, because that sort of social networking didn’t make me a happier or better person, regardless of platform or community.

                                                                                          1. 7

                                                                                            You know I hear this a lot.

                                                                                            My Twitter feed isn’t full of trolls, and neither is my Mastodon feed. I find and interact with people who want to have meaningful, interesting and civil discourse on a variety of topics.

                                                                                            Sorry your experience was so different.

                                                                                            1. 6

                                                                                              This is so true. Took me way too long to figure this out. Also, I never follow coworkers — it ruins the relationship for me to get inside their head that much.

                                                                                              1. 2

                                                                                                That’s generally good advice. For me it depends. I can usually draw a bead on the maturity level of the people in question, and I follow the ones I trust.

                                                                                                That may bite me in the posterior someday. Hasn’t yet.

                                                                                                1. 2

                                                                                                I’m really glad I run my own Mastodon instance. The major instances have massive block lists, and I’ve found a lot of people on blocked instances to be really cool. There are other people who have mobbed me, and they couldn’t even see the counter-arguments because those came from blocked instances; they were in an echo chamber.

                                                                                                  If you want to use Mastodon, I suggest you start your own instance (or you can PM me if you want an invite to mine).

                                                                                                  I’m at @djsumdog@hitchhiker.social

                                                                                                2. 4

                                                                                                  I think that Twitter, Mastodon, etc. can have ill effects even if you’re only following people you like and no one is harassing you. (And of course, it’s surprisingly difficult sometimes to realize that following a certain person is not bringing you joy and that you should stop following them.) Some people—myself included—get a little twinge of pleasure any time someone likes/favourites/boosts/retweets their stuff, and over time that can make posting feel a little bit like a slot machine. Some people are less prone to that, but for the rest of us it’s not a very healthy dynamic for a social network.

                                                                                                  1. 2

                                                                                                    Except that, if you want to reduce it to the neuro-psych effects of interacting online, what’s so different about this venue? We crave upvotes and standing in the community.

                                                                                                  Admittedly, unlike Failbook, there’s no giant MegaCorp here using our data in immoral ways - but the same is true of Mastodon.

                                                                                                  So, basically, I don’t see your point at all. Humans crave social approval. It’s how we’re wired. Companies like Failbook and Twitter leverage this in ways that end up being morally questionable (and in FB’s case just straight up evil), but when you take them out of the equation, your point falters IMO.

                                                                                                    1. 1

                                                                                                      That’s a good point. Perhaps another part of it is that Twitter and Mastodon also encourage (both socially and through their UX design) short, tossed-off posts. Especially back when Twitter only gave you 140 characters, there wasn’t much room for any kind of nuance or subtlety; it was way easier to say something snappy that would garner you a bunch of likes than to engage in a conversation with any level of depth. Lobsters does show your karma up there in the corner, but it also encourages you to write long posts (that are displayed in threads!) and I think that’s an important part of building a discussion-oriented community.

                                                                                                      1. 3

                                                                                                        Ah, there’s the crux of it!

                                                                                                        You cite “building a discussion oriented community”.

                                                                                                        To my mind, things like Mastodon and Twitter for that matter aren’t that at all. They’re more like a crowded cocktail party where people get into a crowded room and chatter. Clumps form and topics are discussed, then disband as another hot topic of interest pulls people in a different direction.

                                                                                                      I like gatherings like that. I feel like they have a lot of value and are a particular type of social intercourse I quite enjoy. If you don’t, that’s totally cool! Nobody says you have to :) But that doesn’t make them bad.

                                                                                                        1. 2

                                                                                                          Makes sense! And to each their own :) It’s unfortunate, of course, that so much of our public discourse has found itself shoehorned into a space designed for a more intimate cocktail party…

                                                                                                      2. 1

                                                                                                        I need to periodically take breaks from lobste.rs because the upvote game is getting me too worked up – don’t you?

                                                                                                        That very rarely happens to me on mastodon, because I hide all notifications except mentions in order to actively prevent getting sucked into a popularity-contest mentality (and disable desktop/push notifications and notification sounds altogether). That’s not the default, but it’s explicitly supported by the settings (while doing the same on twitter as a non-bluecheck requires a browser extension).

                                                                                                        1. 1

                                                                                                          I don’t really get ‘worked up’ around the voting, but I do find myself investing more ego in it than I like.

                                                                                                          Mostly this manifests when I feel like someone has flagged a comment unfairly :)

                                                                                                          1. 3

                                                                                                            I find that over here, I get fixated on checking whether or not posts I think are good are ‘doing well’ – primarily because that information is easily accessible / even visible when I’m not looking for it in some cases. It produces stress I don’t need in my life. If I had an extension that removed upvote counts & karma from lobste.rs posts entirely, I would use it.

                                                                                                    2. 2

                                                                                                      Same here. I think of my twitter/mastodon feed as a kind of soup. I craft it into something enjoyable by being selective about the ingredients that go into it. https://mastodon.xyz/users/donpdonp/

                                                                                                    3. 3

                                                                                                  To expand a little now that I’ve thought about it more… It wasn’t that I didn’t follow good people, as I certainly tried to. It was more that I couldn’t filter by topic, at least not well. This seems like the big difference between “social media”, where you follow people and have to bear whatever they feel like saying, and more “old-school” group communication like Usenet, fora, or their successor Reddit (or Lobste.rs). There are people out there I really like and enjoy talking to, but I really don’t feel like wading through, say, their stories of Counterstrike triumph to get to the useful and wise things they have to say about software consulting.

                                                                                                      The number of people out there where I honestly want to listen to everything they say is quite small.

                                                                                                    1. 6

                                                                                                      My god I hate Eric Raymond’s writing style so much. There are some interesting tidbits in here though that are worth reading.

                                                                                                      What bugs me the most is the sneering attitude. So Go is mostly faster than Python. OK, and? So you found a better implementation language for your problem set. Great! Why does that mean that Python is on the wane?

                                                                                                      grumble

                                                                                                      1. 3

                                                                                                      In fairness, ESR does explain that here (I generally dislike him and all his works and thoughts). His point is that Python is both undergoing a language split (which I think is healing) and increasingly falling behind languages that can easily make use of parallel processing.

                                                                                                      This is sort of true, and I think Python will lose share for the kinds of tasks where modern systems programming languages fill the gap.

                                                                                                        1. 3

                                                                                                          How in touch with the Python community are you? There has been a positively enormous amount of work around async programming, both in the core language and in all the surrounding frameworks in various areas.
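
                                                                                                        Purely as an illustration, this is roughly what that core-language async work looks like - asyncio plus the async/await syntax; the function names here are made up:

                                                                                                        import asyncio

                                                                                                        async def fetch(name, delay):
                                                                                                            await asyncio.sleep(delay)  # stand-in for a real network call
                                                                                                            return name + " done"

                                                                                                        async def main():
                                                                                                            # Three I/O-bound "requests" run concurrently in a single process.
                                                                                                            results = await asyncio.gather(fetch("a", 0.3), fetch("b", 0.2), fetch("c", 0.1))
                                                                                                            print(results)  # ['a done', 'b done', 'c done']

                                                                                                        asyncio.run(main())

                                                                                                        Note that this gives single-process concurrency for I/O-bound work; it does not by itself spread CPU-bound work across cores, which is the caveat raised below.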

                                                                                                        I’m not suggesting you’re wrong; I just don’t have a really good way to draw a bead on adoption in the specific areas where that kind of parallel-processing performance is a hard requirement.

                                                                                                        I think the “language split” argument doesn’t hold water. Sure, Python 3 introduced some syntax changes, but it’s nowhere near a profound enough change to call it a “split”. ESR just got all crotchety about Python 3 and refused to jump the gap to the new version.

                                                                                                          One area where Go does have a real advantage is packaging, and that’s beyond debate IMO.

                                                                                                          1. 2

                                                                                                            First of all, it’s entirely possible that ESR is wrong.

                                                                                                            positively enormous amount of work around async programming

                                                                                                          But those don’t use multiple cores unless you use process-pool-backed options.

                                                                                                          I’ve generally used futures-based concurrency in Python, and it works for worker-type cases. I haven’t really used any of the modern systems programming languages enough to compare intelligently.

                                                                                                            1. 4

                                                                                                              That’s correct. The Python async solutions don’t use multiple cores because of the GIL.

                                                                                                              Let me drop back and punt here: What bugs the CRAP out of me with ESRs posts is his attitude.

                                                                                                            I can’t think of anyone who’d dispute that Golang is a better choice for reposurgeon than Python. But how does “I found a better tool for my use case” translate into “The waning of Python”?

                                                                                                              Also, what’s wrong with using the process as the unit of tasking? Python has superb support for that.
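
                                                                                                            To make that concrete, here is a minimal sketch in Python (illustrative only, standard library only) of process-as-the-unit-of-tasking with a made-up CPU-bound workload:

                                                                                                            from concurrent.futures import ProcessPoolExecutor

                                                                                                            def crunch(n):
                                                                                                                # CPU-bound toy workload; threads or asyncio would serialize on the GIL here.
                                                                                                                return sum(i * i for i in range(n))

                                                                                                            if __name__ == "__main__":
                                                                                                                # Worker processes sidestep the GIL and spread the work across cores.
                                                                                                                with ProcessPoolExecutor() as pool:
                                                                                                                    totals = list(pool.map(crunch, [10**6, 2 * 10**6, 3 * 10**6]))
                                                                                                                print(totals)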

                                                                                                              1. 3

                                                                                                                Pundits gonna pundit. Being opinionated and cocksure is kinda ESR’s thing.

                                                                                                                Don’t worry about what one has-been internet denizen thinks, concentrate on what’s good instead.

                                                                                                      1. 4

                                                                                                        I’m @yumaikas@mastodon.social. I tend to have a technical focus on Mastodon, especially things to do with stack based languages.

                                                                                                        1. 1

                                                                                                          very cool. Like Factor or other FORTHs?

                                                                                                          1. 4

                                                                                                            Yep. And one of my own, called PISC

                                                                                                            1. 1

                                                                                                              Neat! I sometimes wonder why stack-based programming languages sort of got left behind by the mainstream. Maybe the perception is that they were size/performance oriented, and that’s no longer necessary in this era of 1TB onboard RAM caches in everything? :)

                                                                                                              I’ll never forget the first time I played with the FORTH interpreter in the Sun boot monitor.

                                                                                                              1. 2

                                                                                                                I have some guesses. For one, they forced everything to work like a stack, while the highest-performance architectures didn’t; those used a mix of design styles. Even Intel/AMD build their stack operations on top of a micro-architecture that’s more RISC-like. Math-heavy stuff was more about arrays and vectors, especially hardware acceleration. LISP, the most productive language, had different primitives (especially lists). The best languages for verification focused on expressions, whiles, and simple functions. Compilers finding SSA form, which is functional-like, to be beneficial might have had an effect as well.

                                                                                                                All kinds of ecosystems (aka bandwagons) were going in a different direction, and going in the same direction as them gave you their benefits: compiler optimizations for C, Moore’s Law with EDA tools, and high compatibility with mainstream software (integration benefits). The main promoter of stack-only architectures, Chuck Moore, abstained from most of that to focus on his own style (e.g. 18-bit, not 16 or 32), running his own software stack on old nodes (his limited EDA), all built from the ground up. As in, you’d have to accept both stack processors and Moore’s preferences, which gave up desirable things. People learning about the approach through him probably didn’t want that tradeoff - I mean, if you know Moore’s Law, why wouldn’t you want a supplier to run his Forth CPUs through an optimized process at the latest nodes? So that was that, with him continuing to do niche work, like energy-efficient embedded, using his methods. Mainly GreenArrays.

                                                                                                                However, people who learn and keep the advantages of his work combining them with modern ideas in interpreters, compilers, and EDA might achieve some great results. Who knows… I encourage that experimentation.

                                                                                                                1. 1

                                                                                                                  I think, more than anything, it’s that stack languages are so easy to start implementing that it’s easier to build your own than to use an existing one. If anything, it’s like the Lisp curse, but more so.

                                                                                                                  Postfix-style is also a really alien syntax for most current programmers, and has just enough of a learning curve that most people file it under “probably not worth the time”.

                                                                                                                  Also, many stack based languages make it easy to accidentally blow the stack, and otherwise tend towards being very fiddly.
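
                                                                                                                  As a rough illustration of both points - how little it takes to get a postfix evaluator going, and how alien the syntax can look at first - here is a toy RPN calculator in Python (my own made-up example, not any of the languages mentioned here):

                                                                                                                  import operator

                                                                                                                  OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

                                                                                                                  def eval_rpn(source):
                                                                                                                      # Evaluate a whitespace-separated postfix program such as: 3 4 + 2 *
                                                                                                                      stack = []
                                                                                                                      for word in source.split():
                                                                                                                          if word in OPS:
                                                                                                                              b, a = stack.pop(), stack.pop()  # popping too much is "blowing the stack"
                                                                                                                              stack.append(OPS[word](a, b))
                                                                                                                          else:
                                                                                                                              stack.append(float(word))
                                                                                                                      return stack

                                                                                                                  print(eval_rpn("3 4 + 2 *"))  # prints [14.0]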

                                                                                                                  Still, there are quite a few, such as 8th, Factor and PostScript, that tread a different route. Overall, I think stack based languages are here to stay, but will stay niche pretty much indefinitely.