1.  

    what projects are built on libssh?

    1.  

      GitHub uses it but says they do so in a way that didn’t expose this vulnerability. There’s some discussion in an Ars Technica article that suggests using libssh in server mode (vs. the client-side library) is uncommon.

      1. 1

        Hunkering down indoors due to a storm. No real problems here though; power is still on, and my flat seems to be coping with ~50 mph winds fine. Hopefully will get to reading something on my to-read list.

        1. 1

          Be safe, especially if you have to get out candles.

        1. 13

          The OS itself in my opinion is not ready for widespread desktop usage…

          Would I install it on my grandma’s computer? Most likely not, but neither would I install GNU/Linux. However, it is just right for my kind of usage (workstation on a Thinkpad Carbon Gen 3).

          OpenBSD is by far the most stable and predictable OS I am running (that includes OS X and GNU/Linux) and I am running -current. It does everything that needs to be done and does it well.

          I agree with OP that one has to like configuring stuff by editing files and reading manpages on the CLI. That being said, configurations are usually pretty terse, man pages are well detailed, and examples in them are abundant.

          OpenBSD is a power tool for power users. It’s not being developed for mass market appeal and that’s actually one of its most attractive features.

          1. 11

            Actually, it is exactly the system I would install on my grandma’s computer: A clean OpenBSD desktop with two icons: “Internet” and “Mail”.

            She will never get a virus, break it, or fall for fake Windows phone scams.

            My mother ran a Linux box for many years before jumping to a Mac, and she was happy. Everything worked, nothing ever broke. It was predictable. Nowadays Linux is less predictable, especially after an upgrade, but OpenBSD is :)

            Edit: However I wouldn’t recommend OpenBSD to a “regular user” friend.

            1. 7

              Actually, it is exactly the system I would install on my grandma’s computer: A clean OpenBSD desktop with two icons: “Internet” and “Mail”.

              Geez, what an assumption ;), maybe grandma is a UNIX wizard and uses qutebrowser and mutt and launches them from the terminal.

              At any rate, as far as I understand from various posts (haven’t tried OpenBSD since the early 00s), “Internet” would be very slow. Moreover, she would not be able to watch Netflix, since Widevine is not supported on OpenBSD. Oh, and she probably can’t Skype with her grandchildren, etc.

              Do your non-tech loved ones a favor and buy them an iPad. Despite the problems of Apple or Apple hardware, it is the most secure consumer platform; it gets updates for at least half a decade and probably supports any popular application they’d want (Skype, Netflix, YouTube, Facebook, etc.).

              1. 6

                An iPad would work well for some people, but for many of my older relatives, they have trouble with the touchscreen input. They can all type reasonably well, since they’re of a generation where Typing was an entire course you took in school, but find touchscreen-typing to be frustrating. As for something similar but with a keyboard, I’m not sure whether an iPad plus a Bluetooth keyboard or just a MacBook would be easier.

                1. 4

                  An iPad would work well for some people, but for many of my older relatives, they have trouble with the touchscreen input. They can all type reasonably well, since they’re of a generation where Typing was an entire course you took in school, but find touchscreen-typing to be frustrating.

                  That’s interesting and a good point, though it does not apply to everyone. My mother is in her sixties and never used a computer until 5 years ago (well, except for a domain-specific terminal application when she worked in a library in the ’90s). Despite taking some courses, she always found computers too complex. However, since my dad bought an iPad for her ~5 years ago she has been using it very actively. She is able to do everything she wants: iMessage, e-mail, and browsing the web. Later, she also started using a smartphone, since ‘it is just a small iPad’.

                  At any rate, iPad + KB vs. MacBook would strongly depend on the person and how much they want beyond a simple media consumption device. Of course, if someone is going to compose documents on a device all day, an iPad is a bad option.

                  Of course, when it comes to typing you don’t want to buy a MacBook 12”/Pro now either ;). The butterfly keyboard is terribly unreliable (my 2016 MBP’s keys are sticky all the time).

                  1. 1

                    Sounds like my grandmother. She does almost everything through a web browser. I had her use Ubuntu briefly. She had no trouble using it but just preferred the look and feel of Windows. So she went back. I still get malware calls on occasion.

                  2. 2

                      On phones, touch typing sucks for me cuz I have shaky fingers. Miss the keys and have to backspace a lot. Happens less on tablet with big keys. Doesn’t happen with a physical keyboard regardless of size. I think it’s the extra, tactile feedback my brain gets from raised keys.

                    1. 1

                      I use an iPad (with a bluetooth keyboard) while on vacations as a substitute laptop. And with an SSH client program I can even do development on a remote server [1].

                      [1] I may not like it that much, as the bluetooth keyboard I use is hard for me to use [2]. But I can do it.

                      [2] Even the keyboards on Mac laptops suck. I generally only use IBM Model M keyboards, but taking one on vacation is a bit overkill I think.

                    2. 2

                      Geez, what an assumption ;), maybe grandma is a UNIX wizard and uses qutebrowser and mutt and launches them from the terminal.

                      Sounds like OpenBSD would work even better for your grandmother than we first thought!

                      “Internet” would be very slow.

                      Why would that be?

                      Do your non-tech loved ones a favor and buy them an iPad. Despite the problems of Apple or Apple hardware, it is the most secure consumer platform; it gets updates for at least half a decade and probably supports any popular application they’d want (Skype, Netflix, YouTube, Facebook, etc.).

                      Sorry but no way would I ever subject anyone I know to using an iPad. Not only is their hardware crap (overheating the moment you do anything with it), and not only is their software locked-down-to-the-point-of-unusability crap, but tablets in general are absolutely pointless devices that have no reason to exist in the home. Tablets are great if you’re an engineer that needs to have a lightweight device with a good bright screen that they can use to look at plans on site. For my mother? Why wouldn’t she just use a laptop?

                      Want to make a spreadsheet of your expenses? Nope, sorry, tablet spreadsheet software is garbage. Hope you like having a keyboard pop up over whatever you’re doing every time you want to input anything. Hope you like being unable to copy a row in a single drag of the mouse like you can on desktop, instead having to apparently click, copy, and manually paste into each cell. etc. They’re just bad devices for doing anything productive with a computer, and contrary to popular belief most people want to sometimes do something productive with their computer, whether it’s making a spreadsheet of their expenses, writing a letter to the editor of their paper, making a newsletter for their knitting association, or whatever. Sure they also want to watch Netflix, but that doesn’t mean that all they want to do is watch Netflix.

                      1. 2

                        Why would that be?

                        https://www.tedunangst.com/flak/post/firefox-vs-rthreads

                        but tablets in general are absolutely pointless devices that have no reason to exist in the home. Tablets are great if you’re an engineer that needs to have a lightweight device with a good bright screen that they can use to look at plans on site. For my mother? Why wouldn’t she just use a laptop?

                        Both my parents and wife are completely happy iPad users. Outside work, my wife usually uses her tablet, despite having a laptop. They are safe, fast and effortless (require virtually no tech support). Interestingly, I as an engineer don’t need or want one. I had an iPad and Nexi on several occasions, but would never use them.

                        YMMV

                      2. 1

                        iPads serve ads and manipulate you. Allowing a manipulator access to a loved one doesn’t sound like a favor, not for the loved one at least.

                        1. 5

                          You will have to expand on that statement. The iPad I’m using to type this does not serve any ads outside apps. Nor do I feel manipulated.

                          1. 0

                            Ads inside apps are still ads, as are push notifications from apps. And of course iOS/app developers aren’t trying to make you feel manipulated.

                          2. 3

                            What ads? Paid apps typically don’t show ads. Besides that, Safari on iOS has a content-blocking API. Install e.g. Firefox Focus, which is a Safari ad blocker (besides being a privacy-focused browser), and websites in Safari are ad-free.

                            I have an iDevice (iPhone) and I never see an ad.

                            1. 1

                              YouTube and Facebook both show ads, and many Facebook stories are ads even if they don’t look it. You can circumvent that on an iPad? Could your grandma?

                              1. 3

                                What exactly does that have to do with the iPad? Facebook and YouTube are hardly specific to the iPad. Circumvention being ad blocking? That won’t block Facebook stories that are ads.

                                1. 1

                                  The iPad has Facebook and YouTube apps, as /u/iswrong pointed out.

                                2. 1

                                  Well, the comparison here is unfair. In OpenBSD they wouldn’t even have a Facebook or YouTube app. If they’d use the browser to access Facebook/YouTube in OpenBSD, there would be no difference, since Safari can also do ad blocking. Plus they would get hardware-accelerated video ;).

                                  1. 1

                                    Right, BSD and Linux don’t have apps, so their utility isn’t tied to apps which show ads and manipulate you. OpenBSD has alternatives to Facebook and YouTube which don’t have these problems.

                          3. 2

                            Feels like a Chromebook would have a lot of the same advantages?

                            1. 2

                              What do you mean by “predictable” here? In my experience most major Linux distributions care far more about backwards compatibility between releases than OpenBSD does.

                              1. 1

                                Might depend on the distro. Ubuntu is annoying about changes that break stuff or needlessly force me to learn a new way to do an old thing.

                            2. 9

                              OpenBSD feels to me similar to how Linux felt 10 years ago: precisely aimed at me. Now it feels like the ‘powers that be’ in the Linux community are only interested in targeting mobile devices and turning GNOME into macOS’s awful UI design of not letting you do anything that they didn’t think of beforehand.

                              1. 4

                                Why not run Gentoo or NixOS? Both give you as many configuration options as you require, and neither sacrifices any speed. If you are security conscious, I believe Gentoo still offers the “hardened” sources.

                                1. 2

                                  My concerns have nothing to do with security or configuration. I currently run Gentoo.

                            1. 2

                              Every year I tell myself “this is the year I finally do my NaNoGenMo idea” and every year I decide against it. I’m not gonna let another year slip by like this, so:

                              If I don’t have a complete NaNoGenMo done by November 30, I’m donating $100 to the EFF.

                              Y’all hold me accountable plz

                              1. 1

                                Could backfire; now all fans of the EFF have a monetary incentive to sabotage your project!

                              1. 3

                                Oddly enough, the lobster emoji is also used by fans of Jordan Peterson (a right-wing psychologist), also due to extrapolating something about a lobster’s biology to human societies. In their case, they believe something about lobster social interaction shows that social hierarchies are natural. I have no idea how true any of these claims about lobsters are, but seems like the poor creature is getting some overloaded political symbolism.

                                1. 4

                                  Hmff. “Static garbage collection” is a contradiction in terms. Garbage collection is dynamic. What the author is referring to is actually (the badly named) RAII, “resource acquisition is initialisation”, an idiom that has been an essential part of C++ since its inception and possibly pre-dates even that. (Yes, Rust adds a borrow checker, but this does not facilitate memory management, just prevents certain classes of logic error).

                                  Edit: let me elaborate a little. Rust’s borrow checker, though it performs an analysis that has similarities with escape analysis, has nothing to do with memory management; it checks that the program doesn’t violate certain rules, with the goal of memory safety. The actual lifetime of objects in Rust is determined by their scope, or (for heap-allocated objects) via essentially the same RAII paradigm that is found in C++. That is, the lifetime of Rust objects is not affected by the borrow checker. Rust memory management is actually very similar to that of C++.
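
                                  To make the scope-determined lifetime concrete, here is a minimal Rust sketch (the `Tracked` type and its fields are invented for illustration): a type with a destructor lets you observe that values are freed exactly at the closing brace, in reverse declaration order, just as with C++ destructors under RAII.

                                  ```rust
                                  use std::cell::RefCell;
                                  use std::rc::Rc;

                                  // A type with a destructor, so we can observe when it is freed.
                                  struct Tracked {
                                      name: &'static str,
                                      log: Rc<RefCell<Vec<&'static str>>>,
                                  }

                                  impl Drop for Tracked {
                                      // Runs deterministically when the value leaves scope, like a
                                      // C++ destructor under RAII -- no collector is involved.
                                      fn drop(&mut self) {
                                          self.log.borrow_mut().push(self.name);
                                      }
                                  }

                                  fn main() {
                                      let log = Rc::new(RefCell::new(Vec::new()));
                                      {
                                          let _a = Tracked { name: "a", log: Rc::clone(&log) };
                                          let _b = Box::new(Tracked { name: "b", log: Rc::clone(&log) });
                                          // Both the stack value and the boxed heap value are dropped
                                          // right here, at the closing brace, in reverse declaration
                                          // order -- their lifetimes were fixed at compile time.
                                      }
                                      assert_eq!(*log.borrow(), ["b", "a"]);
                                  }
                                  ```

                                  No run-time liveness analysis happens anywhere in this program; the drop points were all decided by the compiler.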

                                  So, if you want to claim that “Rust has a static garbage collector”, you are really also claiming that C++ has a static garbage collector. I suspect people would be much more reluctant to accept that claim.

                                  I also suspect people are thinking, “but - Rust is memory safe! Like a garbage collected language!” and that is wrong, not because Rust isn’t memory safe, but because garbage collection does not actually imply memory safety - It is perfectly possible to have a language which uses garbage collection but which is no safer than C or C++, and in fact, it is possible to use a garbage collector in C or C++.

                                  So, the only way that “Rust has a static garbage collector” holds up is if you accept that “C++ also has a static garbage collector”. If you’re willing to take up that argument, well, I’m not willing to die on this particular hill; I’ll let you have your “static garbage collector” and wish you all the best with it.

                                  (Note that one fundamental difference between typical garbage collected systems and RAII-based systems is that GC’d languages typically don’t define the lifetime of objects by their scope. That is, it is perfectly possible for an object to remain alive, and referenced, once its scope has ended. In Rust, this isn’t true, and the compiler ensures it. This is another reason why I think “Rust has a gc” is a somewhat flawed claim).

                                  I don’t buy the arguments below that the article is tongue-in-cheek or just making the claim for illustrative purposes (and even if it was the latter, the illustration would be quite flawed, for reasons I’ve outlined above). Why not? Because it reads as a reasoned piece concluding reasonably firmly that the memory management of Rust is “proving [sic] out a new niche in the concept of garbage collection”.

                                  That’s my piece said. I wouldn’t have bothered but I know I’m getting downvotes and it seems likely that people didn’t really understand what I was saying, so here it is. Or maybe they just disagree - that’s fine; as my Grandad used to say, “you have the right to disagree, even if you are wrong”. :-)

                                  1. 5

                                    Hm, “garbage collection” is when the city pulls a truck up to my apartment every Monday to whisk away garbage I’ve used.

                                    It turns out that actual garbage trucks arrive on a schedule which one could say is statically determined. In some cities, you’re not allowed to put out trash unless it’s within 24 hours of garbage collection. Rust’s memory management maps on to actual real-world collecting-of-garbage a bit better than what we usually think of as GC.

                                    What I took away from (reading) the article is that there is middle-ground between “manual memory management” and “dynamic garbage collection.” In (safe) rust, you don’t manually allocate or free memory. You obey a set of rules for “owning” and “borrowing” data (one could say that the borrow checker is part of the type system…), and the compiled machine code includes code for allocating and freeing data. A program to allocate and free data at runtime sounds a lot like a garbage collector to me!
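
                                    One way to see that middle ground in a hedged sketch (`consume` is an invented name): ownership determines where the compiler statically inserts the deallocation, so moving a value also moves its “free”.

                                    ```rust
                                    // Ownership decides where the compiler emits the deallocation.
                                    fn consume(s: String) -> usize {
                                        s.len()
                                        // `s` is owned here, so its heap buffer is freed when this
                                        // function returns -- the "free" was placed here at compile time.
                                    }

                                    fn main() {
                                        let msg = String::from("hello"); // heap allocation
                                        let n = consume(msg);            // ownership moves into `consume`
                                        // `msg` is unusable from this point on; its buffer was already
                                        // freed inside `consume`, at a statically determined point.
                                        assert_eq!(n, 5);
                                    }
                                    ```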

                                    1. 2

                                      Hm, “garbage collection” is when the city pulls a truck up to my apartment every Monday to whisk away garbage I’ve used.

                                      Reminds me of a friend who had to explain to relatives about how his PhD was on “garbage collection” :)

                                      In (safe) rust, you don’t manually allocate or free memory. You obey a set of rules for “owning” and “borrowing” data (one could say that the borrow checker is part of the type system…), and the compiled machine code includes code for allocating and freeing data. A program to allocate and free data at runtime sounds a lot like a garbage collector to me!

                                      So C++ also has a static garbage collector, then?

                                      (and yeah, I know that since C++11 it can theoretically have an actual dynamic garbage collector. But forget that for the moment).

                                      Because: there’s fundamentally no difference between how Rust manages memory and how C++ does (ignoring heap allocations via new and delete, which are generally considered non-idiomatic these days anyway).

                                      The borrow checker provides safety. It’s making sure you don’t get certain run-time errors. But it’s not affecting allocation or deallocation in any significant way. And static-vs-dynamic aside, there are some fundamental differences between how GC’d languages typically handle object lifetimes compared to how Rust does. And that’s my real point: “garbage collection” actually has an established meaning that goes far beyond just “allocates and frees data at runtime” (unconstrained dynamic liveness analysis is a big part of it).

                                      1. 1

                                        Because: there’s fundamentally no difference between how Rust manages memory and how C++ does (ignoring heap allocations via new and delete, which are generally considered non-idiomatic these days anyway).

                                        It’s a bit remarkable how, on the one hand, you’re derailing the comments on this blog post to insist on a trivial and nearly useless distinction-without-a-difference between “Garbage Collection” and “Automated Memory Management Which Does Not Involve a Runtime Tracking Component” which you feel the title doesn’t respect, and yet on the other hand wildly handwaving away the profound differences between a language that ensures that all heap allocations are deallocated once they cease to be alive and a language that absolutely does not do anything of the sort.

                                        Pick a lane, my friend. Are we being pedantic about the minutiae of memory management, or are we throwing caution to the wind and equating distinctly different things based on broad similarities of concept?

                                        1. 0

                                          Pick a lane, my friend

                                          How about you stop being patronising?

                                          you’re derailing the comments on this blog post

                                          What, you mean I’m disagreeing with what other people are saying? What a terrible person I must be. Oh but wait, mine is the comment at the top of the chain; it was the people who disagreed with it that are “derailing” my comment, surely?

                                          to insist on a trivial and nearly useless distinction-without-a-difference between “Garbage Collection” and “Automated Memory Management Which Does Not Involve a Runtime Tracking Component”

                                          If the fundamental differences between garbage collection and other automatic memory management techniques are taken away, what’s left? Why even bother giving it a name, in that case? Let’s just say everything has garbage collection, that would be easier?

                                          It is totally wrong that the distinction is useless. Garbage collection has performance / heap size implications that are very significant, and for a “pure GC” language usually has implications on how lifetime is managed (as I’ve discussed in other posts). Most of all it implies that memory reclamation works via a particular technique with certain characteristics. I want “this language over here uses garbage collection” to retain some useful meaning.

                                          on the other hand wildly handwaving away the profound differences between a language that ensures that all heap allocations are deallocated once they cease to be alive and a language that absolutely does not do anything of the sort.

                                          It’s perfectly possible to have both GC-managed memory and non-GC-managed memory in one program. So the fact that you can do explicit heap allocations and deallocations - as well as RAII - in C++ just isn’t significant to this discussion. If I make use of an unsafe crate in Rust which allows unchecked memory allocation and deallocation, does that now mean Rust no longer has a “static garbage collector”?

                                    2. 10

                                      The post attempts to examine what these terms really mean in practice and break down artificial silos of definition. It seems like this comment disregards the entire premise.

                                      Escape analysis is to the borrow checker, as “dynamic garbage collection” is to “static garbage collection”, AKA RAII.

                                      1. 6

                                        Escape analysis is to the borrow checker, as “dynamic garbage collection” is to “static garbage collection”, AKA RAII.

                                        But it’s not. From the article:

                                        instead of saying “oh, this escapes, so allocate it on the heap”, the borrow checker says “oh, this escapes, so error at compile time.”

                                        That’s key here; as I said, the borrow checker doesn’t facilitate memory management - it just prevents certain code structures (and in doing so prevents certain classes of errors).

                                        I don’t think the “artificial silos” of definition are artificial at all. Garbage collection has a clear meaning in the literature and RAII/scoped lifetime is clearly not it.
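
                                        The contrast can be sketched in a few lines of Rust (`escapes` and `hand_over` are hypothetical names): where escape analysis in a GC’d language would quietly promote the value to the heap, the borrow checker simply rejects the program.

                                        ```rust
                                        // Rejected at compile time -- not promoted to the heap:
                                        //
                                        //     fn escapes() -> &'static String {
                                        //         let s = String::from("hi");
                                        //         &s  // error[E0515]: cannot return reference to local variable `s`
                                        //     }
                                        //
                                        // The language steers you toward handing out ownership instead:
                                        fn hand_over() -> String {
                                            String::from("hi") // the caller now owns, and will later free, the String
                                        }

                                        fn main() {
                                            let s = hand_over();
                                            assert_eq!(s, "hi");
                                        }
                                        ```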

                                        1. 9

                                          Again, the entire disagreement just boils down to “garbage collection always means dynamic garbage collection”, which the original post disagrees with. What literature are you referring to?

                                          1. 6

                                            The past few decades of discussion around memory management in languages like, say, C++ versus Java, has contrasted “explicit memory management” using RAII, with “garbage collection” or “automatic memory management”, using a dynamic collector. You could argue that this dichotomy has always been incorrect. You could alternately argue that it was correct for C++/Java, but significantly different for rust. This post seems to hint in a 1-paragraph aside that its author believes the terminology has always been incorrect. But it doesn’t elaborate. If the argument is that people discussing C++ style memory management (in academia, industry, anywhere) have been using terms wrongly for decades, I would expect more digging into why that is, versus just flatly saying they were using the terms wrong.

                                            1. 6

                                              This article is pretty clearly not waging a war on the established meanings of terms.

                                              Taking some creative license with an article title is not the same as mounting an attack on terms with established meanings for decades.

                                              1. 5

                                                It’s not just the title, though:

                                                Similar to the dichotomy between static and dynamic typing, I think that Rust is proving out a new niche in the concept of garbage collection. That is, historically, we think of GC as something dynamic, that is, it involves a run-time component that does stuff at runtime. However, the idea of automatic memory management doesn’t inherently mean that it has to execute at runtime.

                                                The problem is that it’s conflating “memory management where deallocation does not require an additional construct in the program” with “automatic memory management”, a well-established term which refers to dynamic determination of liveness and subsequent clean-up of non-live objects. Yes, the first is memory management which is automatic, but it’s not what is meant by “automatic memory management” (or especially “garbage collection”).

                                            2. 4

                                              What literature are you referring to?

                                              Basically all literature on garbage collection refers to it as a dynamic determination of liveness. Even the wikipedia page (https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)) distinguishes “garbage collection” from “stack allocation” and “region inference”, the latter two of which arguably are what Rust actually provides.

                                              (One of the experts in GC is Richard Jones, who as it happens I know personally. He maintains a page with links to some literature, here: https://www.cs.kent.ac.uk/people/staff/rej/gc.html).

                                              If the original post disagrees with that, then I disagree with the original post. The difference is that there are years of literature which use the term to mean what I think it means…

                                        2. 3

                                          I’m pretty sure “static garbage collection” is being used in a tongue-in-cheek manner.

                                          1. 1

                                            I don’t think it is. From the article:

                                            However, the idea of automatic memory management doesn’t inherently mean that it has to execute at runtime.

                                            The argument is that Rust decides lifetimes statically, but there is no explicit manual deallocation necessary, so it’s still “automatic” memory management, where “automatic memory management” is synonymous with “garbage collection”. I take issue because “automatic memory management” (and “garbage collection”) has come to mean, over several decades prior, something more than just “does not require explicit deallocation by the program”.

                                            If you took the argument to its logical conclusion, even C has automatic memory management (for local variables): if you declare a local variable, it gets space allocated on the stack when the function is entered, and the space is freed when the function returns - “automatically”.

                                            1. 5

                                              here “automatic memory management” is synonymous with “garbage collection”

                                              I don’t think I’ve ever heard anyone say this rigidly. I’ve certainly heard folks talk about garbage collection as a form of automatic memory management, and I’ve heard the terms used interchangeably. But including region analysis in the umbrella of automatic memory management seems more than fair. See: http://www.memorymanagement.org/glossary/a.html#term-automatic-memory-management (Interestingly, that glossary draws a distinction between “automatic memory management” and “automatic storage duration,” which corresponds to what C does for stack allocation.)

                                              It’s kind of like how in some circles, “garbage collection” has become synonymous with “tracing garbage collection,” even though there are more interesting particulars to tease out of those definitions. Usually the context makes it clear which is implied, but we Internet folk still can’t resist the urge to get into a good ol’ fashioned war over definitions, which typically results in nothing but a giant waste of time.

                                              Maybe it would be better to just use “region based memory management” to describe Rust. It is undoubtedly a more precise term. But I don’t think “automatic memory management” is outright wrong, certainly not to the extent that you’ve expressed.

                                              1. 2

                                                here “automatic memory management” is synonymous with “garbage collection”

                                                I don’t think I’ve ever heard anyone say this rigidly

                                                I meant “here” as in “in the context of the above quote from the article”. Because, from the article, emphasis added:

                                                Historically, there has been two major forms of GC: reference counting, and tracing. The argument happens because, with the rise of tracing garbage collectors in many popular programming languages, for many programmers, “garbage collection” is synonymous with “tracing garbage collection.” For this reason, I’ve been coming around to the term “automatic memory management”, as it doesn’t carry this baggage.

                                                That is equating “GC” with “automatic memory management”. Whether they are generally considered to be exactly the same thing is not really relevant to the point I was making, which was that “garbage collection” has an established meaning which doesn’t match the usage in the article.

                                                But I don’t think “automatic memory management” is outright wrong, certainly not to the extent that you’ve expressed.

                                                I think it is outright wrong if you take “automatic memory management” to mean “garbage collection”, which the article does, and if you accept the established meaning for “garbage collection”.

                                                Edit: I’d be happier if it said “Rust has a form of automatic memory management” and never said anything about garbage collection. But then, it wouldn’t be anything special to Rust. The point it almost makes (and probably should have made) instead is that Rust, compared to other languages (such as C++) which give you the same form of “automatic memory management”, also gives you one of the main benefits found in many garbage-collected languages: memory safety. But that would hardly be news.

                                                1. 3

                                                  That is equating “GC” with “automatic memory management”.

                                                  Uh, really? That’s not how I read it. That there exists a distinction is pretty clear if you keep reading:

                                                  Similar to the dichotomy between static and dynamic typing, I think that Rust is proving out a new niche in the concept of garbage collection. That is, historically, we think of GC as something dynamic, that is, it involves a run-time component that does stuff at runtime. However, the idea of automatic memory management doesn’t inherently mean that it has to execute at runtime.

                                                  So I’m not really sure what you’re getting at here… To be frank, when I first saw your top comment, I thought you just read the title, reacted negatively to it, and then commented. (I can’t stand those types of comments, which is why I commented in the first place. But I’ve failed spectacularly at that approach; I didn’t anticipate you digging in your heels.) Because the article never actually talks about what “static garbage collection” even is. It’s just a title with some creative license. Pointing out its “contradictory” nature is completely missing the point.

                                                  Like, can you imagine? Let’s say I’m talking with someone who isn’t familiar with Rust. I say, “yeah it’s like having a static garbage collector.” But then here’s @davmac, “WELL ACTUALLY, that’s a contradiction.” “Well… yeah, I’m just using the term for illustrative purposes.” See? No big deal. We aren’t upending established terminology. Just explaining concepts.

                                                  And hey, yeah, maybe terms change meaning over time. It happens. Try telling someone the definition of “regular expression” and how \b(\w+)\1\b is totally not a regex for finding repetitious words. ;-)

                                                  1. 2

                                                    Uh, really?

                                                    Uh, yeah? No need to get nasty here.

                                                    That there exists a distinction is pretty clear if you keep reading

                                                    I honestly don’t see the paragraph you quote as making any distinction between AMM and GC. In my eyes, it’s clearly using them synonymously.

                                                    So I’m not really sure what you’re getting at here…

                                                    I think I’ve made that clear, frankly.

                                                    Because the article never actually talks about what “static garbage collection” even is

                                                    It does, in the paragraph you yourself quoted. Where it says “Rust is proving out a new niche in the concept of garbage collection” it’s distinctly claiming that what Rust provides is a form of garbage collection (also indicated by the title). Where it says that “the idea of automatic memory management doesn’t inherently mean that it has to execute at runtime” it is making a static-vs-dynamic distinction. It is saying that “the form of memory management that Rust provides is static garbage collection” - both directly in the title, and more elaborately in that paragraph.

                                          2. 2

                                            All of this is addressed in the article, and is pretty much beside the point.

                                          1. 22

                                            In Norway, there is a law about Reklamasjonsrett where the place that sells you something has to offer a repair (typically through some deal with the producer) within two or five years (depending on how long the thing is expected to last, in general a court may decide this). If they don’t manage to repair it within a few tries, you have the right to get a new one.

                                             The five year group includes stuff like dish washers, but court cases have also decided that e.g. cell phones may be “reklamert” for up to five years; the same goes for VCRs (an IR sensor failing after 3.5 years led to a case on that). I suspect high-end headphones would fall under the same category. However, buying it from “an ebay vendor” would put one in a worse position. There are Norwegian shops selling Jaybird headphones though …

                                            Warranty time offered by seller/producer does not affect the interpretation of reklamasjonsrett (they are completely independent), and it’s enough that the product is only partially failing.

                                            1. 6

                                              The UK has something like this as well, though as you might imagine all the relevant details differ. Goods must last a time ‘reasonable’ to the type of good, which unfortunately isn’t clearly defined for almost anything, leaving it up to courts to decide. The exception is that if a product breaks within the first six months after purchase, the burden of proof is on the seller to show that it wasn’t their fault, excepting some items obviously not intended to be durable. So most reputable UK-based sellers will repair/refund/replace in the first six months unless you obviously damaged the product yourself. In theory, claims can be made up to six years, but past the first six months, the burden of proof is on the customer to argue that the product was faulty and failed to last a reasonable time, and they have to take the seller to court to enforce the claim if the seller rejects it, which is pretty rare.

                                              1. 4

                                                For those interested, reklamasjonsrett translates to “reclamation right” in English. “Reclaiming” a product, unless I’m mistaken, means returning it and getting a new one.

                                                1. 4

                                                  Yeah, I was a bit scared of translating a legal term … the Wikipedia page’s English link goes to https://en.wikipedia.org/wiki/Consumer_complaint which wasn’t too helpful

                                                  1. 2

                                                    Yes, reklamasjon seems to have a special meaning in (some of?) the Nordic countries, but I thought it would still be interesting to know what the word means literally.

                                                    1. 2

                                                      From Swedish Wikipedia:

                                                      Ordet reklamation härstammar från latinets reclamo och betyder “att ropa mot” eller “att protestera mot”.

                                                      Rough translation:

                                                      The word reklamation comes from the Latin reclamo and means “to call against” or “to protest against”.

                                                2. 2

                                                  This is true in most countries I believe. New Zealand has a similar law: the Consumer Guarantees Act. As far as I know there’s not much in the way of well-established timeframes like 2-5 years; it’s whatever is considered a reasonable timeframe by a hypothetical reasonable person - typical common-law stuff.

                                                  And similarly, nothing at all to do with the ‘warranties’ offered by people selling stuff. Retailer warranties aren’t worth the paper they’re written on.

                                                  In New Zealand if the product is faulty (partially or wholly, doesn’t matter) then you can take it back and if it’s reasonable to do so they can replace it or repair it or give you a refund, their choice. But if they repair or replace it and it is faulty again you can choose to get a refund.

                                                1. 3

                                                  During the daytime, I like to work in pubs (e.g. 1, 2). The nicer ones usually make a good latte, and around here they tend not to be busy during the day (only in the evening), so you can use them like a quieter and differently decorated version of a coffee shop if it’s before about 16:30. Also, for some reason, the pubs around here have better views and ambience than the coffee shops do.

                                                  1. 2

                                                    This book contains details of all 245 deep-space probe launches and attempted launches between 1958 and 2016, with information mostly verified in primary sources. Not really something to read through cover-to-cover (at least for me), but interesting to browse and as a reference. Also, it’s free.

                                                    1. 3

                                                      Two of the authors also have a survey-paper preprint up on arXiv, Survey and Taxonomy of Lossless Graph Compression and Space-Efficient Graph Representations.

                                                      edit: Although the survey doesn’t cover or cite their own Log(Graph) approach. Must’ve been written before they did their own work.

                                                      1. 1

                                                        Thanks! That graph on 1:4 is packed with info, too.

                                                      1. 3

                                                        Is OCaml making something of a comeback, or is this some Baader-Meinhof stuff? I just started working with it a bit to do a new plugin for LiquidSoap, and suddenly it seems it’s all over my feeds.

                                                        1. 5

                                                          I’m not sure, but I think ReasonML might be raising a bit of interest and/or awareness. It certainly has for me - ReasonML and ReasonReact are about at the top of my new-things-to-try list.

                                                          1. 3

                                                            i think even before that, ocaml has been making a steady if gradual comeback over the last few years. opam, for instance, has been a pretty big boost for it (never underestimate the value of a good package manager in growing an ecosystem!), jane street’s dune is really exciting since build systems have always been a bit of a weak spot, and more recently, bucklescript has been attracting a lot of attention among the webdev crowd even before reason came along (and now, of course, reason/bucklescript integration is a pretty big thing)

                                                            1. 1

                                                              Sadly I’ve never been able to figure out Opam… it seems to mutate the global package environment every time you want to build something, just like Cabal does (although they are fixing it with the cabal new-* commands). This is a massive pain if you want to work on different projects at once, and makes it hard to remember what state your build is in. Wish they would learn a bit from Cargo and Yarn and make it more user-friendly - so much good stuff in OCaml and Coq that I’d love to play around with, but it’s wrapped up in a painful user experience. :(

                                                              1. 3

                                                                are you using opam to develop projects foo and bar simultaneously, and installing foo’s git repo via opam so that bar picks it up? that is indeed a pain; i asked about it on the ocaml mailing list once and was recommended to use jbuilder (now dune) instead, which works nicely.

                                                                1. 1

                                                                  just came across this, might be useful: https://esy.sh/

                                                            2. 4

                                                              OCaml is such a well-designed programming language with such wide-reaching influence that I hope people continually rediscover it and it does make a comeback.

                                                              1. 3

                                                                Andreas Baader and Ulrike Meinhof co-founded a left-wing terror group in Germany in 1970 that killed over 30 people. I can’t see a connection here, but maybe you could elaborate on your comment.

                                                                1. 6

                                                                  It’s a synonym for the “frequency illusion” in some circles, the illusion of something being more common recently when it’s just that you started noticing it more recently.

                                                                  1. 2

                                                                    I don’t get the connection.

                                                                    1. 3

                                                                      It’s like the Streisand Effect or the Mandela Effect: named because of some phenomenon around an event that an onlooker noticed and popularized, not because of any connection to the person themselves.

                                                                      https://science.howstuffworks.com/life/inside-the-mind/human-brain/baader-meinhof-phenomenon.htm

                                                                      “Now if you’ve done a cursory search for Baader-Meinhof, you might be a little confused, because the phenomenon isn’t named for the linguist that researched it, or anything sensible like that. Instead, it’s named for a militant West German terrorist group, active in the 1970s. The St. Paul Minnesota Pioneer Press online commenting board was the unlikely source of the name. In 1994, a commenter dubbed the frequency illusion “the Baader-Meinhof phenomenon” after randomly hearing two references to Baader-Meinhof within 24 hours. The phenomenon has nothing to do with the gang, in other words. But don’t be surprised if the name starts popping up everywhere you turn [sources: BBC, Pacific Standard].”

                                                                2. 2

                                                                  I also seem to see it more often recently. Either it’s Baader-Meinhof too (umm; or “reverse Baader-Meinhof”? it seems I’m getting pulled in thanks to recent exposure?), or I have a slight suspicion that ReasonML may have contributed to some increase in OCaml public awareness. But maybe also Elm, and maybe F# too?

                                                                1. 2

                                                                  One additional incremental step you could take from here is to set N adaptively based on how much data is available in a given context, rather than choosing a fixed N. The reason you end up with verbatim strings of input text repeated if you set N too high is that the input data set might literally have only one example of that particular sequence of, say, 4 words; so you reduce to N=3 or N=2. But other sequences of words might be very common, with the 4-gram distribution better modeled in those contexts, so N=4 would be safe there. This kind of adaptation is called a “back-off model”: starting with a higher N but backing off to lower N in contexts with sparser data. The Katz back-off model is one of the better-known examples.

                                                                  You can also treat this issue as a smoothing problem, which deals with the additional issue that not all sequences you haven’t seen in the input should literally have zero likelihood; some really wouldn’t exist even in a larger dataset, but others are missing just due to the finite size of a dataset. There are many such models (studied as early as Turing), summarized in this paper (see also these tutorial slides).
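
                                                                  A minimal sketch of the back-off idea (entirely my own illustration, not the Katz model itself; `ngram_counts` and `backoff_distribution` are hypothetical names): count n-grams up to some maximum order, then, when predicting the next word, drop to a shorter context whenever the current one is too sparse.

```python
from collections import defaultdict

def ngram_counts(tokens, max_n):
    """Count every n-gram of order 1..max_n in a token sequence."""
    counts = defaultdict(int)
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

def backoff_distribution(counts, context, vocab, min_count=2):
    """Distribution over the next word, backing off to a shorter
    context whenever the current one was seen fewer than min_count
    times in the training data."""
    context = list(context)
    while context and counts.get(tuple(context), 0) < min_count:
        context = context[1:]  # back off: drop the oldest word
    ctx = tuple(context)
    freqs = {w: counts.get(ctx + (w,), 0) for w in vocab}
    total = sum(freqs.values())
    return {w: c / total for w, c in freqs.items() if c} if total else {}
```

                                                                  With a toy corpus like "the cat sat on the mat the cat ate", the context ("the", "cat") is frequent enough to use directly, while a context never seen in training backs off all the way down to the unigram distribution. A real back-off model (e.g. Katz) also discounts the higher-order counts so the distributions stay properly normalized across orders.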

                                                                  1. 3

                                                                    So to what extent is “probabilistic programming” a new/different programming paradigm and to what extent is it something like a DSL for setting-up various probabilistic algorithms?

                                                                    Not to imply I’d dismiss it if it was the latter.

                                                                    1. 2

                                                                      People have approached it from both sides, so some systems have a flavor more like one or the other.

                                                                      A very simplified story which you could poke holes in, but which I think covers some part of the history is: The earliest systems (from where the name came) thought of themselves, I believe, as adding a few probabilistic operators to a “regular” programming language. So you mostly programmed “normally”, but wherever you lacked a principled way to make a decision, you could use these new probabilistic operators to leave some decisions to a system that would automagically fill them in. If you want to analyze what the resulting behavior is, though, it’s somewhat natural to view the entire program as a complicated model class, and the whole thing therefore as a way of specifying probabilistic models. In which case, if you want the system to have well-understood behavior, and especially if you want efficient inference, there’s a tendency towards wanting to constrain the outer language more, ending up with something that really looks more like a DSL for specifying models. Lots of possible places to pick in that design space…
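
                                                                      As a toy sketch of that first flavor (entirely my own illustration; `flip` and `infer` are hypothetical names, not any real system’s API): a “regular” Python program with a probabilistic choice operator sprinkled in, and inference done by the crudest method available, rejection sampling.

```python
import random

def flip(p):
    """A probabilistic choice operator: True with probability p."""
    return random.random() < p

def model():
    """An otherwise ordinary program with two probabilistic choices."""
    rain = flip(0.2)
    sprinkler = flip(0.4)
    wet = rain or sprinkler
    return rain, wet

def infer(model, condition, query, samples=100_000):
    """Rejection sampling: run the program many times, keep only the
    runs where the condition holds, and report how often the query
    is true among them."""
    kept = [query(run) for run in (model() for _ in range(samples))
            if condition(run)]
    return sum(kept) / len(kept)

# Estimate P(rain | grass is wet); analytically 0.2 / 0.52 ≈ 0.385
p = infer(model, condition=lambda run: run[1], query=lambda run: run[0])
```

                                                                      Real systems replace rejection sampling with far more efficient inference (MCMC, variational methods, etc.), and it’s exactly the desire to make that inference tractable that pushes designs toward the constrained, DSL-like end of the spectrum.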

                                                                      1. 2

                                                                        I think it’s useful to think about it as something like logic programming. Logic programming is useful when the answer you want can be posed as the solution to a series of logical constraints. Probabilistic programming shines when your problem can be posed as the expectation of some conditional probability distribution. Problems that benefit from that framing are particularly well-suited for probabilistic programming.

                                                                        I think its use in practice will, like SQL or Datalog, resemble using a DSL, as you don’t need probabilistic inference everywhere in your software; but in principle, as it is just an extension of deterministic programming, it does not need to be restricted in this way.

                                                                      1. 6

                                                                        Results are interesting. Of course this is only in one class in one university, with four intro programming problems studied. But in that context, the summary of their findings is that students’ initial preference for how to solve a problem was almost always iteration. This varied somewhat depending on the problem but was fairly strong, ranging from 66% to 100% preferring an iterative solution (0-34% preferring a recursive solution). However, this preference didn’t seem to predict success: When assigned to implement both iterative and recursive solutions, generally a larger proportion of the class was able to produce a correct recursive solution than a correct iterative one.

                                                                        1. 1

                                                                          Excellent summary. My thought on this was that it’s a nice model to rerun in a bunch of places on a bunch of problems newcomers can handle. When the big picture emerges, we’ll see which is really more intuitive or useful in such situations.

                                                                        1. 2

                                                                          Can you say what some of the main conclusions are? Or is this primarily a survey? E.g.,

                                                                          this article identifies game facet orchestration as the central challenge for AI-based game generation

                                                                          In particular, we identify the different creative facets of games

                                                                          Are these the six facets: audio, visuals, levels, rules, narrative and gameplay? But the intro also suggests that only music (audio?) will be looked at. Or maybe it’s only meant as a linguistic metaphor.

                                                                          we propose how orchestration can be facilitated in a top-down or bottom-up fashion,

                                                                          How?

                                                                          we conclude by discussing the open questions and challenges ahead.

                                                                          I’m guessing this is

                                                                          • Combinatorial Explosion
                                                                          • Learning the Mapping Between Facets
                                                                          • Orchestration with Human Designers
                                                                          • Evaluating Orchestration

                                                                          envisioned several high-level orchestration processes along a spectrum between purely hierarchical top-down generation and organic bottom-up generation.

                                                                          What are some examples of these processes?

                                                                          (I’m thinking this is just my unfamiliarity with the topic, but the last two sentences of the abstract are saying almost the same thing.)

                                                                          I wish abstracts in general gave more information about their conclusion like key pieces of information the authors would have liked to know before starting the project. Stuff that would have helped speed things up a lot.

                                                                          1. 4

                                                                            I’d describe it as a forward-looking survey. The starting point is that there’s a large existing body of work on procedural content generation, but they’re systems that generate one kind of thing at a time, like level generators, pixel-art makers, or procedural audio. Can you get to full-game generation by just plugging these kinds of generators together in some way? We discuss different architectures for orchestrating these kinds of generators: top-down, bottom-up, pipelined, etc., and survey existing systems that have already done some version of that.

                                                                            The six facets we propose are the ones you mentioned, yeah. There are many other ways you could slice game design, so this isn’t necessarily the ultimate metaphysical truth about games, that they’re made of exactly six kinds of stuff. But it’s based on looking at what kinds of things existing procedural generators currently generate (level generators are probably the most common, but there’s work in the other five too).

                                                                            Yeah, the “orchestrate”, “jam”, etc. terminology is just a musical metaphor. We don’t only focus on game music here, but we use analogies like top down orchestra-style scoring/conducting, where every element of the overall system is given its centrally written part, vs. bottom-up jazz improvisation where they coordinate less hierarchically, etc. I can see how that can get confusing, sorry.

                                                                            The survey part of the paper is in the case study section, where we give an overview of nine existing systems that generate more than one kind of thing in tandem (e.g. rules, levels, and music). We translate what each of them does to our language of 6 facets and different orchestration strategies, to try to get an idea of how all the existing stuff relates to each other, and what it doesn’t do yet. The first author (A. Liapis) made some colorful triangle facet-orchestration diagrams for that section that I quite like. They summarize what each system is doing in a kind of pictogram language, showing which facets the system generates and how they interrelate (e.g. some systems have a pipeline, some scrape content off the web, some ask for user input at certain points, etc.).

                                                                            edit: I also wrote a short twitter thread earlier today with a more concise version of this explanation, for people who like twitter threads (I know, I know).

                                                                            1. 1

                                                                              Thanks! The figures in the survey are indeed really nice. (It wasn’t obvious before reading your comment that the arrows were what conveyed the orchestration process.)

                                                                          1. 7

                                                                            I would have rather seen the HardenedBSD code just get merged back into FreeBSD. I’m sure there are loads of reasons, but I’ve never managed to find them; their website doesn’t make that clear. I imagine it’s mostly for non-technical reasons.

                                                                            That said, it’s great that HardenedBSD is now set up to live longer, and I hope it has a great future, as it sits in a niche that otherwise only OpenBSD occupies, and it’s great to see some competition/diversity in this space!

                                                                            1. 13

                                                                              Originally, that’s what HardenedBSD was meant for: simply a place for Oliver and me to collaborate on our clean-room reimplementation of grsecurity for FreeBSD. All features were to be upstreamed. However, it took us two years in our attempt to upstream ASLR. That attempt failed and resulted in a lot of burnout with the upstreaming process.

                                                                              HardenedBSD still does attempt the upstreaming of a few things here and there, but usually more simplistic things: We contributed a lot to the new bectl jail command. We’ve hardened a couple aspects of bhyve, even giving it the ability to work in a jailed environment.

                                                                              The picture looks a bit different today. HardenedBSD now aims to give the FreeBSD community more choices. Given grsecurity’s/PaX’s inspiring history of pissing off exploit authors, HardenedBSD will continue to align itself with grsecurity where possible. We hope to perform a clean-room reimplementation of all publicly documented grsecurity features. And that’s only the start. :)

                                                                              edit[0]: grammar

                                                                              1. 6

                                                                                I’m sorry if this is a bad place to ask, but would you mind giving the pitch for using HardenedBSD over OpenBSD?

                                                                                1. 19

                                                                                  I view any OS as simply a tool. HardenedBSD’s goal isn’t to “win users over.” Rather, it’s to perform a clean-room reimplementation of grsecurity. By using HardenedBSD, you get all the amazing features of FreeBSD (ZFS, DTrace, Jails, bhyve, Capsicum, etc.) with state-of-the-art and robust exploit mitigations. We’re the only operating system that applies non-Cross-DSO CFI across the entire base operating system. We’re actively working on Cross-DSO CFI support.

                                                                                  I think OpenBSD is doing interesting things with regard to security research, but OpenBSD has fundamental paradigms that may not be compatible with grsecurity’s. For example: by default, it’s not allowed to create an RWX memory mapping with mmap(2) on both HardenedBSD and OpenBSD. However, HardenedBSD takes this one step further: if a mapping has ever been writable, it can never be marked executable (and vice versa).

                                                                                  On HardenedBSD:

                                                                                  void *mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE | PROT_EXEC, ...); /* The mapping is created, but RW, not RWX. */
                                                                                  mprotect(mapping, getpagesize(), PROT_READ | PROT_EXEC); /* <- this will explicitly fail */
                                                                                  
                                                                                  munmap(mapping, getpagesize());
                                                                                  
                                                                                  mapping = mmap(NULL, getpagesize(), PROT_READ | PROT_EXEC, ...); /* <- Totally cool */
                                                                                  mprotect(mapping, getpagesize(), PROT_READ | PROT_WRITE); /* <- this will explicitly fail */
                                                                                  

                                                                                  It’s the protection around mprotect(2) that OpenBSD lacks. Theo’s disinclined to implement such a protection, because users will need to toggle a flag on a per-binary basis for those applications that violate the above example (web browsers like Firefox and Chromium being the most notable examples). OpenBSD implemented WX_NEEDED relatively recently, so my thought is that users could use the WX_NEEDED toggle to disable the extra mprotect restriction. But, not many OpenBSD folk like that idea. For more information on exactly how our implementation works, please look at the section in the HardenedBSD Handbook on our PaX NOEXEC implementation.

                                                                                  I cannot stress strongly enough that the above example wasn’t given to be argumentative. Rather, I wanted to give an example of diverging core beliefs. I have a lot of respect for the OpenBSD community.

                                                                                  Even though I’m the co-founder of HardenedBSD, I’m not going to say “everyone should use HardenedBSD exclusively!” Instead, use the right tool for the job. HardenedBSD fits 99% of the work I do. I have Win10 and Linux VMs for those few things not possible in HardenedBSD (or any of the BSDs).

                                                                                  1. 3

                                                                                    So how will JITs work on HardenedBSD? Is the sequence:

                                                                                    mmap(PROT_WRITE);
                                                                                    // write data
                                                                                    mprotect(PROT_EXEC);
                                                                                    

                                                                                    allowed?

                                                                                    1. 5

                                                                                      By default, migrating a memory mapping from writable to executable is disallowed (and vice-versa).

                                                                                      HardenedBSD provides a utility that users can use to tell the OS “I’d like to disable exploit mitigation just for this particular application.” Take a look at the section I linked to in the comment above.

                                                                                  2. 9

                                                                                    Just to expound on the difference in philosophies: OpenBSD would never bring ZFS, Bluetooth, etc. into the OS, something HardenedBSD does.

                                                                                    OpenBSD has a focus on minimalism, which is great from a maintainability and security perspective. Sometimes that means you miss out on things that could make your life easier. That said OpenBSD still has a lot going for it. I run both, depending on need.

                                                                                    If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface. That’s not to say ZFS isn’t awesome, it totally is, but if you don’t need ZFS for a particular compute job, leaving it out gives bad people a much smaller surface to attack.

                                                                                    1. 5

                                                                                      If I remember right, just the ZFS sources by themselves are larger than the entire OpenBSD kernel sources, which gives ZFS a LOT of attack surface.

                                                                                      I would find a fork of HardenedBSD without ZFS (and perhaps DTrace) very interesting. :)

                                                                                      1. 3

                                                                                        Why fork? Just don’t load the kernel modules…

                                                                                        1. 4

                                                                                          There have been quite a number of changes to the kernel to accommodate ZFS. It’d be interesting to see if the kernel can be made to be more simple when ZFS is fully removed.

                                                                                          1. 1

                                                                                            You may want to take a look at DragonFly BSD then.

                                                                                      2. 4

                                                                                        Besides being large, I think what makes me slightly wary of ZFS is that it also has a large interface with the rest of the system, and was originally developed in tandem with Solaris/Illumos design and data structures. So any OS that diverges from Solaris in big or small ways requires some porting or abstraction layer, which can result in bugs even when the original code was correct. Here’s a good writeup of such an issue from ZFS-On-Linux.

                                                                                1. 3

                                                                                  More or less the same things as last week still: paper/code-reading for some kind of ML-related research project with GANs, fiddling with my software environment, and preparing some courses. Looking for suggestions for programming-languages textbooks or course materials if anyone has opinions about things that work particularly well (or badly). I won’t copy/paste the course’s requirements/constraints here, but see the last bullet point of the link above.

                                                                                  Besides that, people seem to have bought into my preference to reorganize our weekly research-group meetings at work into a monthly group meeting, with research seminars covering one specific topic/project in the in-between weeks. So I’m putting that together. We’ve grown to about 12 people in the group, so learning about what everyone is doing through brief weekly updates doesn’t really scale. It also provides a more official place for visitors to give us some kind of intro to what they do, which was previously mainly through chatting over lunch (not always a bad way of doing it, but sometimes something more organized with a projector is useful).

                                                                                  1. 2

                                                                                    The one on Debian gives an odd mix of stuff. Here’s today:

                                                                                    Sep 13 Walter Reed born, 1851

                                                                                    Sep 13 58 °C (136.4 °F) measured at el Azizia, Libya, 1922

                                                                                    Sep 13 British defeat the French at the Plains of Abraham, just outside the walls of Quebec City, 1759

                                                                                    Sep 13 Building of Hadrian’s Wall begun, 122

                                                                                    Sep 13 Chiang Kai-Shek becomes president of China, 1943

                                                                                    Sep 13 Barry Day commemorates the death of Commodore John Barry, USA

                                                                                    Sep 13 Bonne fête aux Aimé !

                                                                                    Sep 13 Kornél

                                                                                    Sep 13 День российской печати

                                                                                    In addition to being a strange selection, some of the dates seem to be wrong. The Russian Wikipedia article on День российской печати (“Day of the Russian Press”) says it’s celebrated on January 13.

                                                                                    edit: That last one appears to come out of /usr/share/calendar/ru_RU/calendar.common, which gives its date, correctly, as 13 янв. (13 Jan). So some date/locale conversion must be screwing up somewhere.

                                                                                      1. 1

                                                                                        My install (Ubuntu 16.04.3 LTS) correctly identifies День Волха Змеевича (day of Volha Zmeevich) as 14 Sep. The Russian Press day is not included. It is included on Jan 13 though:

                                                                                        $ calendar -t 0113 | grep 'День'
                                                                                        Jan 13  День российской печати
                                                                                        
                                                                                        1. 2

                                                                                          I checked on an Ubuntu install and got the same behavior as you. On my Debian install though (current ‘testing’ distribution), Russian Press Day seems to show up on the 13th day of every month, even though it’s correctly only on Jan 13 in the source file.

                                                                                      1. 4

                                                                                        Quoting from the6threplicant on HN:

                                                                                        A comment from someone who has actually read the article: https://np.reddit.com/r/ukpolitics/…

                                                                                        For example:

                                                                                        …the EFF argue that the idea of what constitutes a link is not fully defined. I’m not sure what they’re talking about. Recitals 31-36 set out the concepts in article 11, fairly clearly. They make it clear that what is being protected is substantial or harmful copying of significant portions of the text. They also make it clear what organisations this will affect - press organisations - with a fairly clear description of what a press organisation might constitute. (FWIW, memes are not covered, and anyone you hear talking about “banning memes” is getting their news from very poor sources.)

                                                                                        Personally, I haven’t (yet?) read the article, so I’m not sure who the “discussion winner” is here, but I’m happy to see this opposing voice. My curiosity is now piqued enough that I’m hoping to print the article and take it for a slow, relaxed read on a couch, with a pen ready for scribbling over the paper.

                                                                                        1. 1

                                                                                          More or less the same text is repeated in another reddit thread, under a different account name: https://dm.reddit.com/r/ukpolitics/comments/9f6ou3/article_13_has_been_passed/e5uh7ka/ I’m not sure what to make of that fact, but at first blush at least, I’m dubious of the EFF detractor’s claims.

                                                                                          1. 1

                                                                                            A reply to it by someone “working on it for EFF”. An especially important point seems to be that “recitals” in the directive are reportedly “not law” (i.e. not binding, I assume?)

                                                                                            1. 1

                                                                                              Interesting. It’d be nice to get a medium-length readable summary of the directive for laypeople. I generally like what the EFF does, but even in cases where I mostly agree with their take on things, they do tend to have very simple, call-to-action style summaries of what a given policy does. (Understandably so of course, because they’re a pressure group, and a whitepaper isn’t a very effective way of mobilizing anyone.)