1. 7

    Can someone explain to me, a Rust tourist at best, why async/await is desirable in Rust when awesome-sauce concurrency, thanks to the ownership / borrowing model, has been baked into Rust since its inception?

    FWIW I also really like the idea of working groups, and I think focusing on the areas where Rust gets the widest usage is super smart.

    1. 15

      The current Futures implementation exerts a lot of “rightward pressure” when you’re trying to chain multiple future results together. It works, and works safely, but it’s a bit messy to work with and there’s a lot of nesting to deal with, which isn’t easily readable.

      The async/await proposal is basically syntactic sugar to linearize logic like that into a straight-line set of reasoning that’s a lot easier to work with.
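
      As a rough sketch (fetch_user and load_posts are made-up functions returning futures, and the exact trait and await syntax were still in flux between futures 0.1 and the new std Future at the time), here's the same logic written both ways:

      // Combinator style: each step that needs an earlier value nests one closure deeper.
      fn greeting() -> impl Future<Item = String, Error = Error> {
        fetch_user(42).and_then(|user| {
          load_posts(user.id).map(move |posts| {
            format!("{} has {} posts", user.name, posts.len())
          })
        })
      }
      
      // async/await style: the same steps read top to bottom.
      async fn greeting() -> Result<String, Error> {
        let user = await!(fetch_user(42))?;
        let posts = await!(load_posts(user.id))?;
        Ok(format!("{} has {} posts", user.name, posts.len()))
      }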

      1. 15

        The biggest problem with the current Futures, as far as my experience goes, is that the method-chaining style involves so much type inference that if you screw up a type somewhere the compiler has no prayer of figuring out what you meant it to be, or even really where the problem is. So you have to keep everything in your head in long chains of futures. I’m expecting async/await to help with this just by actually breaking the chains down to individual expressions that can be type-checked individually.

        Edit: And it’s desirable in Rust because async I/O is almost always(?) going to be faster than blocking I/O, no matter whether it’s single threaded or multi-threaded. So it doesn’t necessarily have anything to do with threads, but rather is an orthogonal axis in the same problem space.

        1. 5

          I hope a lot of care is taken to make it easy to specify intermediate type signatures. I know that in other languages with type inference I’ll “assert” a signature halfway through some longer code mainly as docs but also to bisect type error issues.
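
          Something like the usual let-binding annotation, I'd hope; here's a generic sketch of the kind of mid-stream "assert" I mean (Record, parse_records, and summarize are made up):

          fn report(input: &str) -> String {
            // Assert the intermediate type here: if inference disagrees,
            // the error points at this binding instead of somewhere deep
            // in the chain below it.
            let records: Vec<Record> = parse_records(input);
            summarize(&records)
          }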

          1. 1

            Totally agreed. As far as I understand (which is not much), saying await!(foo()); is similar to return foo(); in how the language treats it, so you should be able to get the compiler pointing quite specifically at that one line as the place the type mismatch occurs and what it is. If you have to do foo().and_then(bar).and_then(bop); then it just says "something went wrong in this expression, sorry, here's the ten-generic-deep nested combinator that has an error somewhere".

            1. 1

              Async is the easier part. async fn will be sugar:

              // the sugar:
              async fn async_fun() -> String {
                // something
              }
              
              // ...which desugars to an ordinary fn returning a future
              // (the std Future trait's associated type is Output):
              fn async_fun() -> impl Future<Output = String> {
                // something
              }
              

              Under the hood, this builds a generator. await sets up the yield points of the generator.

              async fn async_fun() -> String {
                // an immediately-ready future standing in for real async work
                let future = futures::future::ready(String::from("hello, i'm not really asynchronous, but i need a quick example!"));
                let string: String = await!(future);
                string
              }
              

              So yes, the type mismatch would occur at the binding of the await and the right hand side is much easier to grasp. Basically, “and_then” for chaining can now largely be replaced by “await”.

          2. 1

            Ah, you’re right. I SHOULD know this in fact from the bad old days of Java when “Non Blocking IO” came out :)

            1. 1

              This has pretty much been my only major negative with Rust up to this point. I've got three apps underway in Rust, all using Futures, and it just starts getting hairy at a certain level of complexity: you can be hammering out code, and when you get to your Futures chaining it stops you dead in your tracks because it's hard to read and hard to reason about quickly. So I'm on board with reserving async/await, for sure.

            2. 3

              This sums it up very well. I can do everything I personally want to do with Futures as they exist now in Rust. That said, I feel like async/await will really clean things up when they land.

              1. 2

                That’s interesting! I guess I’d mostly thought of async/await as coming into play in languages like Python or Javascript where real concurrency wasn’t possible, but I suppose using them as a conceptual interface like this with real concurrency underneath makes a lot of sense too.

              2. 9

                I believe async/await are desirable in all languages that implement async I/O because the languages usually walk this path, motivated by code ergonomics:

                1. Async I/O and timing functions return immediately, and accept a function to call (“callback”) when they’re done. Code becomes a pile of deeply nested callbacks, resulting in the “ziggurat” or “callback hell” look.
                2. Futures (promises) are introduced to wrap the callback and success/failure result into an object that can be manipulated. Nested callbacks become chained calls to map/flatMap (sometimes called then).
                3. Generators/coroutines are introduced to allow a function to suspend itself when it’s waiting for more data. An event loop (“executor”, “engine”) allows generators to pause each time a future is pending, and resume when it’s ready.
                4. “async”/“await” keywords are added to simplify wiring up a promise-based generator.

                In rust’s case, I think it was “implement generators/coroutines” which hit snags with the existing borrow checker.

                There’s a cool and very in-depth series of articles about the difficulty of implementing async/await in rust starting here: https://boats.gitlab.io/blog/post/2018-01-25-async-i-self-referential-structs/ (I’m pretty sure this was posted to lobsters before, but search is still broken so I can’t find it.)

                1. 8

                  “async”/“await” keywords are added to simplify wiring up a promise-based generator.

                  Going further: it follows the very general algebraic pattern of a monad. Haskell has "do-notation" syntax which works for Promises but also Maybe, Either, Parser, etc.

                2. 8

                  In addition to the great explanations of others, here are a couple diffs where the Fuchsia team at Google was able to really clean up some code by switching to async/await:

                  1. 1

                    Interesting! That speaks to the Rust 2018 initiative’s focus on ‘embedded’ in the mobile sense.

                    1. 3

                      The initiative has been surprisingly successful. Most of my clients are currently on embedded Linux and smaller.

                1. 18

                  Another fun one is when they mention a “dynamic” environment. This generally means that priorities will be constantly shifting and you’ll have trouble finishing a task before the work is re-prioritised and you have to start working on the next emergency.

                  1. 15

                    good old Running Around With Your Hair on Fire Driven Development

                  1. 2

                    What’s wrong with current’s implementation written in ANSI C? Seems like a waste of human resources, in my opinion.

                    1. 32

                      What’s wrong with current’s implementation written in ANSI C?

                      If you want to script a Go application, it’s a hell of a lot easier if the VM is native instead of trying to fight with C interop issues.

                      Seems like a waste of human resources, in my opinion.

                      Incredible that with zero insight into the business requirements or technical motivations behind a project, you still consider yourself qualified to judge the staffing allocations behind it

                      1. 7

                        Incredible that with zero insight into the business requirements or technical motivations behind a project, you still consider yourself qualified to judge the staffing allocations behind it.

                        You are definitely right about this.

                        If you want to script a Go application, it’s a hell of a lot easier if the VM is native instead of trying to fight with C interop issues.

                        As @tedu commented, I understand the motivation behind it, thanks for pointing it out too.

                        1. 2

                          They give some further explanation of the technical requirements that led them not to pick up Shopify's version here, which I found interesting: https://github.com/Azure/golua/issues/36.

                          The TL;DR is they’ve been scripting Helm for a while using another Lua-in-Go engine, but it isn’t updating to 5.3, which they very much want, along with closer conformance to the Lua spec, plus they have some design ideas that they feel would make debugging etc easier.

                          1. 3

                              To each their own, but I'm a little perplexed that strict 5.3 conformance would be a driver. I'm using gopher-lua, which is like 5.1 but not exactly, and it's just fine. What I need is some scripting language to embed; it's close enough to Lua that it's not really learning a whole new whatever, and it interfaces nicely with Go (more nicely than a strict port of the stack API would).

                            I don’t know jack about helm, but after seven seconds of research I can’t reverse engineer a requirement for lua 5.3 accept no substitutes.

                            1. 1

                              It’s possible that they have some need of true 64-bit integers, which only came to Lua in 5.3. That’s the only important change I can think of.

                      2. 5

                        Writing memory-safe C is an artform and a craft.

                        Writing safe code with Go is something a trained orangutan can do.

                        1. 3

                          Lua bytecode is memory-unsafe in the reference implementation, for starters.

                          1. 2

                            Do you have a link or more details?

                            1. 1

                              My upload server was down, sorry I didn’t link this originally. Here’s a slide deck (pdf) of a presentation I gave at Houston Area Hackers with almost all the details you could want.

                              I still have to update this; there are quite a few more tricks you can pull. I ended up creating a pure-Lua dlsym() implementation for the project that spurred all this research.

                              1. 2

                                Hmmm… Do reimplementations like this one remove support for lightuserdata then? I’m having a hard time imagining a lua interpreter for arbitrary bytecode that supported lightuserdata but could nevertheless guarantee that those pointers were safe.

                                1. 1

                                  Yeah. In gopher-lua, userdata actually looks kind of like lightuserdata, which doesn’t exist. When both host and lua state share an allocator, the difference becomes less meaningful.

                                  1. 1

                                    Looks like it’s unimplemented. lightuserdata is a pretty damn leaky Lua API. Bytecode itself is even more leaky in the Lua C implementation - arbitrary Lua bytecode is as expressive as C. No surprise that MS seemingly wanted to make an API/ABI compatible implementation that fixed some unnecessary leakiness.

                            2. 2

                              It’s annoying to interface with from go code.

                              1. 1

                                Seems like something that should be fixed in Go rather than reimplementing everything written in C.

                                1. 1

                                  For better or worse, it’s a result of deliberate design trade-offs in go, so it’s unlikely to change.

                            1. 6

                              I can’t read this article without somehow getting redirected to a “club” offering free Walmart gift cards.

                              1. 6

                                I’d suggest installing an ad blocker. Not to read this particular blog post so much as because this is a good illustration of just how rampant malicious ads are on ad networks these days.

                                1. 1

                                  Yeah, I tried on my phone and got lucky.

                                2. 2

                                  Ugh, sorry about that. I guess that’s just wordpress? I don’t know of a better place to have my blog at.

                                  1. 5

                                    github pages seems to be trustworthy still

                                    1. 2

                                      Use GitHub Pages. Or DEV. Or NeoCities. Or, heck, even Medium (Medium has four different analytics packages, but even they don’t use an ad network).

                                      1. 2

                                        I like GitLab Pages

                                        1. 2

                                          I self-host Wordpress on my friend’s Dreamhost instance. Could open up a spot for you. :-)

                                          1. 1

                                            SDF offers comparatively cheap hosting: https://sdf.org/?tutorials#web

                                        1. 7

                                          Note: The update is that it turns out that they do allow it (if you disable secure booting), but there is no driver for the T2/SSD in Linux yet.

                                          1. 1

                                            It is not clear that it is simply a driver issue. It may be that the T2/SSD is actually locked out of use. I have seen no hard information on this either way.

                                            1. 2

                                              At least it sounds like it isn’t intentional if this posting on the comments of the article is legitimate (contains link to image of a twitter conversation with Craig Federighi): https://bbs.boingboing.net/t/apples-new-bootloader-wont-let-you-install-gnu-linux-updated/132982/2

                                              1. 1

                                                Looking at the stackexchange link, turns out T2 does work as a “normal” NVMe controller!

                                                But it shuts down the whole machine in 10-30 seconds, because it seems to detect an unauthorized OS.

                                                Maybe a driver could be made that does whatever a bootcamped Windows 10 does to appear legitimate…

                                            2. 1

                                              I’m no fan of Apple’s recent Mac hardware, but my first thought was “Don’t most people turn off Secureboot by default anyway?” :)

                                              1. 3

                                                For most values of “most people”, I’d find that hard to believe tbh

                                                1. 2

                                                  I have utterly anecdotal evidence that says that whenever anyone has problems with any UEFI Linux install the first suggested step for remediation is “Turn off Secureboot.”

                                                  1. 7

                                                    Most people don’t have problems with UEFI Linux because most people don’t install Linux onto hardware that didn’t ship with it.

                                                    1. 1

                                                      OK I’m coming from a place of ignorance so I’ll bite. Do you have actual data on that? I don’t get that impression from the various mailing lists, forums etc. Do people posting quesetions there represent the vocal minority?

                                                      1. 3

                                                        Linux: 2.04%, macOS: 9.40%, Windows: 88.05%

                                                        Linux has a much higher market share on smartphones, but those ship with it, rather than having the end-user install it.

                                                        1. 1

                                                          I’m seeing several people, including you, dance around the actual question.

                                                          “Of Linux users, how many install Linux on machines themselves and how many buy machines that come with Linux pre-installed?”

                                                          Your data has no bearing on this question.

                                                          Or did I mis-understand what we were actually talking about here?

                                                          1. 1

                                                            The desktop, where Linux is not shipped on most machines, has few Linux users.

                                                            Smartphones and servers, where Linux is very popular, either ship with Linux or with no operating system at all.

                                                            1. 1

                                                              Knowing that few end-user systems (desktop, laptop, notebook, convertible, tablet, whatever…) are shipped with Linux it becomes clear that there are in fact many who install Linux on systems which did not come with it. To get at actual numbers you’d have to get sales data from the likes of Dell and HP as well as those from companies which specialise in Linux systems.

                                                                Another way to gauge the interest in installing Linux on non-Linux systems is by looking at the mailing lists and forums for Linux distributions, where you'll find a plethora of questions and answers on the subject of installing Linux on this or that system. Yet another way is to look at the number of downloads for Linux distributions. Some distributions keep a tally of how many installs there are; these can provide some data.

                                                              Remember the times when people spoke about the ‘Microsoft Tax’ or ‘Windows Tax’ which was levied when buying hardware? Those were people who wanted to install something else on their store-bought systems, usually some form of Linux, less frequently a BSD or something else.

                                                                Seen as a percentage of total sales, the number of systems which are destined to have their Windows replaced with Linux is small. However, this small percentage still represents a considerable number of systems. How many of those come with UEFI and 'secure boot' remains to be seen; often Linux is installed after the machine has previously been used with Windows, and as such the machines on which Linux is installed are often from a previous generation. These machines often did not come with, and were not encumbered by, UEFI and 'secure boot'.

                                                        2. 2

                                                          Linux desktop marketshare is relatively (to the size of overall desktop usage) still a small percentage (from what I understand), so presumably “most people” don’t ever install Linux at all?
                                                          Of those that do use Linux, it would certainly be interesting to see how many of them run on systems that both support secureboot, and have it enabled.

                                              1. 8

                                                  The timenow.New accepts a pointer to http.Client, so we can pass nil if we want to let the function itself instantiate the dependency, or we can specify it.

                                                  Ugh. In an API intended for consumption by downstream users, magic parameters intended only for the framework's developers to accomplish internal testing of the framework's implementation are an awful architectural pattern that unfortunately has way too much industry currency as "the right way to solve mocking".

                                                I’ve seen far too many codebases with constructor parameter lists that are an absolute grab-bag of random dependencies with no clear meaning to end users trying to instantiate anything other than “just go with the defaults that magically appear”. It’s a bad pattern that reduces encapsulation (you’ve now given downstream consumers a lot of visibility and reach into your implementation internals, to the point where you’ve got no guarantee that you’re relying on what you expect you’re relying on at runtime), and it seriously harms the communication of intention to other developers who don’t already have the framework’s internal implementation in their heads (because now you’ve confused the public API with a bunch of information about the API’s implementation)

                                                There are other ways to solve this problem. If you embed httpClient in a wrapper struct, timeNow can depend, directly and internally, on your wrapper, and a good build tool will let you expose an alternate implementation of the wrapper to timeNow when compiling it for testing.

                                                “Injecting” mocked dependencies at test compile time is going to be a lot cleaner in the long-run than watching every constructor in your application explode into a grab-bag of dynamic injection points that are never expected to dynamically vary during a production run anyways.

                                                1. 5

                                                  The strictly-better solution would be to define

                                                  type Doer interface {
                                                      Do(*http.Request) (*http.Response, error)
                                                  }
                                                  

                                                    which is implicitly satisfied by *net/http.Client. The timenow constructor would take an instance of that interface rather than a concrete object, which would permit the real client in production code and a mock client in tests.

                                                  1. 3

                                                      I worked at a company where we did this, but in Objective-C. So I simply had two constructors: one with explicit dependencies, and one that conjures them up and then calls the first one, which I think is slightly nicer than the solution presented in the post. The DI version can be in a private header and no one else needs to care about it.

                                                    1. 1

                                                        Do you have Go examples of your solution? I'd be interested to see the same example as presented, but written the way you describe!

                                                    1. 4

                                                      In my experience, there has always been more nuance between when to use ORMs/query builders/raw queries. There’s no reason why you can’t use an ORM for inserts/updates/simple selects and drop down to raw SQL for more complex queries. Query builders vs. raw SQL is another tradeoff: with query builders you have to learn new syntax, and it makes things more difficult to debug in production ( especially when paged at 3AM ;) ). With raw SQL though, you can write broken/insecure queries.

                                                      1. 6

                                                        One thing that happens with ORM, at least in the 20 odd years I’ve been grappling with them, is that they encourage object thinking, which is almost without fail a gigantic detriment to the design of the database. ORM thinking is closely correlated with the “nobody will ever access this data without the gigabytes of runtime” school of thought, and tends to impose severely suboptimal decisions onto the database, which is treated as a simple persistence layer.

                                                        1. 1

                                                            In my experience that has less to do with ORMs and more to do with people designing their objects before (or instead of) designing their tables. Hand-rolling all of your SQL won't save you if you didn't put the effort into the table design (although it might alert you to how dogshit your tables are to work with; that assumes you didn't farm all of the boilerplate queries out to some junior who doesn't know any better and doesn't complain, and you're still stuck refactoring your objects if you started the wrong way around). It's the upfront table design that's truly the important thing, not the mechanics of how queries get written afterwards.

                                                            With proper upfront table design, I've found ORMs useful for eliminating a lot of boilerplate scut-work, while still allowing direct access to SQL whenever you need to get smarter with your queries.

                                                          1. 2

                                                            But I don’t deny that actually modelling your data before getting started is critical.

                                                            1. 1

                                                              Well, sure. But ORMs push the pain further out until it’s probably too late to fix the catastrophe.

                                                            2. 1

                                                              Biggest complaint for me is ORMs for updating information on data objects that are deeply nested.

                                                              From a performance standpoint, what should be a one-off update_one or UPDATE turns into the clown car of loading in the object, fetching associations, eventually setting a value, and shoving it all back into the DB.

                                                              Granted, I also hate deeply-nested objects, soooooo…

                                                              1. 1

                                                              Relatedly, transactionally updating a single field via an ORM is a deadlock footgun (since it'll fetch without using FOR UPDATE, then try to lock the record).

                                                          1. 3

                                                            Um. Following this link I got redirected to some kind of spam website, that was blocked by my browser.

                                                            Edit: clicked through a bunch more times to try and reproduce, got something slightly different:

                                                            1. 2

                                                              I’ve seen this on compromised WordPress sites before. If it’s the same as what I investigated previously, they do something like push the spam/ad/etc. to 1% of traffic and that makes it difficult to inspect/discover.

                                                              1. 1

                                                                Does it say Comcast in there? Could that be targeted to that connection?

                                                                1. 1

                                                                  That’s… worrying. It’s a bog-standard wordpress site. What happens if you go to https://zwischenzugs.com?

                                                                  1. 1

                                                                    I clicked through a dozen times and nothing happened. It definitely didn’t happen every time on the original link either.

                                                                    1. 11

                                                                      Looks like it’s a malicious ad coming in. Hard to say which ad network it came from, since the site is loading an obscene number of them…

                                                                1. 9

                                                                  This post seems confused. The term “Open Source” was specifically created to denote source which is open but not “Free” in the “Free Software” sense. That a specific group (the OSI) later co-opted the term and took an overly narrow definition in order to please Corporations who didn’t want to see the movement create software they were unable to exploit is uninteresting.

                                                                  Open source is simply what it says on the tin – any source that is open. To pretend otherwise is to insist that Up is Down and Down is Up – openly viewable and shareable software that is simply not commercially exploitable becomes suddenly “closed” in some nonsense confusion of semantics.

                                                                  Lambasting Open Source for not being Free for commercial exploitation is like lambasting a jar labeled Chunky Peanut butter for not being Smooth. Nobody was pretending it was what you’re complaining it’s not.

                                                                  1. 6

                                                                    Open source and free software are almost identical in meaning. Compare their definitions:

                                                                    https://opensource.org/osd

                                                                    https://www.gnu.org/philosophy/free-sw.en.html

                                                                    The list of free software licenses and open source software licenses is also almost identical. Only a small number of obscure licenses meet one definition and not the other.

                                                                    You may be thinking of copyleft licenses, like the GPL.

                                                                    1. 5

                                                                      Open source and free software are almost identical in meaning. Compare their definitions:

                                                                      Yes, this is a later development that I specifically think is an utter load of crap that a single group (the OSI) is attempting to dictate to everyone in disregard of common sense and basic English semantics

                                                                      1. 7

                                                                        This isn’t common sense and basic English semantics. This is an important step in defending the open source community against bad actors which do exist, in great numbers and with great resources. I can’t sell you a Jeep advertised as a Honda and cite “common sense” because my family used to own a Honda and we called all cars Hondas thenceforth. That’s not how language works. We have commonly agreed upon definitions for things and people who abuse the terminology to promote products for which these terms are inappropriate are liars.

                                                                        1. 4

                                                                          I can’t sell you a Jeep advertised as a Honda and cite “common sense” because my family used to own a Honda and we called all cars Hondas thenceforth.

                                                                          A better analogy here would be if Honda had seized the trademark for “blue car” and insisted that it was unethical to refer to anything, no matter how plainly blue and car-shaped, other than a Blue Honda as a “blue car”.

                                                                          “Open” and “source” are not some meaningless trademarks that the OSI owns. They mean something, both separately and in combination, and it’s not what the OSI wants to dictate that it means. I’m not buying their attempts to re-define the English language, sorry.

                                                                          1. 3

                                                                            Okay, so it’s like selling a toy car as a car, then.

                                                                            Your attempts to avoid “redefining the English language” are exceedingly silly and baseless. Just use the term “source available” like everyone else. Instead you’re sowing confusion and discontent where it really doesn’t matter to you, and really does matter to everyone else.

                                                                            1. 1

                                                                              Okay, so it’s like selling a toy car as a car, then.

                                                                              No it’s like insisting that something that is clearly a toy and plainly a car is for some reason not permitted to use the term “toy car” because you won’t allow IBM to make money off of it without compensating the person who die-cast it.

                                                                              Instead you’re sowing confusion and discontent where it really doesn’t matter to you, and really does matter to everyone else.

                                                                                Oh good lord. It's free software extremists who are "sowing confusion and discontent" by appointing themselves Thought Police and decreeing that when referring to source code which is plainly open, the combination of words "open" and "source" is utterly verboten and some kind of attempt at fraud.

                                                                              1. 5

                                                                                If you don’t grab it, someone else will:

                                                                                https://www.infoworld.com/article/2671387/operating-systems/linus-gets-tough-on-linux-trademark.html

                                                                                Torvalds didn’t plan on gaining trademark protection for the word “Linux” when he began work on his OS, but by 1996 he started wishing he had. That’s when William R. Della Croce Jr. of Boston first started demanding 10 percent royalties on sales from Linux vendors, based on a trademark claim he had filed in 1994. The Linux kernel was still free software, but according to Della Croce, the name itself was his property.

                                                                                As bad as you imagine the current status of the term “Open Source” is now, it would be infinitely worse had Microsoft been able to grab up the term in the 1990s and use it as a cudgel against people who self-applied the term to what’s currently widely known as Open-Source Software. And the Ballmer “The GPL Is Cancer” Microsoft might well have.

                                                                                1. 0

                                                                                  You can’t just decide you want a trademark and have it because nobody else has registered it yet. Linux existed before the 1994 trademark, and it was thus completely invalid.

                                                                                  1. 4

                                                                                    You can’t just decide you want a trademark and have it because nobody else has registered it yet.

                                                                                    That’s probably why Della Croce eventually lost the court case and, thus, the trademark. However, there still had to be a court case, because he was still issued the trademark in the first place, which likely wouldn’t have happened had Torvalds or some other valid entity gotten the trademark first.

                                                                                  1. 0

                                                                                    Yes, you’re the egg. Congrats on figuring that out.

                                                                          2. 3

                                                                                Basic English meaning would suggest free software is simply proprietary software you don't have to pay for.

                                                                        2. -2

                                                                          The term “Open Source” was specifically created because the term “Free Software” was loaded with political baggage (RMS is a socialist, let’s not kid ourselves) and confusion (“free as in speech” vs “free as in beer”). It was specifically pushed by the Mozilla project, who always released their stuff under Free Software licenses, and commercial Linux vendors, who are obviously relying on the code being free for commercial exploitation.

                                                                          https://en.wikipedia.org/wiki/Open-source_model#Open_source_as_a_term

                                                                          1. 13

                                                                            RMS is a socialist

                                                                            Only if you don’t know what “socialist” means.

                                                                            1. 3

                                                                              The wikipedia entry is ahistorical – the phrase “open source” predates the Free Software movement’s co-opting of it by years, if not decades – since it is the single most obvious and commonly used descriptor of source code that is open for viewing.

                                                                          Mark Tarver wrote a bit about the problem with the co-opting of the term, and the resulting semantic confusion that the Free Software people are engaged in, here: http://marktarver.com/problems.html

                                                                          Hey look: here's the term "open source code" being used years before Mozilla was formed and years before ESR "invented" the term, to describe something where the source code was open but which couldn't be used commercially. http://www.xent.com/FoRK-archive/fall96/0269.html

                                                                              1. 4

                                                                                Thank you for actually providing a source! Unfortunately, it is just another person from 2009 asserting the same thing you did, while Wikipedia actually links to a primary source from 1998.

                                                                                Can you link to someone, maybe an academic studying the history of the term, or an old Usenet post from before 1998?

                                                                                (by the way, I’m not disagreeing with the core of that article: FOSS has hardly lived up to the promises that people made of it)

                                                                                1. 7

                                                                                  Check the Edit. That’s a usage from ’96,

                                                                                  Or hell, here’s some random post from 1990 on comp.sys.amiga https://tech-insider.org/personal-computers/research/1990/1126.html

                                                                                  BSD’s open source policy meant that user developed software could be ported among platforms, which meant their customers saw a much more cost effective, leading edge capability combined hardware and software platform. The marketplace saw SYSV as junk, and the AT&T platforms running it did so poorly in the market, AT&T did massive layoffs for the first time in their history, to make up for the losses.

                                                                                  I’d go on, but Google seems to have made it damn near impossible to search usenet. It’s not a particularly hard to hit upon term, anyways.

                                                                                  The OSI’s position on all of this, by the way, is that these pre-existing usages don’t count as pre-existing usages of “open source” because they didn’t mean what the OSI means by “open source”. Which is rather like Honda claiming that there were no pre-existing usages of the phrase “blue car” prior to Honda insisting it means specifically and only blue Hondas, because any prior usages of “blue car” didn’t specifically mean Hondas. It’s bafflegab.

                                                                                  1. 1

                                                                                    Okay, that’s actually really cool thing to know. Thanks for the info!

                                                                              2. 0

                                                                                “Free Software” was loaded with political baggage

                                                                                No, it isn’t. There’s nothing political about software freedom. It’s an ethical issue. Where is the free software political party? Where are the free software policies of different political parties? Nowhere. ‘Political’ doesn’t mean ‘controversial’.

                                                                                (RMS is a socialist, let’s not kid ourselves)

                                                                                Grow up. RMS is not a socialist in any sense of the word.

                                                                                1. 4

                                                                                  “No, it isn’t. There’s nothing political about software freedom.”

                                                                                  Its existence and enforcement in a particular country is determined by its politics (i.e. laws) and those of countries it signs agreements with. Yes, there’s plenty political about software freedom. It’s even enforced with copyright law in most cases. If you think politics don’t matter, then you might think downloading software off the Net can never result in fines or prison time. Or software freedom can never be enforced at all since it would depend on laws created through politics.

                                                                                  1. 1

                                                                                    Its existence and enforcement in a particular country is determined by its politics (i.e. laws) and those of countries it signs agreements with.

                                                                                    Laws are not automatically political.

                                                                                    1. 1

                                                                                      If it’s a democracy, then the laws are introduced by people, debated by people, possibly passed by those people, and debated/interpreted by other people in courts later. I mean, government itself is a political process. So, I don’t even need to say that much to show laws are political. Especially copyright and patent law which has extra element of lobbying by big companies that shaped the laws to mainly benefit them.

                                                                                      1. 1

                                                                                        By that logic everything is political.

                                                                                  Software freedom is not a political issue, and RMS is not being political when he pushes for software freedom, and the FSF is not political, and copyleft is not 'more political' than permissive.

                                                                                        1. 2

                                                                                          You think RMS and FSF aren’t political in their goals or activities? I think I’ll end our conversation on saying you’re the first I’ve heard ever say that.

                                                                                          1. 1

                                                                                            Correct. I don’t think there’s anything remotely political about free software.

                                                                                  2. 3

                                                                                    Where are the free software policies of different political parties?

                                                                                    Anything involving copyright reform (the Pirate Party), Internet regulation (net neutrality), or the limits of proprietary EULAs and ToS documents (TIVOization) is important for the future of FOSS, to enable software development outside entrenched commercial entities.

                                                                                    Also, the ethical is political.

                                                                                    1. 0

                                                                                      Anything involving copyright reform (the Pirate Party),

                                                                                      Free software doesn’t require copyright reform, nor would it benefit from it. Free software licenses use copyright.

                                                                                      Internet regulation (net neutrality)

                                                                                      Nothing to do with free software.

                                                                                      or the limits of proprietary EULAs and ToS documents (TIVOization)

                                                                                      Nothing to do with free software.

                                                                                      Also, the ethical is political.

                                                                                      Nothing in that link backs up your claim that anything ethical is automatically political. Just because you live in a country where everything and anything is politicised doesn’t mean we all do.

                                                                                      1. 3

                                                                                        Free software doesn’t require copyright reform, nor would it benefit from it. Free software licenses use copyright.

                                                                                        Copyright enforcement technology (DRM) has practical effects on people’s ability to write and modify software that operates on copyrighted works. There’s a reason why the Free Software Foundation runs the “Defective by Design” campaign. The usefulness of free decoding software is hampered if all of the best videos that you would want to decode are encrypted and can only be used with pristine proprietary ones.

                                                                                        Also, because free software licenses do use copyright, copyright law does matter.

                                                                                        Internet regulation (net neutrality)

                                                                                        Nothing to do with free software.

                                                                                        Like in the case of video decoding software, the legal right to write your own network-facing software isn’t worth very much if only proprietary ones have the privilege of running on the internet.

                                                                                        or the limits of proprietary EULAs and ToS documents (TIVOization)

                                                                                        Nothing to do with free software.

                                                                                        GNU General Public License version 3 disagrees.

                                                                                        1. 1

                                                                                          Copyright enforcement technology (DRM) has practical effects on people’s ability to write and modify software that operates on copyrighted works.

                                                                                          Which has nothing to do with free software. They wouldn’t be legally allowed to modify that software anyway. Free software movement is not one that wants proprietary software to be illegal, it just advocates for people to make their software free.

                                                                                          The usefulness of free decoding software is hampered if all of the best videos that you would want to decode are encrypted and can only be used with pristine proprietary ones.

                                                                                          But they aren’t only able to be used with proprietary ones. Software freedom is not about some right to just do whatever you want with things other people have ownership of.

                                                                                          Like in the case of video decoding software, the legal right to write your own network-facing software isn’t worth very much if only proprietary ones have the privilege of running on the internet.

                                                                                          But they don’t. Net neutrality is overblown nonsense anyway. Most countries don’t have it, none have any problem not having it, the US didn’t have it until very recently then got rid of it again and nothing has changed.

                                                                                          GNU General Public License version 3 disagrees.

                                                                                          The GPLv3 isn’t a proprietary EULA or TOS agreement, and as such it has nothing to do with the limits of proprietary EULAs or TOS agreements. I don’t really see how you find this so difficult to understand. The GPLv3 stops you from producing TIVOised derivative products. That doesn’t mean the FSF wants TIVOisation to be illegal.

                                                                              1. 21

                                                                                So I think I’m a bit late for the big go and rust and garbage collection and borrow checker discussion, but it took me a while to digest, and came up with the following (personal) summary.

                                                                                Determining when I’m done with a block of memory seems like something a computer could be good at. It’s fairly tedious and error prone to do by hand, but computers are good at monotonous stuff like that. Hence, garbage collection.

                                                                                Or there’s the rust approach, where I write a little proof that I’m done with the memory, and then the computer verifies my proof, or rejects my program. Proof verification is also something computers are good at. Nice.

                                                                                But writing the proof is still kind of a pain in the ass, no? Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done. Go figure out the refs and borrows to make it work, kthxbye.

                                                                                1. 18

                                                                                  But writing the proof is still kind of a pain in the ass, no? Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done. Go figure out the refs and borrows to make it work, kthxbye

                                                                                  I’m in the middle of editing an essay on this! Long story short, proving an arbitrary code property is undecidable, and almost all the decidable cases are in EXPTIME or worse.

                                                                                  1. 10

                                                                                    I’m kinda familiar with undecidable problems, though with fading rigor these days, but the thing is, undecidable problems are undecidable for humans too. The impossible task becomes no less impossible by making me do it!

                                                                                    I realize it’s a pretty big ask, but the current state of the art seems to be redefine the problem, rewrite the program, find a way to make it “easy”. It feels like asking a lot from me.

                                                                                    1. 10

                                                                                      The problem is undecidable (or very expensive to decide) in the most general case; what Rust does is solve it in a more limited case. You just have to prove that your usage fits into this more limited case, hence the pain in the ass. Humans can solve more general cases of the problem than Rust can, because they have more information about the problem. Things like “I only ever call function B with inputs produced from function A, function A can only produce valid inputs, so function B doesn’t have to do any input validation”. Making these proofs without computer assistance is no less of a pain in the ass. (Good languages make it easy to enforce these proofs automatically at compile or run time, good optimizers remove redundant runtime checks.)
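
                                                                                       As a rough sketch of encoding that kind of human knowledge so the compiler checks it (ValidInput, parse, and process are made-up names): only parse can construct a ValidInput, so process never needs to re-validate its input.

                                                                                       struct ValidInput(String);
                                                                                       
                                                                                       fn parse(raw: &str) -> Option<ValidInput> {
                                                                                         // the only way to obtain a ValidInput is through this check
                                                                                         if raw.is_empty() { None } else { Some(ValidInput(raw.to_owned())) }
                                                                                       }
                                                                                       
                                                                                       fn process(input: &ValidInput) -> usize {
                                                                                         // no re-validation needed: the type says the check already happened
                                                                                         input.0.len()
                                                                                       }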

                                                                                      Even garbage collectors do this; their safety guarantees are a subset of what a perfect solution would provide.

                                                                                      1. 3

                                                                                        “Humans have more information about the problem”

                                                                                         And this is why a conservative borrow checker is ultimately the best. It can be super optimal, and not step on your toes. It's up to the human to adjust the lifetime of memory because only the human knows what it wants.

                                                                                        I AM NOT A ROBOT BEEP BOOP

                                                                                      2. 3

                                                                                        Humans have a huge advantage over the compiler here though. If they can’t figure out whether a program works or not, they can change it (with the understanding gained by thinking about it) until they are sure it does. The compiler can’t (or shouldn’t) go making large architectural changes to your code. If the compiler tried its hardest to be as smart as possible about memory, the result would be that when it says “I give up, the code needs to change”, the human who can change the code is going to have a very hard time understanding why and what they need to change (since they haven’t been thinking about the problem).

                                                                                        Instead, what Rust does is apply as intelligent a set of rules as they could that still produce consistent, understandable results for the human. So the compiler can say “I give up, here’s why”. And the human can say “I know how the compiler will work, it will accept this, this time” instead of flailing about trying to convince the compiler it works.
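
                                                                                        As a tiny illustration of that kind of local, explainable rejection, here’s a deliberately wrong snippet (it will not compile, which is the point):

                                                                                        fn main() {
                                                                                            let mut s = String::from("hi");
                                                                                            let r = &s; // shared borrow of s starts here...
                                                                                            s.push('!'); // error[E0502]: cannot borrow `s` as mutable because it is also borrowed as immutable
                                                                                            println!("{}", r); // ...and the borrow is still in use here
                                                                                        }

                                                                                        The rules are predictable enough that the fix (do the push after the last use of r, or clone the string) is obvious once you’ve internalized them.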

                                                                                        1. 1

                                                                                          I realize it’s a pretty big ask

                                                                                          I’ve been hearing this phrase lately “big ask” from business people generally, seems very odd to me. Is it new or have I just missed it up to now?

                                                                                          1. 2

                                                                                            I’ve been hearing it from “business people” for a couple years at least, I assume it’s just diffusing out slowly to the rest of society.

                                                                                            The new one I’m hearing along these lines is “learnings”. I think people just think it makes them sound smart if they use different words.

                                                                                            1. 1

                                                                                              A “learning”, as a noun, is attested at least as far back as the early 1900s, FYI.

                                                                                              1. 0

                                                                                                This sort of comment annoys me greatly. Someone used a word incorrectly 100 years ago. That doesn’t mean it’s ‘been a word for 100 years’ or whatever you’re implying. ‘Learning’ is not a noun. You can argue about the merits of prescriptivism all you like, you can have whatever philosophical discussion you like as to whether it’s valid to say that something is ‘incorrect English’, but ‘someone used it in that way X hundred years ago’ does not justify anything.

                                                                                                1. 2

                                                                                                  This sort of comment annoys me greatly. Someone used a word incorrectly 100 years ago. That doesn’t mean it’s ‘been a word for 100 years’ or whatever you’re implying. ‘Learning’ is not a noun.

                                                                                                  It wasn’t “one person using it incorrectly”; that’s not even remotely how attestation works in linguistics. And of course, of course it is very much a noun. What precisely, man, do you think a gerund is? We have learning curves, learning processes, learning centres. We quote Pope to one another when we say that “a little learning is a dangerous thing”.

                                                                                                  To take the position that gerunds aren’t nouns and cannot be pluralized requires objecting to such fluent Englishisms as “the paintings on the wall”, “partings are such sweet sorrow”, and “I’ve had three helpings of soup”.

                                                                                                  1. 0

                                                                                                    ‘Painting’ is the process of painting. You can’t pluralise it. It’s also a (true) noun, the product of doing some painting. There it obviously can be pluralised. But ‘the paintings we did of the house kept improving the sheen of the walls’ is not valid English. They’re different words.

                                                                                                    1. 2

                                                                                                      LMAO man, how do you think Painting became a “true” noun? It’s just a gerund being used as a noun that you’re accustomed to. One painted portraits, landscapes, still lifes, studies, etc. To group all these things together as “paintings” was an instance of the exact same linguistic phenomenon that gives us the idea that one learns learnings.

                                                                                                      You’re arguing against literally the entire field of linguistics here on the basis of gut feelings and ad hoc nonsense explanations.

                                                                                                      1. 0

                                                                                                        You’re arguing against literally the entire field of linguistics here on the basis of gut feelings and ad hoc nonsense explanations.

                                                                                                        No, I’m not. This has literally nothing to do with linguistics. That linguistics is a descriptivist scientific field has nothing to do with whether ‘learnings’ is a real English word. And it isn’t. For the same reason that ‘should of’ is wrong: people don’t recognise it as a real word. Words are what we say words are. People using language wrong are using it wrong in the eyes of others, which makes it wrong.

                                                                                                        1. 1

                                                                                                          That linguistics is a descriptivist scientific field has nothing to do with whether ‘learnings’ is a real English word. And it isn’t. For the same reason that ‘should of’ is wrong: people don’t recognise it as a real word. Words are what we say words are.

                                                                                                          Well, I hate to break it to you, but plenty of people say learnings is a word, like all of the people you were complaining about who use it as a word.

                                                                                                          1. 0

                                                                                                            There are lots of people that write ‘should of’ when they mean ‘should’ve’. That doesn’t make them right.

                                                                                                            1. 1

                                                                                                              Yes and OK is an acronym for Oll Korrect, anyone using it as a phrase is not OK.

                                                                                                              1. 0

                                                                                                                OK has unknown etymology. And acronyms are in no way comparable to simply incorrect grammar.

                                                                                                                1. 1

                                                                                                                  Actually it is known. Most etymologists agree that it came from Boston in 1839, originating in a satirical piece on grammar. This was a response to people who insisted that English must follow some strict, unwavering set of laws as though it were a kind of formal language. OK is an acronym, and it stands for Oll Korrect, and it was literally invented to make pedants upset. Certain people were debating the use of acronyms in common speech, and to lay it on extra thick the author purposefully misspelled All Correct. The word was quickly adopted because pedantry is pretty unpopular.

                                                                                                                  1. 1

                                                                                                                    What I said is that there is what is accepted as valid and what is not. Nobody educated thinks that ‘should of’ is valid. It’s a misspelling of ‘should’ve’. Nobody thinks ‘shuold’ is a valid spelling of ‘should’ either. Is this really a debate you want to have?

                                                                                                                    1. 1

                                                                                                                      I was (mostly) trying to be playful while also trying to encourage you to be a little less litigious about how people shuold and shuold not use words.

                                                                                                                      Genuinely sorry for making you actually upset though, I was just trying to poke fun a little for getting a bit too serious at someone over smol beans, and I was not trying to make you viscerally angry.

                                                                                                                      I also resent the attitude that someone’s grammatical or vocabulary knowledge of English represents an “education”.

                                                                                            2. 1

                                                                                              It seems like in the last 3 years all the execs at my company started phrasing everything as “The ask is…” I think they are trying to highlight that you have input (you can answer an ask with no) vs an order.

                                                                                              In practice, of course, many “asks” are orders.

                                                                                              1. 4

                                                                                                Sure, but we already have a word for that, it’s “request”.

                                                                                                1. 4

                                                                                                  Sure, but the Great Nouning of Verbs in English has been an ongoing process for ages and continues apace. “An ask” is just a more recent product of the process that’s given us a poker player’s “tells”, a corporation’s “yearly spend”, and the “disconnect” between two parties’ understandings.

                                                                                                All of those nouned verbs have, or had at one point or another in history, perfectly good nouns that weren’t nominalized verbs.

                                                                                                  1. 1

                                                                                                    One that really upsets a friend of mine is using ‘invite’ as a noun.

                                                                                              2. 1

                                                                                              Newly popular? MW quotes this usage and calls it a Britishism.

                                                                                                https://www.merriam-webster.com/dictionary/ask

                                                                                                They don’t date the sample, but I found it’s from a 2008 movie review.

                                                                                                https://www.spectator.co.uk/2008/10/cold-comfort/

                                                                                                So at least that old.

                                                                                            3. 3

                                                                                              You no doubt know this, but the undecidable stuff mostly becomes decidable if you’re willing to accept a finite limit on addressable memory, which anyone compiling for, say, x86 or x86_64 is already willing to do. So imo it’s the intractability rather than undecidability that’s the real problem.

                                                                                              1. 1

                                                                                                It becomes decidable by giving us an upper bound on the number of steps the program can take, so should require us to calculate the LBA equivalent of a very large BB. I’d call that “effectively” undecidable, which seems like it would be “worse” than intractable.

                                                                                                1. 2

                                                                                                  I agree it’s, let’s say, “very” intractable to make the most general use of a memory bound to verify program properties. But the reason it doesn’t seem like a purely pedantic distinction to me is that once you make a restriction like “64-bit pointers”, you do open up a bunch of techniques for finite solving, some of which are actually usable in practice to prove properties that would be undecidable without the finite-pointer restriction. If you just applied Rice’s theorem and called verifying those properties undecidable, it would skip over the whole class of things that can be decided by a modern SMT solver in the 32-bit/64-bit case. Granted, most still can’t be, but that’s why the boundary that interests me more nowadays is the “SMT can solve this” vs. “SMT can’t solve this” one rather than the CS-theory sense of decidable/undecidable.

                                                                                            4. 6

                                                                                              Why can’t I have computer generated proofs? I have some memory, I send it there, then there, then I’m done.

                                                                                              It’s really hard. The main tool for that is separation logic. Manually doing it is harder than borrow-checking stuff. There are people developing solvers to automate such analyses. Example. It’s possible what you want will come out of that. I think there will still be restrictions on coding style to ease analyses.

                                                                                              1. 3

                                                                                                In my experience, automated proof generators are very leaky abstractions. You have to know their search methods in detail, and present your hypotheses in a favorable way for those methods. It can look very clean, but it can mean that seemingly easy changes turn out to be frustrated by the methods’ limitations.

                                                                                                1. 4

                                                                                                I’m totally with you on this. Rust very much feels like an intermediate step and I don’t know why they didn’t take it to its not-necessarily-obvious conclusion.

                                                                                                  1. 5

                                                                                                  In my personal opinion, it might be just that we’re happy that we can actually get to this intermediate point (of Rust) reliably enough, but have no idea yet how to get to the further point (conclusion). So they took it where they could, and left the subsequent part as an exercise for the reader… I mean, to be explored by future generations of programmers, hopefully.

                                                                                                    1. 4

                                                                                                      We have the technology, sort of. Total program analysis is really expensive though, and the workflow is still “edit some code” -> “compile on a laptop” -> repeat. Maybe if we built a gc’ed language that had a mode where you push your program to a long running job on a compute cluster to figure out all the memory proofs.

                                                                                                      This would be especially cool if incrementals could be cached.

                                                                                                      1. 4

                                                                                                      I’ve recommended that before. There’s millions being invested into SMT/SAT solvers for common bugs that might make that happen, too. Gotta wait for the tooling to catch up. My interim recommendation was a low-false-positive static-analysis tool like RV-Match to be used on everything in the fast path. Anything that passes is done, no GC. Anything that hangs or fails is GC’d. Same with automated proofs to eliminate safety checks: if a proof passes, remove the check it lets you remove; if it fails, maybe the code is safe or maybe the tool is too dumb, so keep the check. Might not even need a cluster, given the number of cores in workstations/servers and efficiency improvements in the tools.

                                                                                                      2. 4

                                                                                                        I think it’s because there’s essentially no chance that a random piece of code will be provable in such a way. Rust encourages, actually to the point of forcing, the programmer to reason about lifetimes and ownership along with other aspects of the type as they’re constructing the program.

                                                                                                      I think there may be a long-term evolution as tools get better: the language checks the proofs (which, in my dream, can be both types and more advanced proofs, say that unsafe blocks actually respect safety), and IDEs provide lots of help in producing them.
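
                                                                                                      For what it’s worth, today the “proof” that an unsafe block respects safety is usually just a comment that a human writes and other humans review; here’s a minimal sketch of that convention (the checker for it is still the dream part):

                                                                                                      fn first_byte(v: &[u8]) -> Option<u8> {
                                                                                                          if v.is_empty() {
                                                                                                              None
                                                                                                          } else {
                                                                                                              // SAFETY: we just checked that v is non-empty, so index 0 is in bounds.
                                                                                                              Some(unsafe { *v.get_unchecked(0) })
                                                                                                          }
                                                                                                      }
                                                                                                      fn main() {
                                                                                                          println!("{:?}", first_byte(&[10, 20, 30])); // Some(10)
                                                                                                      }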

                                                                                                        1. 2

                                                                                                          there’s essentially no chance that a random piece of code will be provable in such a way

                                                                                                          There must be some chance; Rust is already proving memory safety.

                                                                                                          Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem to be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used, why can’t it determine for us how to borrow it and who owns it?

                                                                                                          1. 17

                                                                                                            Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem to be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used, why can’t it determine for us how to borrow it and who owns it?

                                                                                                            This is a misconception. The Rust compiler does not see anything beyond the function boundary. That makes lifetime checking efficient. Basically, when compiling a function, the compiler makes a reasonable assumption about how input and output references are connected (the assumption is “they are connected”, also known as “lifetime elision”). This is an assumption communicated to the outside world. If this assumption is wrong, you need to annotate lifetimes.

                                                                                                            When compiling, the compiler will check if the assumption holds for the function body. So, for every function call, it will check if the signature holds (lifetimes are part of the function signature).

                                                                                                            Note that functions with different lifetime annotations taking the same data might differ in their behaviour. It also isn’t always obvious to the compiler whether you want references to be bound together or not, and that situation might be ambiguous.

                                                                                                            The benefit of this model is that functions only need to be rechecked/compiled when they actually change, not some other code somewhere else in the program. It’s very predictable and errors are local to the function.
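
                                                                                                            A minimal sketch of how that plays out (the function names are just for illustration):

                                                                                                            fn first_word(s: &str) -> &str {
                                                                                                                // One input reference, so elision applies: the compiler reads this as
                                                                                                                // fn first_word<'a>(s: &'a str) -> &'a str, i.e. the output borrows from the input.
                                                                                                                s.split_whitespace().next().unwrap_or("")
                                                                                                            }
                                                                                                            fn longer<'a>(a: &'a str, b: &'a str) -> &'a str {
                                                                                                                // Two input references and no &self: the "they are connected" assumption is
                                                                                                                // ambiguous, so the signature has to say explicitly what the output borrows from.
                                                                                                                if a.len() >= b.len() { a } else { b }
                                                                                                            }

                                                                                                            Both bodies are checked against their signatures alone, which is why callers only need rechecking when a signature actually changes.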

                                                                                                            1. 2

                                                                                                              I’ve been waiting for you @skade.

                                                                                                              1. 2

                                                                                                                Note that functions with different lifetime annotations taking the same data might differ in their behaviour.

                                                                                                                I wrote this late at night and have some errata here: they might differ in their behaviour wrt. lifetime checking. Lifetimes have no impact on the runtime, an annotation might only prove something safe that the compiler previously didn’t see as safe.

                                                                                                              2. 4

                                                                                                                Maybe I’m misunderstanding. I’m interpreting “take it to its conclusion” as accepting programs that are not annotated with explicit lifetime information but for which such an annotation can be added. (In the context of Rust, I would consider “annotation” to include choosing between &, &mut, and by-move, as well as adding .clone() when needed, especially for refcount types, and of course adding explicit lifetimes in cases that go beyond the present lifetime elision rules, which are actually pretty good). My point is that such a “smarter compiler” would fail a lot of the time, and that failures would be mysterious. There’s a lot of experience around this for analyses where the consequence of failure is performance loss due to not being able to do an optimization, or false positives in static analysis tools.

                                                                                                                The main point I’m making here is that, by requiring the programmer to actually provide the types, there’s more work, but the failures are a lot less mysterious. Overall I think that’s a good tradeoff, especially with the present state of analysis tools.

                                                                                                                1. 1

                                                                                                                  I’m interpreting “take it to its conclusion” as accepting programs that are not annotated with explicit lifetime information but for which such an annotation can be added.

                                                                                                                  I’ll agree with that definition

                                                                                                                  My point is that such a “smarter compiler” would fail a lot of the time, and that failures would be mysterious.

                                                                                                                  This is where I feel we disagree. I feel like you’re assuming that if we make lifetimes optional that we would for some reason also lose the type system. That was not my assumption at all. I assumed the programmer would still pick their own types. With that in mind, if this theoretical compiler could prove memory safety using the developer-provided types and the inferred ownership, why would it still fail a lot?

                                                                                                                  where the consequence of failure is performance loss due to not being able to do an optimization

                                                                                                                  That’s totally understandable. I assume that, like any compiler, it would eventually get better at this. I also assume lifetimes would become an optional piece of the program as well. Assuming this compiler existed, it seems reasonable to me that it could accept and prove lifetimes provided by the developer along with inferring and proving them on its own.

                                                                                                                  1. 3

                                                                                                                    Assuming this compiler existed it seems reasonable to me that it could accept and prove lifetimes provided by the developer along with inferring and proving on it own.

                                                                                                                    That’s what Rust does. And many improvements to Rust focus on increasing the number of lifetime patterns the compiler can recognize and handle automatically.

                                                                                                                    You don’t have to annotate everything for the compiler. You write code in patterns the compiler understands, and annotate things it doesn’t. So Rust has gotten easier and easier to write as the compiler gets smarter and smarter. It requires fewer and fewer annotations / unsafe blocks / etc as the compiler authors discover how to prove and compile more things safely.
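
                                                                                                                      Non-lexical lifetimes are a good example of the compiler getting smarter over time; as far as I know, something like this used to be rejected and now just works:

                                                                                                                      fn main() {
                                                                                                                          let mut v = vec![1, 2, 3];
                                                                                                                          let first = &v[0]; // shared borrow of v
                                                                                                                          println!("{}", first); // last use of the borrow
                                                                                                                          v.push(4); // accepted today: the borrow is already over, older borrow checkers rejected this
                                                                                                                      }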

                                                                                                                2. 4

                                                                                                                    Rust forces us to think about lifetimes and ownership, but to @tedu’s point, there doesn’t seem to be much stopping it from inferring those lifetimes & ownership based upon usage. The compiler knows everywhere a variable is used, why can’t it determine for us how to borrow it and who owns it?

                                                                                                                  I wondered this at first, but inferring the lifetimes (among other issues) has some funky consequences w.r.t. encapsulation. Typically we expect a call to a function to continue to compile as long as the function signature remains unchanged, but if we infer the lifetimes instead of making them an explicit part of the signature, subtle changes to a function’s implementation can lead to new lifetime restrictions being inferred, which will compile fine for you but invisibly break all of your downstream callers.

                                                                                                                  When the lifetimes are an explicit part of the function signature, the compiler stops you from compiling until you either fix your implementation to conform to your public lifetime contract, or change your declared lifetimes (and, presumably, since you’ve been made conscious of the breakage in this scenario, notify your downstream and bump your semver).

                                                                                                                    It’s basically the same reason that you don’t want to infer the types of function arguments from how they’re used inside a function – making it easy for you to invisibly break your contract with the outside world is bad.
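
                                                                                                                    A hedged sketch of that point (pick is a made-up function): the declared lifetimes promise the result borrows only from the first argument, and callers get to rely on that.

                                                                                                                    fn pick<'a>(x: &'a str, _y: &str) -> &'a str {
                                                                                                                        x
                                                                                                                    }
                                                                                                                    fn main() {
                                                                                                                        let x = String::from("long-lived");
                                                                                                                        let result;
                                                                                                                        {
                                                                                                                            let y = String::from("short-lived");
                                                                                                                            result = pick(&x, &y); // fine: result is tied to x, not y
                                                                                                                        }
                                                                                                                        println!("{}", result); // still valid after y is gone
                                                                                                                    }

                                                                                                                    If the lifetimes were inferred from pick’s body instead, changing the body to sometimes return the second argument would silently tie result to both inputs, and main would stop compiling even though pick’s visible signature never changed.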

                                                                                                                  1. 3

                                                                                                                    I think this is the most important point here. Types are contracts, and contracts can specify far more than just int vs string. Complexity, linearity, parametricity, side-effects, etc. are all a part of the contract and the more of it we can get the compiler to enforce the better.

                                                                                                            2. 1

                                                                                                              Which is fine, until you have time or memory constraints that are not easily met by the tracing GC, which is the case for all software of sufficient scale or complexity. At that point, you end up with half-assed and painful-to-debug/optimize manual memory management in the form of pools, etc.

                                                                                                              1. 1

                                                                                                                Or there’s the rust approach, where I write a little proof that I’m done with the memory, and then the computer verifies my proof, or rejects my program. Proof verification is also something computers are good at. Nice.

                                                                                                                Oh I wish that were how Rust worked. But it isn’t. A variant of Rust where you could actually prove things about your programme would be wonderful. Unfortunately, in Rust, you instead just have ‘unsafe’, which means ‘trust me’.

                                                                                                              1. 3

                                                                                                                The year of the Linux desktop is nigh!

                                                                                                                Or not. It’s crap like this that makes Linux a non-starter for most people.

                                                                                                                1. 10

                                                                                                                  What ‘crap’ is it exactly that makes Linux a non-starter for ‘most people’?

                                                                                                                  Absolutely terrible driver support for hardware because Nvidia are a shitty company? Intel chips have free software drivers integrated into the kernel before the systems are even released. Nvidia still doesn’t help nouveau at all essentially. That Nvidia are allowed to distribute their clearly-GPL-violating proprietary kernel modules is baffling to me.

                                                                                                                  Also the author would have probably been completely fine if he just bought the previous model of graphics card, for which nouveau works completely fine as far as I am aware. It’s hardly fair to compare an operating system where you have zero control over hardware whatsoever like Mac to Linux. Linux is expected to work perfectly with all new hardware that comes out, even with zero cooperation from the hardware vendors and all development basically being done by volunteers. Nobody complains that Mac doesn’t work on their random laptop they tried installing it on, or on some hardware it has never been tested with or developed for.

                                                                                                                  You know ‘most people’ don’t have a graphics card, right? That most people don’t need 60Hz 4k displays, and certainly not multiple of them. That most people just use a web browser anyway, and so don’t actually care if the GIMP does DPI scaling properly.

                                                                                                                  This post can basically be summed up as ‘Nvidia drivers are shit’. That’s an issue, but… that’s Nvidia for you.

                                                                                                                  1. 6

                                                                                                                    What ‘crap’ is it exactly that makes Linux a non-starter for ‘most people’?

                                                                                                                    The fact that whether or not any given application will do anything usable or sane with a) text, b) the rest of the interface, or c) both on a monitor resolution that’s been common for years is a complete crapshoot based on which hodgepodge of squirrelly UI frameworks its author happened to personally prefer, consistency be damned?

                                                                                                                    The fact that there are even separate answers for text and everything else is a usability disaster, let alone the whole matrix of dependencies a user needs to dig into to discover why their music player does one thing, their browser another, and their text editor yet a third thing.

                                                                                                                    It’s precisely crap like this that keeps me on OS X, which is by no means perfect, but at least applications look and behave a couple of orders of magnitude more consistently. Life’s too short to dig through this dogshit so I don’t have to squint at a screen. This stuff is a solved problem everywhere else.

                                                                                                                    1. -2

                                                                                                                      The fact that whether or not any given application will do anything usable or sane with a) text, b) the rest of the interface, or c) both on a monitor resolution that’s been common for years is a complete crapshoot based on which hodgepodge of squirrelly UI frameworks its author happened to personally prefer, consistency be damned?

                                                                                                                      It’s not the resolution that’s the issue. It’s that people want pixels on their monitor to not correspond to actual pixels. I have no idea why, but they do. I think it’s mostly a marketing gimmick.

                                                                                                                      It’s precisely crap like this that keeps me on OS X, which is by no means perfect, but at least applications look and behave a couple of orders of magnitude more consistently. Life’s too short to dig through this dogshit so I don’t have to squint at a screen. This stuff is a solved problem everywhere else.

                                                                                                                      If you don’t want to squint, don’t buy a monitor with tiny pixels you have to squint at.

                                                                                                                      1. 7

                                                                                                                        It would be nice if you could avoid condescending comments like this. Consider that your opinion is just an opinion, so it’s not necessarily correct, or at the very least not the best option for every single person out there, especially when literally millions of people clearly consider HiDPI screens useful.

                                                                                                                        Also, if you’re hoping to convince anyone of the merits of Linux, this is emphatically not the way to do it.

                                                                                                                        (FWIW, the comment you originally replied to wasn’t constructive either.)

                                                                                                                        1. 4

                                                                                                                          It’s not the resolution that’s the issue. It’s that people want pixels on their monitor to not correspond to actual pixels. I have no idea why, but they do. I think it’s mostly a marketing gimmick.

                                                                                                                          “72 DPI ought to be enough for anybody”

                                                                                                                          1. 2

                                                                                                                            It’s not the resolution that’s the issue. It’s that people want pixels on their monitor to not correspond to actual pixels. I have no idea why, but they do.

                                                                                                                            Because reconfiguring/recompiling literally everything to use larger fonts and UI component measurements is not feasible.

                                                                                                                        2. 2

                                                                                                                          I don’t disagree with you. The previous generation of cards doesn’t feature dual HDMI 2.0/DP 1.2 connectors on the low end (notably the passive ones). If I had known about the problem with the drivers, I would have bought a Radeon card, even with the additional fan.

                                                                                                                          1. 2

                                                                                                                            To be fair, the article reads like he made every bad decision possible.

                                                                                                                            • Accepting a faulty product? Check!
                                                                                                                            • Buying Nvidia? Check!

                                                                                                                            This sounds a lot like “I want to learn sailing, so I bought this bike, and now I realized it doesn’t even float!”.

                                                                                                                          2. 3

                                                                                                                            Not just newcomers. Crap like this made me move to Mac after >20 years of using mainly Linux.

                                                                                                                            My gaming box is happily running Arch Linux, though. Steam is very good, and Proton is slowly widening the compatibility to a point where I’m missing nothing.

                                                                                                                            1. 5

                                                                                                                              Crap like this made me move to Mac after >20 years of using mainly Linux.

                                                                                                                              What crap specifically is it that made you move to Mac? Because I can name a lot of crap that made me move away from Mac back to Linux after experimenting with it by buying a Macbook Pro a few years ago, like being tied down to a terrible proprietary operating system missing all the useful features I want.

                                                                                                                              1. 3

                                                                                                                                I personally don’t like Mac either but it is not a terrible OS, it just isn’t made for people like you and me. (For me the worst parts are the cmd-tab, the keyboard layout and the global menu. Typical Mac users love their cmd-tab and global menu and didn’t complain about the keyboard until recently.)

                                                                                                                              2. 1

                                                                                                                                Arch Linux

                                                                                                                                That might be your problem. If you like to have a working computer and want the latest software, try Fedora. If you like to have a working computer and don’t care about the latest software, use Ubuntu. Very few other distros care about ease of use, and if that’s why you left Linux it’s likely because you made it hard on yourself.

                                                                                                                                I spent hours configuring Arch and i3 to be just how I needed it, and it still wouldn’t work well. I installed Fedora and everything (even this one ethernet over type-c dongle someone told me simply could not work on Linux) just worked. It took about 5 minutes to setup my keybinds in KDE again.

                                                                                                                                (Also interesting that I moved the other way as you, Mac -> Linux.)

                                                                                                                                1. 4

                                                                                                                              I had typed a longer reply here, but I’ll just say instead that I have a hard time believing some hardware worked on Fedora but not on Arch.

                                                                                                                                  1. 1

                                                                                                                                    Why?

                                                                                                                                    1. 2

                                                                                                                                      Hardware compatibility comes primarily from the kernel, and Arch’s kernel follows vanilla in a quite speedy fashion.

                                                                                                                                      1. 2

                                                                                                                                        Ah, the problem likely wasn’t support, but that I needed some config option somewhere to enable it, and I had no idea what package/service would even handle something like that. A ready to use distro already had that configured.

                                                                                                                                        A person who gladly moved to MacOS would probably appreciate something like that not involving configuration.

                                                                                                                                  2. 3

                                                                                                                                    Nah, I like Arch Linux. Its simplicity makes many of the deficiencies of desktop PCs worth it. Almost.

                                                                                                                                    I think it’s mostly that I felt that Wayland promised to remove most of these obstacles, but we’re still waiting. Then I happened to have a Mac forced on me by work for a longer time, so I was sort-of forced to experience it. It took some time, but I warmed up to it enough that now I find myself liking how things just work.

                                                                                                                                Fedora and Ubuntu and Windows are not there yet, and I suspect they never will be. The problem is the amount of hardware and software combinations they have to support.

                                                                                                                                2. 1

                                                                                                                                  I agree that Linux can be better, but…

                                                                                                                                  Scaling for the types of apps he mentioned (Java, and other weird ones) won’t work on Windows either. Only MacOS got this right.

                                                                                                                                1. 6

                                                                                                                                  Wow if I bought a monitor with a dead pixel and weird lines on the screen it’d be back in the shop before you could say ‘Consumer Guarantees Act 1993’. Especially on such expensive high end hardware. I was upset enough that my monitors’ colour balance isn’t quite the same.

                                                                                                                                  EDIT: I also find it absolutely hilarious that DPI scaling works fine in Qt 5, and works fine in actual web browsers, but doesn’t work in Electron, the supposedly ‘modern’ UI framework.

                                                                                                                                  1. 4

                                                                                                                                    He didn’t even align the displays with each other … AAAAAAARGHRGHGHRGH.

                                                                                                                                    1. 3

                                                                                                                                      DPI scaling works fine for Electron apps based on a Chromium version that supports DPI scaling. This has been the case for quite some time now, and Chromium’s move to GTK3 has improved support even further. I’m not sure what Electron apps the author was using that didn’t support DPI scaling, however I’ve yet to come across one that doesn’t scale on my 4K laptop screen. Both VS Code and Slack work flawlessly for me.

                                                                                                                                      I got my XPS 9560 in early 2017 with a 4K screen so I was initially quite worried about scaling issues, however the only apps I ever have issues with are older GTK2 apps (Gimp, and pgAdmin are the only two that I use).

                                                                                                                                      1. 2

                                                                                                                                        DPI scaling works in Electron apps, but I often have to specify it per app (often by using Ctrl +/- for the browser zoom). … It is kinda a step backwards when you think about it.

                                                                                                                                        1. 1

                                                                                                                                          I am using Spotify. I have just checked and it’s still not scaling correctly without the appropriate command-line option. I’ll add a note this may depend on the Electron app.

                                                                                                                                          EDIT: maybe Spotify is not an Electron app, but a CEF app. Is there still a difference?

                                                                                                                                          1. 1

                                                                                                                                            The version of Chromium CEF/Spotify uses seems to lag pretty far behind contemporary Electron builds, just based on https://www.spotify.com/ro/opensource/

                                                                                                                                            1. 1

                                                                                                                                              Chromium 65 is recent enough to have the appropriate code. But maybe CEF doesn’t make use of it. I’ll update the table to mention Electron apps works fine.

                                                                                                                                              1. 1

                                                                                                                                                Spotify for Linux has been around since before Electron existed, so Spotify not using it isn’t much of a surprise.

                                                                                                                                                According to this page, Electron doesn’t make use of CEF, and instead calls Chromium’s APIs directly, which is probably why Electron apps are able to scale correctly while Spotify doesn’t.

                                                                                                                                            2. 1

                                                                                                                                              I use Spotify every day in a HiDPI environment. Never had an issue. The one thing you might want to do if the first time you load it the text looks too small is use the built in zoom feature (Ctrl+/Ctrl-) to bring the font to a readable size, it’ll be saved and you won’t have to worry about it anymore.

                                                                                                                                          2. 1

                                                                                                                                            Wow if I bought a monitor with a dead pixel and weird lines on the screen it’d be back in the shop

                                                                                                                                    The policy allowing some handful of dead/stuck pixels has been written into the warranties of most monitors literally since LCD computer monitors have been around. Because most people use their monitors for web browsing, email, document editing, etc., where a couple of extremely tiny black specks are truly insignificant and will literally never be noticed among all of the dust that accumulates on every screen.

                                                                                                                                            If you want a monitor that comes with zero dead pixels guarantee, they certainly sell those, but they cost more as well since there’s more QA involved.

                                                                                                                                            1. 1

                                                                                                                                              The policy allowing some handful of dead/stuck pixels has been written into the warranties of most monitors literally since LCD computer monitors have been around.

                                                                                                                                              They can write whatever they like in the agreement that I never signed or agreed to when I bought a monitor from a shop. It’s completely irrelevant. I’m not talking about returning it to the manufacturer under their warranty, I’m talking about returning it to the shop I bought it from under consumer protection law.

                                                                                                                                              Because most people use their monitors for web browsing, email, document editing, etc, where a couple of extremely tiny black specs are truly insignificant and will literally never be noticed among all of the dust such that accumulates on every screen.

                                                                                                                                              My monitor has no dead pixels. If it got a dead pixel, I would notice immediately. They’re incredibly obvious to anyone that isn’t blind.

                                                                                                                                              If you want a monitor that comes with zero dead pixels guarantee, they certainly sell those, but they cost more as well since there’s more QA involved.

                                                                                                                                              No, monitors that come with a ‘zero dead pixels’ guarantee are all monitors.

                                                                                                                                              1. 1

                                                                                                                                                They can write whatever they like in the agreement that I never signed or agreed to when I bought a monitor from a shop.

                                                                                                                                                Nobody mentioned an agreement. A warranty is not the same as an agreement or contract.

                                                                                                                                                I’m not talking about returning it to the manufacturer under their warranty, I’m talking about returning it to the shop I bought it from under consumer protection law.

                                                                                                                                                It would have been useful to mention that you’re apparently in New Zealand. If I understand it, the law you’re talking about requires every retailer to accept returns of purchased merchandise. Not all countries have such a law. In the U.S., for instance, almost every store accepts returns whether or not the merchandise is defective. But this is simply good customer service; it’s not a legal requirement.

                                                                                                                                                So now the argument hinges on what is considered defective and who gets to decide that. Is it up to the manufacturer? The retailer? The end user? In your country, I honestly don’t know and don’t care enough to research it right now.

                                                                                                                                                They’re incredibly obvious to anyone that isn’t blind.

                                                                                                                                                No, not really. Dead pixels are only obvious when the entire area around the dead pixel is one solid bright color, and even then, are generally indistinguishable from dust. Most people will never notice a dead pixel in everyday use, especially as the pixels in monitors get smaller and smaller. I have a huge monitor with a ridiculous resolution at home. It has a couple of dead pixels, it’s been months since I last noticed them. But by god it was like $200 on Amazon. I’ll happily save a few hundred dollars to deal with a couple of dead pixels I very rarely notice.

                                                                                                                                                No, monitors that come with a ‘zero dead pixels’ guarantee are all monitors.

                                                                                                                                                In New Zealand, maybe, but that’s not at all a universal statement. Nor should it be.

                                                                                                                                                The realities of the LCD manufacturing process are such that if every LCD panel manufacturer threw out all of their panels with one or more dead pixels, every monitor produced would cost the end user a lot more. Because not only do you need better QA, you’re throwing into the trash a significant percentage of your yield. Which has a dual negative impact: Not only did you waste precious factory time and expensive resources on the panel, now it has to get thrown away into a landfill or processed for recycling if that’s even possible.

                                                                                                                                                It’s far more efficient from a manufacturing, environmental, and market standpoint to just sell the slightly imperfect panels at a discount and sell the perfect panels for whatever the market will bear for zero dead pixels. Which is exactly what most manufacturers do. You want zero dead pixels, buy the one with the zero dead pixels policy. Here is Dell’s version of that: https://www.dell.com/support/article/nz/en/nzbsd1/sln130145/dell-lcd-display-pixel-guidelines?lang=en

                                                                                                                                                1. 1

                                                                                                                                                  Nobody mentioned an agreement. A warranty is not the same as an agreement or contract.

                                                                                                                                                  A warranty is an example of an agreement. You purchase the thing, and they agree to take it back if it’s faulty. But they can put in whatever terms they like: they can define what taking it back means, define timelines, define ‘faulty’, etc. It’s completely up to them, really. If you don’t like it, don’t buy it.

                                                                                                                                                  It would have been useful to mention that you’re apparently in New Zealand. If I understand it, the law you’re talking about requires every retailer to accept returns of purchased merchandise.

                                                                                                                                                  Only if it’s faulty.

                                                                                                                                                  So now the argument hinges on what is considered defective and who gets to decide that. Is it up to the manufacturer? The retailer? The end user? In your country, I honestly don’t know and don’t care enough to research it right now.

                                                                                                                                                  The same way anything is decided legally: it starts off a bit fuzzy around the edges, but in the vast majority of cases, it’s pretty obvious what it means for something to be faulty. And in a few edge cases, it gets decided by the legal system which sets a precedent that sharpens the edges for everyone else in the future.

                                                                                                                                                  No, not really. Dead pixels are only obvious when the entire area around the dead pixel is one solid bright color, and even then, are generally indistinguishable from dust. Most people will never notice a dead pixel in everyday use, especially as the pixels in monitors get smaller and smaller.

                                                                                                                                                  I can guarantee I’d notice any dead pixels on my 1920x1200, 24 inch monitor. I can guarantee I’d notice any dead pixels on my phone. I think it’s nonsense to claim that most people would never notice a dead pixel in everyday use. A bright dot in the middle of your monitor is going to be obvious if you’re watching something that’s dark. The moment you watch a movie there’s an unmoving bright green dot in the middle of the screen? Everyone is going to notice that.

                                                                                                                                                  I have a huge monitor with a ridiculous resolution at home. It has a couple of dead pixels, it’s been months since I last noticed them. But by god it was like $200 on Amazon. I’ll happily save a few hundred dollars to deal with a couple of dead pixels I very rarely notice.

                                                                                                                                                  It’s fine if the manufacturers and retailers sell them at a discount as seconds. But that’s not what they’re doing. They’re selling them as normal and then just hoping people can’t be bothered complaining about them and returning them.

                                                                                                                                                  The realities of the LCD manufacturing process are such that if every LCD panel manufacturer threw out all of their panels with one or more dead pixels, every monitor produced would cost the end user a lot more.

                                                                                                                                                  For a start, nobody is saying that they have to throw them away. As I said, they could sell them at a discount as a second. Some people would be fine with that, others wouldn’t, that’s friendly to the customer and lets them make a choice with a tradeoff.

                                                                                                                                                  It’s far more efficient from a manufacturing, environmental, and market standpoint to just sell the slightly imperfect panels at a discount and sell the perfect panels for whatever the market will bear for zero dead pixels. Which is exactly what most manufacturers do.

                                                                                                                                                  That’s absolutely not what they do. They sell them all at a price somewhere between those two prices, and when you buy a monitor you roll the dice. Maybe you’ll be lucky, maybe you won’t. People that want a good monitor that actually works as advertised have to roll the dice, and someone that doesn’t care like yourself has to pay a higher cost (for your chance to get a perfect monitor) than they’d pay if they were able to specifically buy a monitor with a couple of dead pixels at a discount.

                                                                                                                                          1. 1

                                                                                                                                            This starts as a listing of previous hijacks but quickly takes a turn towards General Buck Turgidson in the war room yelling about an ASN gap.

                                                                                                                                            I don’t see any way to achieve the proposed “access reciprocity” without dramatically increasing US regulation of the Internet, which is about the last thing I’d want to see.

                                                                                                                                            1. 2

                                                                                                                                              I don’t see any way to achieve the proposed “access reciprocity” without dramatically increasing US regulation of the Internet, which is about the last thing I’d want to see.

                                                                                                                                              Serious question: you’re more comfortable with long-term route hijacking and subjecting large numbers of people here in North America to NSA-style data slurping by the Chinese government (along with the attendant industrial espionage and occasional blackmail that is its goal) than you are with fairly straightforward legislation around the presence of networking equipment owned by a foreign government’s national telecom on domestic soil?

                                                                                                                                              1. 1

                                                                                                                                                I’m entirely unconvinced this “fairly straightforward legislation” actually solves the problem. So let’s say we push it out, Verizon gets a PoP in China, and China Telecom path prepends AS701’s routes out of use. Then what? We force US networks to retaliate?

                                                                                                                                                That imbalance in access allows for malicious behavior by China through China Telecom at a time and place of its choosing, while denying the same to the US and its allies.

                                                                                                                                                The paper doesn’t beat around the bush with what this is about, so why should we? The US government is angry that it can’t shoot back.

                                                                                                                                                If you want fairly straightforward legislation to prevent BGP hijacking, mandate that all tier one networks implement BGPsec. That’s some regulation I’d like to see - I’d rather see disarmament over brinkmanship.

                                                                                                                                              2. 2

                                                                                                                                                I don’t see any way to achieve the proposed “access reciprocity” without dramatically increasing US regulation of the Internet, which is about the last thing I’d want to see.

                                                                                                                                                Well, the private sector’s incentives so far have been to screw people over maximally, especially ISP’s (examples). As this comment shows, every major advance with wide impact on the U.S. came with government intervention. The Internet itself was a product of government-funded R&D that the private sector was doing the opposite of: digital, toll roads w/ lock-in and tons of limitations. Now, we have the ISP’s snooping on our data and Tier 1-3’s refusing to do much about DDOS attacks facilitated by their inaction.

                                                                                                                                                So, I’m fine with a tiny bit of regulation that’s mostly result-oriented (eg net neutrality) and only prescriptive when strong evidence backs it (eg secure logins for routers vs Telnet). I pointed out here in the last paragraph that ISP’s could mitigate DDOS’s via regulations prescribing a few simple things, mostly using existing hardware. The big companies waste tens of millions or more on useless bullshit but can’t afford tens to hundreds of thousands to secure their modems and routers. (rolls eyes)

                                                                                                                                                In this longer comment, I pointed out regulation worked before for boosting INFOSEC of systems/networks and is currently boosting safety/correctness/predictability in regulated software markets (esp aerospace and rail). As Bell pointed out, the private market rarely made anything secure on its own since it’s more profitable to minimize quality of service and/or charge people for support/upgrades of shitty products. Pretty much the entire software market is doing that. That argues convincingly against trusting the private sector to do it with or without demand. That leaves government. And remember I had qualifications like results-oriented, minimal prescriptions, and so on. We don’t want another million-dollar pile of paperwork and hand-waving producing insecure, certified systems like Common Criteria turned into for the vast majority of certifications.

                                                                                                                                              1. 3

                                                                                                                                                Hopefully we’re not bored by “Rust is not other languages” stories yet. Not sure if I learned more about Rust or Ruby here.

                                                                                                                                                Rust is telling me that iter() yielded references to integers, but my code expected an actual integer, not a reference to an integer.

                                                                                                                                                Now the array was mutated! It turns out Ruby passed integers to the closure by value, but strings by reference. Updating each string inside the loop also updated that string inside the array.

                                                                                                                                                I managed to be surprised by both those statements.
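
                                                                                                                                                For the Rust half of that, the mismatch being described looks roughly like this (a minimal sketch with made-up data, not the article’s actual code): iterating a Vec<i32> with iter() hands you &i32, so you either dereference in the closure pattern or call .copied() before collecting.

                                                                                                                                                    fn main() {
                                                                                                                                                        let v = vec![1, 2, 3];
                                                                                                                                                    
                                                                                                                                                        // `iter()` yields `&i32`; the `&&x` pattern dereferences each item for the
                                                                                                                                                        // comparison, and `.copied()` turns the surviving `&i32`s into plain `i32`s.
                                                                                                                                                        let evens: Vec<i32> = v.iter().filter(|&&x| x % 2 == 0).copied().collect();
                                                                                                                                                    
                                                                                                                                                        println!("{evens:?}"); // prints [2]
                                                                                                                                                    }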

                                                                                                                                                1. 1

                                                                                                                                                  Ruby passes everything by reference; the issue in the code is that i = i + 1 rebinds i as a reference to a new integer object. The other half of the puzzle is that integers are immutable and strings are not. The writer managed to confuse the issue by saying

                                                                                                                                                  str = str << "-mutated"
                                                                                                                                                  

                                                                                                                                                  instead of

                                                                                                                                                  str = str + "-mutated"
                                                                                                                                                  

                                                                                                                                                  In both cases, str is rebound to the result of the RHS, but the + operator adds two strings and returns a new string, whereas the << operator modifies a string object in place and returns the same object.
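
                                                                                                                                                  For what it’s worth, the same rebinding-versus-mutation distinction shows up in Rust too; a small illustrative sketch (my own example, not from the article):

                                                                                                                                                      fn main() {
                                                                                                                                                          let mut s = String::from("hello");
                                                                                                                                                      
                                                                                                                                                          // In-place mutation, like Ruby's `<<`: the existing String is modified.
                                                                                                                                                          s.push_str("-mutated");
                                                                                                                                                      
                                                                                                                                                          // Rebinding, like Ruby's `str = str + "..."`: a new String is built and the
                                                                                                                                                          // name `s` is pointed at it (here via shadowing).
                                                                                                                                                          let s = s + "-rebound";
                                                                                                                                                      
                                                                                                                                                          println!("{s}"); // prints hello-mutated-rebound
                                                                                                                                                      }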

                                                                                                                                                  1. 1

                                                                                                                                                    Ok, using << seems pretty sneaky here. Thanks for pointing that out.

                                                                                                                                                  2. 1

                                                                                                                                                    Have you ever changed the value of 4?

                                                                                                                                                    In a language other than FORTRAN?

                                                                                                                                                    (In a more modern take on that old story, I have done it in Python…)

                                                                                                                                                    1. 3

                                                                                                                                                      Yes. To great amusement I have played around with reflection in Java and changed the values of signed bytes.

                                                                                                                                                      For those who haven’t played around with it previously, the JVM keeps an internal array that caches the boxed Integer objects for every value in the signed-byte range (-128 to 127).

                                                                                                                                                      While the field is private, internal, and static, it’s still possible to access it using reflection.

                                                                                                                                                      If you swap references in that array, suddenly 5 becomes 4.

                                                                                                                                                      1. 1

                                                                                                                                                        I’ve done the “redefining the value of true” gag in squeak smalltalk. It ends about as poorly as you’d expect.

                                                                                                                                                    1. 36

                                                                                                                                                      When reading this article I wanted to echo the same thing that Daniel Steinberg basically said.

                                                                                                                                                      DoH is necessary because the DNS community messed up over the past two decades. Instead of privacy, hop-to-hop authentication and integrity, they picked end2end integrity protection but no privacy (DNSSEC) and wasted two decades on heaping that unbelievable amount of complexity on top of DNS. Meanwhile cleanup of basic protocol issues with DNS crawled along at glacial pace.

                                                                                                                                                      This is why DNSSEC will never get browser support, but DoH is getting there rapidly. It solves the right problems.

                                                                                                                                                      1. 4

                                                                                                                                                        I haven’t studied DoH or DoT enough to feel comfortable talking about the solutions, but on the requirements side, intuitively I don’t get where this all-consuming “privacy” boundary is supposed to be. Is the next step that all browsers will just ship with mandatory VPNs so nobody can see what IP address I’m talking to? (Based on history, that wouldn’t really surprise me.) So then there’s a massive invisible overlay network just for the WWW?

                                                                                                                                                        And by “nobody” I mean nobody who doesn’t really matter anyway, since I’d think no corporation with an extensive network, nor any country with extensive human rights problems, is going to let you use either protocol anyway (or they’ll require a MITM CA).

                                                                                                                                                        1. 5

                                                                                                                                                          The end game is all traffic is protected by a sort of ad hoc point to point VPN between endpoints. There can be traffic analysis but no content analysis.

                                                                                                                                                          1. 6

                                                                                                                                                            We’re slowly moving towards “Tor”. It seems all privacy enhancements being implemented slowly build up to something that Tor already provides for a long time…

                                                                                                                                                            1. 4

                                                                                                                                                              “Tor all the things” would be awesome.. if it could be fast

                                                                                                                                                              1. 2

                                                                                                                                                                Or what dnscurve did years ago.

                                                                                                                                                              2. 1

                                                                                                                                                                But the point of this seems to be making the “endpoint” private as well. The line between “traffic” and “content” is ever blurrier — I wouldn’t have thought DNS is “content”. If it is, then I don’t know why IP addresses aren’t “content” just as much. Is this only supposed to improve privacy for shared servers?

                                                                                                                                                                1. 8

                                                                                                                                                                  I’ve never thought of the content of DNS packets as anything other than content. Every packet has a header containing addresses and some data. The data should be encrypted.

                                                                                                                                                                  1. 1

                                                                                                                                                                    I don’t think the argument is that simple. ICMP and ARP packets are also headers and data, but that data surely isn’t “content”. I would have made your statement just about application UDP and TCP.

                                                                                                                                                                    I think of “content” as what applications exchange, and “traffic” (aka “metadata”) as what the network that connects applications needs to exchange to get them connected. Given that both DNS names and IP addresses identify endpoints, it’s not obvious to me why DNS names are more sensitive than IP addresses. The end result of a DNS lookup is that you immediately send a packet to the resulting IP address, which quite often identifies who you’re talking to just as clearly as the DNS name.

                                                                                                                                                                    No doubt I’m just uneducated on this — my point was I don’t understand where that line is being drawn. When I try to follow this line of reasoning I end up needing a complete layer-3 VPN (so you can’t even see the IP addresses), not just some revisions to the DNS protocol.

                                                                                                                                                                    1. 2

                                                                                                                                                                      The end result of a DNS lookup is that you immediately send a packet to the resulting IP address

                                                                                                                                                                      This is a very limited view of DNS.

                                                                                                                                                                      1. 1

                                                                                                                                                                        Is there another usage of DNS that’s relevant to this privacy discussion that’s going on?

                                                                                                                                                                        1. 3

                                                                                                                                                                          Most browsers do DNS prefetching, which reveals page content even for links you don’t visit.

                                                                                                                                                                          1. 1

                                                                                                                                                                            Good point! It makes me think that perhaps we should make browsers continually prefetch random websites that the users don’t visit, which would improve privacy in much the same way as the CDNs do. (Actually, I feel like that has been proposed, though I can’t find a reference.)

                                                                                                                                                                            iTerm had a bug in which it was making DNS requests for bits of terminal output to see if they were links it should highlight. So sometimes content does leak into DNS — by either definition.

                                                                                                                                                                          2. 1

                                                                                                                                                                            CNAME records, quite obviously, for one

                                                                                                                                                                            1. 1

                                                                                                                                                                              OK, obviously, but then is there something relevant to privacy that you do with CNAME records, other than simply looking up the corresponding A record and then immediately going to that IP address?

                                                                                                                                                                              If the argument is “ah, but the A address is for a CDN”, that thread is below…I only get “privacy” if I use a CDN of sufficient size to obscure my endpoint?

                                                                                                                                                                              1. 3

                                                                                                                                                                                OK, obviously, but then is there something relevant to privacy that you do with CNAME records, other than simply looking up the corresponding A record and then immediately going to that IP address

                                                                                                                                                                                I resolve some-controversial-site-in-my-country.com to CNAME blah.squarespace.com. I resolve that to A {some squarespace IP}

                                                                                                                                                                Without DoH or equiv, it’s obvious to a network observer who I’m talking to. With it, it is impossible to distinguish it from thousands of other sites.

                                                                                                                                                                                If the argument is “ah, but the A address is for a CDN”, that thread is below…I only get “privacy” if I use a CDN of sufficient size to obscure my endpoint?

                                                                                                                                                                                Yes, this doesn’t fix every single privacy issue. No, that doesn’t mean it doesn’t improve the situation for a lot of things.

                                                                                                                                                                    2. 5

                                                                                                                                                                      IP addresses are content when they are A records to your-strange-porno-site.cx or bombmaking-101.su.

                                                                                                                                                      They are metadata when they redirect to *.cloudfront.net, akamaiedge.net, cdn.cloudflare.com, …, and huge swaths of the Internet are behind giant CDNs. Widespread DoH and ESNI adoption will basically mean that anyone between you and that CDN will be essentially blind to what you are accessing.

                                                                                                                                                                      Is this better? That’s for you to decide ;)

                                                                                                                                                                      1. 6

                                                                                                                                                                        Well, here again I don’t quite get the requirements. I’m not sure it’s a good goal to achieve “privacy” by routing everything through three giant commercial CDNs.

                                                                                                                                                                        1. 3

                                                                                                                                                                          Because three CDNs are literally the only uses of Virtual Hosting and SNI on the entire internet?

                                                                                                                                                                          I’d venture to say that the overwhelming majority of non-corporate, user generated content (and a large amount of smaller business sites) are not hosted at a dedicated IP. It’s all shopify equivalents and hundreds of blog and CMS hosting services.

                                                                                                                                                                          1. 1

                                                                                                                                                                            Well, the smaller the host is, the weaker the “security” becomes.

                                                                                                                                                                            Anyway, I was just trying to understand the requirements behind this protocol, not make a value judgment. Seems like the goal is increased obscurity for a large, but undefined and unstable, set of websites.

                                                                                                                                                                            If I were afraid of my website access being discovered, I personally wouldn’t rely on this mechanism for my security, without some other mechanism to guarantee the quantity and irrelevance of other websites on the same host/proxy. But others might find it useful. It seems to me like an inelegant hack that is partially effective, and I agree it’s disappointing if this is the best practical solution the internet engineering community has come up with.

                                                                                                                                                                            1. 2

                                                                                                                                                              I have multiple subdomains on a fairly small site. Some of them are less public than others, so it would be nice to not reveal their presence.

                                                                                                                                                              1. 2

                                                                                                                                                                On one hand: I agree that DNS-over-HTTPS is a silly and convoluted solution.

                                                                                                                                                                On the other hand: DNS-over-TLS is a bad solution for the reason pointed out: it lives on its own port.

                                                                                                                                                Question: Why do we need ports anymore at all? It seems like if we didn’t have dedicated port numbers, but instead referred to resources by subdomain or subdirectory beneath the main hostname, then all traffic would be indistinguishable when secured by TLS.

                                                                                                                                                                1. 4

                                                                                                                                                  Could it have been possible for DNS-over-TLS to use 443 and make the server able to route DNS and HTTP requests appropriately? I’m not very knowledgeable about TLS. From what I understand it’s just a transport layer, so a server could simply read the beginning of an incoming message and easily detect whether it is an HTTP or DNS header?
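
                                                                                                                                                  To make the “read the beginning of an incoming message” idea concrete, here’s a purely illustrative Rust sketch (the function and heuristic are made up, not taken from any real server): an HTTP/1.x request starts with an ASCII method token, while a DNS-over-TCP message starts with a two-byte length prefix and a binary header. In practice the hint would come from the TLS handshake itself (ALPN), as the reply below notes.

                                                                                                                                                      // Hypothetical demultiplexer sketch: guess whether a decrypted byte stream
                                                                                                                                                      // on port 443 is HTTP or DNS-over-TCP by looking at how it starts.
                                                                                                                                                      fn looks_like_http(first_bytes: &[u8]) -> bool {
                                                                                                                                                          let methods: [&[u8]; 5] = [b"GET ", b"POST", b"PUT ", b"HEAD", b"OPTIONS "];
                                                                                                                                                          methods.iter().any(|&m| first_bytes.starts_with(m))
                                                                                                                                                      }
                                                                                                                                                      
                                                                                                                                                      fn main() {
                                                                                                                                                          assert!(looks_like_http(b"GET / HTTP/1.1\r\nHost: example.com\r\n"));
                                                                                                                                                          // A DNS query over TCP: two-byte length prefix, then the binary DNS header.
                                                                                                                                                          assert!(!looks_like_http(&[0x00, 0x1c, 0xab, 0xcd, 0x01, 0x20]));
                                                                                                                                                      }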

                                                                                                                                                                  1. 9

                                                                                                                                                                    Yes, much like http2 works. It complicates the TLS connection because now it passes a hint about the service it wants, but that bridge is already crossed.

                                                                                                                                                                  2. 4

                                                                                                                                                    IP addresses allow two arbitrary computers to exchange information [1], whereas ports allow two arbitrary programs (or processes) to exchange information. Also, it’s TCP and UDP that have ports. There are other protocols that ride on top of IP (not that anyone cares anymore).

                                                                                                                                                                    [1] Well, in theory anyway, NAT breaks that to some degree.

                                                                                                                                                                    1. 3

                                                                                                                                                      Ports are kinda central to packet routing as it has been deployed, if my understanding is correct.

                                                                                                                                                                      1. 5

                                                                                                                                                                        You need the concept of ports to route packets to the appropriate process, certainly. However, with DNS SRV records, you don’t need globally-agreed-upon port assignments (a la “HTTP goes to port 80”). You could assign arbitrary ports to services and direct clients accordingly with SRV.

                                                                                                                                                                        Support for this is very incomplete (e.g. browsers go to port 80/443 on the A/AAAA record for a domain rather than querying for SRVs), but the infrastructure is in place.
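
                                                                                                                                                        As a sketch of what that could look like in practice (the names and numbers below are hypothetical, following RFC 2782’s record layout), a client would look up _service._proto.domain for an SRV record and connect to whatever port it advertises instead of assuming a well-known one:

                                                                                                                                                            // Hypothetical sketch of the data an SRV record carries and the name a client
                                                                                                                                                            // would query. A zone might publish something like:
                                                                                                                                                            //   _http._tcp.example.com. 3600 IN SRV 10 5 8080 www.example.com.
                                                                                                                                                            struct SrvRecord {
                                                                                                                                                                priority: u16,  // lower values are tried first
                                                                                                                                                                weight: u16,    // load-balancing weight among equal priorities
                                                                                                                                                                port: u16,      // the port the service actually listens on
                                                                                                                                                                target: String, // the host providing the service
                                                                                                                                                            }
                                                                                                                                                            
                                                                                                                                                            fn srv_query_name(service: &str, proto: &str, domain: &str) -> String {
                                                                                                                                                                format!("_{service}._{proto}.{domain}")
                                                                                                                                                            }
                                                                                                                                                            
                                                                                                                                                            fn main() {
                                                                                                                                                                let record = SrvRecord { priority: 10, weight: 5, port: 8080, target: String::from("www.example.com") };
                                                                                                                                                                let name = srv_query_name("http", "tcp", "example.com");
                                                                                                                                                                println!("{name} -> {}:{} (priority {}, weight {})", record.target, record.port, record.priority, record.weight);
                                                                                                                                                            }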

                                                                                                                                                                        1. 5

                                                                                                                                                                          On what port do I send the DNS query for the SRV record of my DNS server?

                                                                                                                                                                          1. 1

                                                                                                                                                                            Obviously, you look up an SRV record to determine which port DNS is served over. ;)

                                                                                                                                                                            I don’t know if anyone has thought about the bootstrapping problem. In theory, you could deal with it the same way you already bootstrap your DNS (DHCP or including the port with the IP address in static configurations), but I don’t know if this is actually possible.

                                                                                                                                                                          2. 2

                                                                                                                                                                            You need the concept of ports to route packets to the appropriate process

                                                                                                                                                                            Unless we assign an IP address to every web facing process.

                                                                                                                                                                        2. 1

                                                                                                                                                                          Problem: both solutions to private DNS queries have downsides related to the DNS protocol fundamentally having failed to envision a need for privacy

                                                                                                                                                                          Solution: radically overhaul the transport layer by replacing both TCP and UDP with something portless?

                                                                                                                                                          The suggested cure is worse than the disease in this case, in terms of the sheer amount of work it would require and the hardware and software that would have to be completely replaced.

                                                                                                                                                                          1. 2

                                                                                                                                                                            I don’t think DNS is the right place to do privacy. If I’m on someone’s network, he can see what IP addresses I’m talking to. I can hide my DNS traffic, but he still gets to see the IP addresses I ultimately end up contacting.

                                                                                                                                                                            Trying to add privacy at the DNS stage is doing it at the wrong layer. If I want privacy, I need it at the IP layer.

                                                                                                                                                                            1. 4

                                                                                                                                                                              Assuming that looking up an A record and making a connection to that IP is the only thing DNS is used for.

                                                                                                                                                                              1. 3

                                                                                                                                                                Think of CDN or “big websites” traffic. If you hit Google, Amazon, or Cloudflare datacenters, nobody will be able to tell if you were reaching google.com, amazon.com, cloudflare.com or any of their customers.

                                                                                                                                                                Currently, this is leaking through SNI and DNS. DoH and Encrypted SNI (ESNI) will improve on the status quo.

                                                                                                                                                                                1. 2

                                                                                                                                                                                  And totally screws small sites. Or is the end game centralization of all web sites to a few hosts to “protect” the privacy of users?

                                                                                                                                                                                  1. 2

                                                                                                                                                                                    You can also self-host more than one domain on your site. In fact, I do too. It’s just a smaller set :-)

                                                                                                                                                                                    1. 1

                                                                                                                                                                                      End game would be VPNs or Tor.

                                                                                                                                                                                    2. 2

                                                                                                                                                                    Is that really true? I thought request/response metadata and timing analysis could tell them who we were connecting to.

                                                                                                                                                                                      1. 2

                                                                                                                                                                                        Depends who they are. I’m not going to do a full traffic dump, then try to correlate packet timings to discover whether you were loading gmail or facebook. But tcpdump port 53 is something I’ve actually done to discover what’s talking to where.

                                                                                                                                                                                        1. 1

                                                                                                                                                                        True. Maybe ESNI and DoH are only increasing the required work. Needs more research?

                                                                                                                                                                                          1. 1

                                                                                                                                                                          Probably, to be on the safe side. I’d run it by experts in correlation analyses on network traffic. They might already have something for it.

                                                                                                                                                                                        2. 2

                                                                                                                                                                      nobody will be able to tell if you were reaching google.com, amazon.com, cloudflare.com or any of their customers.

                                                                                                                                                                                          except for GOOGL, AMZN, et al. which will happily give away your data, without even flinching a bit.

                                                                                                                                                                                          1. 1

                                                                                                                                                                                            Yeah, depends on who you want to exclude from snooping on your traffic. The ISP, I assumed. The Googles and Amazons of the world have your data regardless of DNS/DoH.

                                                                                                                                                                                            I acknowledge that the circumstances are different in every country, but in the US, the major ISPs actually own ad networks and thus have a strong incentive not to ever encrypt DNS traffic.

                                                                                                                                                                                            1. 1

                                                                                                                                                                                              Yeah, depends on who you want to exclude from snooping on your traffic. The ISP, I assumed. The Googles and Amazons of the world have your data regardless of DNS/DoH.

                                                                                                                                                                                              so i’m supposed to just give them full access over the remaining part which isn’t served by them?

                                                                                                                                                                                              I acknowledge that the circumstances are different in every country, but in the US, the major ISPs actually own ad networks and thus have a strong incentive not to ever encrypt DNS traffic.

                                                                                                                                                                                              ISPs in the rest of the world aren’t better, but this still isn’t a reason to shoehorn DNS into HTTP.

                                                                                                                                                                                              1. 1

                                                                                                                                                                            No, you’re misreading the first bit. You’re already giving it to them, most likely, because of all those cloud customers. This makes their main web property indistinguishable from their clients, once SNI and DNS are encrypted.

                                                                                                                                                                                                No need to give more than before.

                                                                                                                                                                                                1. 1

                                                                                                                                                                              You’re already giving it to them, most likely, because of all those cloud customers.

                                                                                                                                                                                                  this is a faux reason. i try to not use these things when possible. just because many things are there, it doesn’t mean that i have to use even more stuff of them, quite the opposite. this may be an inconvenience for me, but it is one i’m willing to take.

                                                                                                                                                                              This makes their main web property indistinguishable from their clients, once SNI and DNS are encrypted.

                                                                                                                                                                                                  indistinguishable for everybody on the way, but not for the big ad companies on whose systems things are. those are what i’m worried about.

                                                                                                                                                                                                  1. 1

Hm, I feel we’re going in circles here.

                                                                                                                                                                                                    For those people who do use those services, there is an immediate gain in terms of hostname privacy (towards their ISP), once DoH and ESNI are shipped.

                                                                                                                                                                                                    That’s all I’m saying. I’m not implying you do or you should.

                                                                                                                                                                                                    1. 1

                                                                                                                                                                                                      I’m not implying you do or you should.

no, but the implication of DoH is that i’ll end up using it even if i don’t want to. it’ll be baked into the browsers, and from there it’s only a small step to mandatory usage in systemd. regarding DoH in general: if all you have is http, everything looks like a nail.

                                                                                                                                                                                      2. 1

                                                                                                                                                                                        Alternative solution: don’t use DNS anymore.

                                                                                                                                                                                        Still lots of work since we need to ditch HTTP, HTTPS, FTP, and a host of other host-oriented protocols. But, for many of these, we’ve got well-supported alternatives already. The question of how to slightly improve a horribly-flawed system stuck in a set of political deadlocks becomes totally obviated.

                                                                                                                                                                                        1. 3

                                                                                                                                                                                          That’s the biggest change of all of them. The whole reason for using DoH is to have a small change, that improves things, and that doesn’t require literally replacing the entire web.

                                                                                                                                                                                          1. 1

                                                                                                                                                                                            Sure, but it’s sort of a waste of time to try to preserve the web. The biggest problem with DNS is that most of the time the actual hostname is totally irrelevant to our purposes & we only care about it because the application-layer protocol we’re using was poorly designed.

                                                                                                                                                                                            We’re going to need to fix that eventually so why not do it now, ipv6-style (i.e., make a parallel set of protocols that actually do the right thing & hang out there for a couple decades while the people using the old ones slowly run out of incremental fixes and start to notice the dead end they’re heading toward).

Myopic folks aren’t going to adopt large-scale improvements until they have no other choice, but as soon as they have no other choice they’re quick to adopt an existing solution. We’re better off already having made one they can adopt, because if we let them design their own it’s not going to last any longer than the last one.

                                                                                                                                                                                            DNS is baked into everything, despite being a clearly bad idea, because it was well-established. Well, IPFS is well-established now, so we can start using it for new projects and treating DNS as legacy for everything that’s not basically ssh.

                                                                                                                                                                                            1. 8

                                                                                                                                                                                              Well, IPFS is well-established now

                                                                                                                                                                                              No it’s not. Even by computer standards, IPFS is still a baby.

                                                                                                                                                                                              Skype was probably the most well-established P2P application in the world before they switched to being a reskinned MSN Messenger, and the Skype P2P network had disasters just like centralized services have, caused by netsplits, client bugs, and introduction point issues. BitTorrent probably holds the crown for most well-established P2P network now, and since it’s shared-nothing (the DHT isn’t, but BitTorrent can operate without it), has never had network-wide disasters. IPFS relies on the DHT, so it’s more like Skype than BitTorrent for reliability.

                                                                                                                                                                                              1. 0

                                                                                                                                                                                                It’s only ten years old, sure. I haven’t seen any reliability problems with it. Have you?

                                                                                                                                                                                                DHT tech, on top of being an actually appropriate solution to the problem of addressing static chunks of data (one that eliminates whole classes of attacks by its very nature), is more reliable now than DNS is. And, we have plenty of implementations and protocols to choose from.

                                                                                                                                                                                                Dropping IPFS or some other DHT into an existing system (like a browser) is straightforward. Opera did it years ago. Beaker does it now. There are pure-javascript implementations of DAT and IPFS for folks who can’t integrate it into their browser.

                                                                                                                                                                                                Skype isn’t a good comparison to a DHT, because Skype connects a pair of dynamic streams together. In other words, it can’t take advantage of redundant caching, so being P2P doesn’t really do it any favors aside from eliminating a single point of failure from the initial negotiation steps.

For transferring documents (or scripts, or blobs, or whatever), dynamism is a bug – and one we eliminate with named data. Static data is the norm for most of what we use the web for, and should be the norm for substantially more of it. We can trivially eliminate hostnames from all asset fetches, replace database blobs with similar asset fetches, use one-time pads for keeping secret resources secret while allowing anyone to fetch them, & start to look at ways of making services portable between machines. (I hear DAT has a solution to this last one.) All of this is stuff any random front-end developer can figure out without much nudging, because the hard work has been done & open sourced already.
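
To make the one-time-pad idea above concrete, here is a toy sketch (plain Node crypto, no actual IPFS involved, and the helper names are mine): the published object is the XOR of the secret with a random pad, anyone can fetch and cache it by its hash, and only someone holding the pad can recover the plaintext.

    import { createHash, randomBytes } from "node:crypto";

    // Toy sketch of "secret but publicly fetchable" content: publish pad XOR data,
    // address it by its hash, and share (address, pad) out of band with the reader.
    function contentAddress(bytes: Buffer): string {
      return createHash("sha256").update(bytes).digest("hex");
    }

    function xor(a: Buffer, b: Buffer): Buffer {
      const out = Buffer.alloc(a.length);
      for (let i = 0; i < a.length; i++) out[i] = a[i] ^ b[i];
      return out;
    }

    const secret = Buffer.from("meet at noon", "utf8");
    const pad = randomBytes(secret.length);     // one-time pad: same length, never reused
    const published = xor(secret, pad);         // safe to hand to any peer or cache

    const address = contentAddress(published);  // what you'd ask the network for
    console.log("fetch by:", address);
    console.log("recovered:", xor(published, pad).toString("utf8"));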

                                                                                                                                                                                                1. 4

IPFS is not ten years old. Its initial commit was five years ago, and that was the start of the paper, not the implementation.

                                                                                                                                                                                                  1. 1

                                                                                                                                                                                                    Huh. I could have sworn it was presented back in 2010. I must be getting it confused with another DHT system.

                                                                                                                                                                                              2. 7

                                                                                                                                                                                                Sure, but it’s sort of a waste of time to try to preserve the web.

                                                                                                                                                                                                This is letting Perfect be the enemy of Good thinking. We can incrementally improve (imperfectly, true) privacy now. Throwing out everything and starting over with a completely new set of protocols is a multi-decade effort before we start seeing the benefits. We should improve the situation we’re in, not ignore it while fantasizing about being in some other situation that won’t arrive for many years.

                                                                                                                                                                                                The biggest problem with DNS is that most of the time the actual hostname is totally irrelevant to our purposes & we only care about it because the application-layer protocol we’re using was poorly designed.

                                                                                                                                                                                                This hasn’t been true since Virtual Hosting and SNI became a thing. DNS contains (and leaks) information about exactly who we’re talking to that an IP address doesn’t.

                                                                                                                                                                                                1. 2

                                                                                                                                                                                                  This is letting Perfect be the enemy of Good thinking. We can incrementally improve (imperfectly, true) privacy now.

                                                                                                                                                                                                  We can also take advantage of low-hanging fruit that circumvent the tarpit that is incremental improvements to DNS now.

                                                                                                                                                                                                  The perfect isn’t the enemy of the good here. This is merely a matter of what looks like a good idea on a six month timeline versus what looks like a good idea on a two year timeline. And, we can guarantee that folks will work on incremental improvements to DNS endlessly, even if we are not those folks.

                                                                                                                                                                                                  Throwing out everything and starting over with a completely new set of protocols is a multi-decade effort before we start seeing the benefits.

                                                                                                                                                                                                  Luckily, it’s an effort that started almost two decades ago, & we’re ready to reap the benefits of it.

                                                                                                                                                                                                  DNS contains (and leaks) information about exactly who we’re talking to that an IP address doesn’t.

                                                                                                                                                                                                  That’s not a reason to keep it.

                                                                                                                                                                                                  Permanently associating any kind of host information (be it hostname or DNS name or IP) with a chunk of data & exposing that association to the user is a mistake. It’s an entanglement of distinct concerns based on false assumptions about DNS permanence, and it makes the whole domain name & data center rent-seeking complex inevitable. The fact that DNS is insecure is among its lesser problems; it should not have been relied upon in the first place.

                                                                                                                                                                                                  The faster we make it irrelevant the better, & this can be done incrementally and from the application layer.

                                                                                                                                                                                                2. 2

                                                                                                                                                                                                  But why would IPFS solve it?

                                                                                                                                                                                                  Replacing every hostname with a hash doesn’t seem very user-friendly to me and last I checked, you can trivially sniff out what content someone is loading by inspecting the requested hashes from the network.

IPFS isn’t mature either; it’s not even a decade old, and most middleboxes will start blocking it once people start using it for illegitimate purposes. There is no plan to circumvent blocking by middleboxes, not even after that stunt with putting Wikipedia on IPFS.

                                                                                                                                                                                                  1. 1

IPFS doesn’t replace hostnames with hashes. It uses hashes as host-agnostic document addresses.

                                                                                                                                                                                                    Identifying hosts is not directly relevant to grabbing documents, and so baking hostnames into document addresses mixes two levels of abstractions, with undesirable side effects (like dependence upon DNS and server farms to provide absurd uptime guarantees).

                                                                                                                                                                                                    IPFS is one example of distributed permanent addressing. There are a lot of implementations – most relying upon hashes, since hashes provide a convenient mechanism for producing practically-unique addresses without collusion, but some using other mechanisms.

The point is that once you have permanent addresses for static documents, all clients can become servers & you start getting situations where accidentally slashdotting a site is impossible, because the more people try to access it, the more redundancy there is in its hosting. You remove some of the hairiest problems with caching: while you can still flush things out of a cache, a cached copy is never invalidated by changes, because the object at a particular permanent address is immutable.

                                                                                                                                                                                                    Problems (particularly with web-tech) that smart folks have been trying to solve with elaborate hacks for decades become trivial when we make addresses permanent, because complications like DNS become irrelevant.
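
One reason the caching story gets simpler is that a content address verifies its own content: whoever hands you the bytes, you can check them against the address you asked for, so untrusted peers can serve as caches. A minimal sketch of that check, using a raw SHA-256 hex digest as a stand-in for a real multihash/CID:

    import { createHash } from "node:crypto";

    // In a content-addressed network, the fetcher does not need to trust the peer:
    // the address is the hash, so the bytes either match it or they are rejected.
    const addressOf = (bytes: Buffer) =>
      createHash("sha256").update(bytes).digest("hex");

    function verifyFetched(requested: string, bytes: Buffer): Buffer {
      if (addressOf(bytes) !== requested) {
        throw new Error("peer returned bytes that do not match the requested address");
      }
      return bytes; // safe to use, and safe to re-serve to other peers
    }

    // Usage: any peer (or local cache) can answer; correctness is checked locally.
    const doc = Buffer.from("static asset served by whoever has it");
    const addr = addressOf(doc);
    console.log(verifyFetched(addr, doc).toString("utf8"));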

                                                                                                                                                                                                    1. 1

                                                                                                                                                                                                      And other problems become hard like “how do I have my content still online in 20 years?”.

                                                                                                                                                                                                      IPFS doesn’t address the issues it should be addressing, using hashes everywhere being one of them making it particularly user-unfriendly (possibly even user-hostile).

                                                                                                                                                                                                      IPFS doesn’t act like a proper cache either (unless their eviction strategy has significantly evolved to be more cooperative) in addition to leaking data everywhere.

                                                                                                                                                                                                      Torrent and dat:// solve the problem much better and don’t over-advertise their capabilities.

                                                                                                                                                                                                      Nobody really needs permanent addressing, what they really need is either a Tor onion address or actually cashing out for a proper webserver (where IPFS also won’t help if your content is dynamic, it’ll make things only more javascript heavy than they already are).

                                                                                                                                                                                                      1. 1

                                                                                                                                                                                                        how do I have my content still online in 20 years?

                                                                                                                                                                                                        If you want to guarantee persistence of content over long periods, you will need to continue to host it (or have it hosted on your behalf), just as you would with host-based addressing. The difference is that your host machine can be puny because it’s no longer a single point of failure under traffic load: as requests increase linearly, the likelihood of any request being directed to your host decreases geometrically (with a slow decay via cache eviction).

                                                                                                                                                                                                        IPFS doesn’t address the issues it should be addressing, using hashes everywhere being one of them making it particularly user-unfriendly (possibly even user-hostile).

                                                                                                                                                                                                        I would absolutely support a pet-name system on top of IPFS. Hashes are convenient for a number of reasons, but IPFS is only one example of a relatively-mature named-data-oriented solution to permanent addressing. It’s minimal & has good support for putting new policies on top of it, so integrating it into applications that have their own caching and name policies is convenient.

                                                                                                                                                                                                        IPFS doesn’t act like a proper cache either (unless their eviction strategy has significantly evolved to be more cooperative) in addition to leaking data everywhere.

                                                                                                                                                                                                        Most caches have forced eviction based on mutability. Mutability is not a feature of systems that use permanent addressing. That said, I would like to see IPFS clients outfitted with a replication system that forces peers to cache copies of a hash when it is being automatically flushed if an insufficient number of peers already have it (in order to address problem #1) as well as a store-and-forward mode (likewise).

                                                                                                                                                                                                        Torrent and dat:// solve the problem much better and don’t over-advertise their capabilities.

                                                                                                                                                                                                        Torrent has unfortunately already become a popular target for blocking. I would personally welcome sharing caches over DHT by default over heavy adoption of IPFS since it requires less additional work to solve certain technical problems (or, better yet, DHT sharing of IPFS pinned items – we get permanent addresses and seed/leech metrics), but for political reasons that ship has probably sailed. DAT seems not to solve the permanent address problem at all, although it at least decentralizes services; I haven’t looked too deeply into it, but it could be viable.

                                                                                                                                                                                                        Nobody really needs permanent addressing,

                                                                                                                                                                                                        Early web standards assume but do not enforce that addresses are permanent. Every 404 is a fundamental violation of the promise of hypertext. The fact that we can’t depend upon addresses to be truly permanent has made the absurd superstructure of web tech inevitable – and it’s unnecessary.

                                                                                                                                                                                                        what they really need is either a Tor onion address

                                                                                                                                                                                                        An onion address just hides traffic. It doesn’t address the single point of failure in terms of a single set of hosts.

                                                                                                                                                                                                        or actually cashing out for a proper webserver

A proper web server, though relatively cheap, is more expensive and requires more technical skill to run than is necessary or desirable. It also represents a chain of single points of failure: a domain can be seized (by a state, or by anybody who can social-engineer GoDaddy or perform DNS poisoning attacks), while a host will go down under high load (or have its contents changed if somebody gets write access to the disk). Permanent addresses solve the availability problem in the case of load or active threat, while hash-based permanent addresses solve the correctness problem.

                                                                                                                                                                                                        where IPFS also won’t help if your content is dynamic,

                                                                                                                                                                                                        Truly dynamic content is relatively rare (hence the popularity of cloudflare and akamai), and even less dynamic content actually needs to be dynamic. We ought to minimize it for the same reasons we minimize mutability in functional-style code. Mutability creates all manner of complications that make certain kinds of desirable guarantees difficult or impossible.

                                                                                                                                                                                                        Signature chains provide a convenient way of adding simulated mutability to immutable objects (sort of like how monads do) in a distributed way. A more radical way of handling mutability – one that would require more infrastructure on top of IPFS but would probably be amenable to use with other protocols – is to support append-only streams & construct objects from slices of that append-only stream (what was called a ‘permascroll’ in Xanadu from 2006-2014). This stuff would need to be implemented, but it would not need to be invented – and inventing is the hard part.
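
Since “signature chains” are doing a lot of work there, here is a toy version of the idea (Node’s built-in Ed25519 keys; the record layout is mine, not any real protocol’s): the immutable objects stay immutable, and “the latest version” is just a signed record pointing at one of them, superseded by signing a record with a higher sequence number.

    import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

    // Toy "mutable name over immutable objects": the publisher signs
    // (sequence number, content address); readers keep the highest valid sequence.
    const { publicKey, privateKey } = generateKeyPairSync("ed25519");

    const addressOf = (bytes: Buffer) =>
      createHash("sha256").update(bytes).digest("hex");

    interface Pointer {
      seq: number;
      address: string;
      signature: Buffer;
    }

    function publish(seq: number, content: Buffer): Pointer {
      const address = addressOf(content);
      const signature = sign(null, Buffer.from(`${seq}:${address}`), privateKey);
      return { seq, address, signature };
    }

    function isValid(p: Pointer): boolean {
      return verify(null, Buffer.from(`${p.seq}:${p.address}`), publicKey, p.signature);
    }

    const v1 = publish(1, Buffer.from("first draft"));
    const v2 = publish(2, Buffer.from("second draft"));
    console.log(isValid(v1), isValid(v2)); // both verify; readers prefer seq 2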

                                                                                                                                                                                                        it’ll make things only more javascript heavy than they already are

                                                                                                                                                                                                        Only if we stick to web tech, and then only if we don’t think carefully and clearly about how best to design these systems. (Unfortunately, endemic lack of forethought is really the underlying problem here, rather than any particular technology. It’s possible to use even complete trash in a sensible and productive way.)

                                                                                                                                                                                                        1. 1

                                                                                                                                                                                                          The difference is that your host machine can be puny because it’s no longer a single point of failure under traffic load: as requests increase linearly, the likelihood of any request being directed to your host decreases geometrically (with a slow decay via cache eviction).

                                                                                                                                                                                                          I don’t think this is a problem that needs addressing. Static content like the type that IPFS serves can be cheaply served to a lot of customers without needing a fancy CDN. An RPi on a home connection should be able to handle 4 million visitors a month easily with purely static content.

                                                                                                                                                                                                          Dynamic content, ie the content that needs bigger nodes, isn’t compatible with IPFS to begin with.

                                                                                                                                                                                                          Most caches have forced eviction based on mutability

Caches also evict based on a number of different strategies that have nothing to do with mutability, though; IPFS’s strategy for loading content (FIFO, last I checked) behaves poorly with most internet browsing behaviour.
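
For readers unfamiliar with the distinction: FIFO evicts whatever entered the cache first, while LRU evicts whatever was used least recently, which tends to match browsing patterns better. A minimal LRU sketch, purely to make the difference concrete (this is not how any IPFS client is implemented):

    // Minimal LRU cache: a Map keeps insertion order, so re-inserting on access
    // moves an entry to the "most recently used" end; eviction takes the oldest.
    class LruCache<K, V> {
      private entries = new Map<K, V>();
      constructor(private capacity: number) {}

      get(key: K): V | undefined {
        const value = this.entries.get(key);
        if (value === undefined) return undefined;
        this.entries.delete(key); // refresh recency, unlike FIFO
        this.entries.set(key, value);
        return value;
      }

      set(key: K, value: V): void {
        if (this.entries.has(key)) this.entries.delete(key);
        this.entries.set(key, value);
        if (this.entries.size > this.capacity) {
          const oldest = this.entries.keys().next().value as K;
          this.entries.delete(oldest);
        }
      }
    }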

                                                                                                                                                                                                          DAT seems not to solve the permanent address problem at all, although it at least decentralizes services; I haven’t looked too deeply into it, but it could be viable.

The public key of a DAT share is essentially like an IPFS target, with the added bonus of having a tracked and replicated history and mutability, offering everything an IPNS or IPFS hash does. Additionally it’s more private and doesn’t try to sell itself as censorship-resistant (just look at the stunt with putting Wikipedia on IPFS).

                                                                                                                                                                                                          Every 404 is a fundamental violation of the promise of hypertext.

                                                                                                                                                                                                          I would disagree with that. It’s more important that we archive valuable content (ie, archive.org or via the ArchiveTeam, etc.) than having a permanent addressing method.

Additionally, permanent addressing still does not solve content going offline. Once it’s lost, it’s lost, and no amount of throwing blockchains, hashes, and P2P at it will ever solve that.

                                                                                                                                                                                                          You cannot stop a 404 from happening.

                                                                                                                                                                                                          The hash might be the same but for 99.999% of content on the internet, it’ll be lost within the decade regardless.

                                                                                                                                                                                                          Truly dynamic content is relatively rare

I would also disagree with that; on the modern internet, mutable and dynamic content are becoming more common as people become more connected.

Cloudflare and Akamai allow hosts to cache pages that are mostly static, like the reddit frontpage, as well as reducing the need for geo-replicated servers and reducing the load on the existing servers.

                                                                                                                                                                                                          is to support append-only streams & construct objects from slices of that append-only stream

                                                                                                                                                                                                          See DAT, that’s what it does. It’s an append-only log of changes. You can go back and look at previous versions of the DAT URL provided that all the chunks are available in the P2P network.
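
A toy model of that append-only shape, with nothing DAT-specific about it (the structure and names are mine): every write appends an entry, and “previous versions” are just prefixes of the log.

    // Toy append-only log: entries are never rewritten, only appended, so any
    // older version can be reconstructed from a prefix of the log.
    interface Entry {
      seq: number;
      path: string;
      content: string;
    }

    class AppendOnlyLog {
      private entries: Entry[] = [];

      append(path: string, content: string): number {
        const seq = this.entries.length + 1;
        this.entries.push({ seq, path, content });
        return seq;
      }

      // Reconstruct the file tree as it looked at a given version (a log prefix).
      checkout(version: number): Map<string, string> {
        const files = new Map<string, string>();
        for (const e of this.entries.slice(0, version)) files.set(e.path, e.content);
        return files;
      }
    }

    const log = new AppendOnlyLog();
    log.append("index.html", "v1");
    const v2 = log.append("index.html", "v2");
    console.log(log.checkout(1).get("index.html"));  // "v1"
    console.log(log.checkout(v2).get("index.html")); // "v2"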

                                                                                                                                                                                                          Only if we stick to web tech, and then only if we don’t think carefully and clearly about how best to design these systems.

IPFS in its current form is largely provided as a Node.js library, with bindings to some other languages. It’s being heavily marketed for browsers. The amount of JS in websites would only increase with IPFS, and it would likely slow everything down even further until it scales up to global or, as it promises, interplanetary scale (though interplanetary is a pipedream; the protocol can’t even handle satellite internet properly).

Instead of looking to pipedreams of cryptography for the solution, we ought to improve the infrastructure and reduce the amount of CPU needed for dynamic content. That is an easier and more viable option than switching the entire internet to a protocol that forgets data if it doesn’t remember it often enough.

                                                                                                                                                                                        1. 2

                                                                                                                                                                                          Sounds a bit weird. I don’t know anything about whether they actually cancelled their “10nm” node or not, but either way, don’t they basically have to move to a newer, smaller node level at some point? If they did cancel 10nm, wouldn’t they not officially cancel it until they had a replacement plan for how to move their tech forward?

                                                                                                                                                                                          1. 2

                                                                                                                                                                                            Sounds a bit weird. I don’t know anything about whether they actually cancelled their “10nm” node or not, but either way, don’t they basically have to move to a newer, smaller node level at some point? If they did cancel 10nm, wouldn’t they not officially cancel it until they had a replacement plan for how to move their tech forward?

                                                                                                                                                                                            TSMC has already launched their “7nm” process (which is, confusingly, physically close to Intel’s 10nm process). Intel probably sees no value in being a year+ late to 10nm parity (if their broken, repeatedly delayed 10nm process is even realistically on track for that).

                                                                                                                                                                                            They don’t need to wait until they have a replacement plan – true-7nm has been the next node in their roadmap for ages, and R&D was well underway for it several years ago. If they’re cancelling the current 10nm project it’s likely just to skip the node entirely and focus additional resources on ensuring a smooth and successful launch of their true-7nm effort.

                                                                                                                                                                                          1. 22

Why do we say kleenex instead of facial tissue? Branding matters a lot, and Mastodon has done a much better job of branding than the fediverse as a whole (which is largely down to the fact that Mastodon is an entity that can brand itself, whereas “the fediverse” is nothing but a notional collection of OStatus/ActivityPub participants with no central branding arm).

                                                                                                                                                                                            Besides, most non-technical users (who we’re increasingly seeing on Mastodon/The Fediverse, as opposed to the highly technical early adopters) are in the market for services, not protocols. Users never talked about being “on the XMPP federation”, it was always “here’s my AIM”, “here’s my gchat” etc. They still understood that these things worked together, but they weren’t interested in pedantic distinctions between hosting software and protocols, nor should they be.

                                                                                                                                                                                            1. 6

                                                                                                                                                                                              Users never talked about being “on the XMPP federation”, it was always “here’s my AIM”, “here’s my gchat” etc.

                                                                                                                                                                                              The difference is that XMPP was primarily about having one kind of conversation, and the fediverse isn’t.

The main use of the fediverse right now is for twitter-style interaction, and that’s fine, but Pixelfed’s photo sharing, Peertube’s video sharing, Plume’s long-form publishing, and now even chess-over-ActivityPub servers are becoming an important part of the fediverse in a way that’s more than just “yet another implementation of the same idea”. So if you group all that stuff together under “Mastodon” just because it uses the same protocol, you’re missing out on a whole lot.

                                                                                                                                                                                              1. 9

                                                                                                                                                                                                So if you group all that stuff together under “Mastodon” just because it uses the same protocol, you’re missing out on a whole lot.

If you’re a user, happily telling people that you’ve moved from Twitter to Mastodon, that you’re no longer doing Instagram but they can catch your photos over on Pixelfed, that you enjoyed some peertube videos, and that you’re doing your blogging on Plume…

                                                                                                                                                                                                what precisely are you “missing out on” just because you’re not lumping all of these disparate services under the meaningless blanket term “The Fediverse”? It strikes me as nearly the equivalent of complaining that users talk about using Facebook and YouTube and LiChess instead of just lumping it all under the blanket term HTTP.

                                                                                                                                                                                                1. 4

you’re no longer doing Instagram but they can catch your photos over on Pixelfed, that you enjoyed some peertube videos, and that you’re doing your blogging on Plume…

                                                                                                                                                                                                  The whole point is you’re able to interoperate with all those services from a single account.

                                                                                                                                                                                                  Your friend joins a Mastodon instance and starts following your Peertube account, so they get all these videos in their stream. They’re going to be very confused if they think that they’re following only “Mastodon users” because Mastodon doesn’t offer the ability to publish videos.

                                                                                                                                                                                                  It strikes me as nearly the equivalent of complaining that users talk about using Facebook and YouTube and LiChess instead of just lumping it all under the blanket term HTTP.

A better analogy would be if they thought they had to install a Facebook app and a YouTube app instead of realizing that they can both be accessed through a web browser.

                                                                                                                                                                                                  1. 2

What’s so confusing about being able to follow PeerTube publishers through Mastodon? I can follow quite a few YouTube publishers through Twitter; it’s just that the PeerTube case works out of the box instead of needing IFTTT.

                                                                                                                                                                                                    1. 2

                                                                                                                                                                                                      Yes, that’s my point. It’s not confusing in the case you describe because you’re aware that YouTube and Twitter are different things.

                                                                                                                                                                                                      My example was about the case where someone isn’t aware that anything but Mastodon exists on the fediverse.

                                                                                                                                                                                                      1. 2

                                                                                                                                                                                                        My example was about the case where someone isn’t aware that anything but Mastodon exists on the fediverse.

This is a marketing problem that “WELL ACTUALLY, you’re on the ‘fediverse’, silly user, not ‘Mastodon’” is certainly not going to solve, any more than “WELL ACTUALLY, you’re using GNU/Linux, Linux is the name of the kernel and GNU is…” fixed Hurd adoption.

                                                                                                                                                                                                  2. 1
                                                                                                                                                                                                    s/HTTP/The World-Wide Web/
                                                                                                                                                                                                    

                                                                                                                                                                                                    Valid complaint.

                                                                                                                                                                                                2. 8

Yeah, it’s a bit like people who insist that other people say GNU/Linux, and I get it, they’re right, but they’re also not going to get what they want.

                                                                                                                                                                                                  1. 1

                                                                                                                                                                                                    I’ve said it before and I’ll say it again: Mastodon is to the Fediverse what Ubuntu used to be for GNU/Linux desktop systems.

                                                                                                                                                                                                    Not a judgement call there… just an observation. This type of naming issue will come up over and over again pretty much forever :)

                                                                                                                                                                                                    1. 3

I’m… not sure what you’re trying to say? Like, it will overshadow it for a while but then it will be cleared up? Everyone just says Linux now.

                                                                                                                                                                                                1. 2

                                                                                                                                                                                                  My personal laptop is a 2015 15” MacBook Pro with a 1 TB SSD

                                                                                                                                                                                                  Work is a 2018 15” MBP, all stock.

                                                                                                                                                                                                  1. 2

After reading the article, I still don’t understand why it’s not OData, or how to constrain GraphQL queries to avoid DoS and poor optimization. Except that it looks more like SPARQL than SQL/OData: more “data-oriented”, with fewer implementation details like tables.

                                                                                                                                                                                                    The server implements resolvers that fulfill specific graph queries—the client cannot ask for anything the server does not explicitly handle.

If the query language supports arbitrary filtering and arbitrary joins (well, maybe filtering is limited to ==), then what are the differences from SQL? Yes, you can limit what can be joined and which fields support filtering, but you can do the same if your API talks SQL, by parsing queries, looking into the WHERE clauses and joins, and returning 4xx if the client requests too much. But that is a very bad way to design APIs.

Maybe I have the wrong picture of GraphQL? As I understand it, it works like SQL: the client sends a query, which looks like this:

                                                                                                                                                                                                    {
                                                                                                                                                                                                      human(id: 1002) {
                                                                                                                                                                                                        name
                                                                                                                                                                                                        appearsIn
                                                                                                                                                                                                        starships {
                                                                                                                                                                                                          name
                                                                                                                                                                                                        }
                                                                                                                                                                                                      }
                                                                                                                                                                                                    }
                                                                                                                                                                                                    

                                                                                                                                                                                                    It lists which fields to return, which associated entities to join, which of their fields to return, and which filters to apply. The server replies with the requested entities and their fields, arranged hierarchically. Queries are unconstrained; if you want to constrain them, you have to do it at the application level, by analyzing queries. If you don’t want clients to issue recursive queries which return infinite amounts of data, you have to build an “antivirus” that detects bad queries. Am I right?

                                                                                                                                                                                                    Maybe GraphQL just has bad documentation? For example,

                                                                                                                                                                                                    Along with functions for each field on each type:

                                                                                                                                                                                                    function Query_me(request) {
                                                                                                                                                                                                      return request.auth.user;
                                                                                                                                                                                                    }
                                                                                                                                                                                                    
                                                                                                                                                                                                    function User_name(user) {
                                                                                                                                                                                                      return user.getName();
                                                                                                                                                                                                    }
                                                                                                                                                                                                    

                                                                                                                                                                                                    Wut? Why freaking getters as part of protocol?

                                                                                                                                                                                                    1. 3

                                                                                                                                                                                                      If the query language supports arbitrary filtering and arbitrary joins (well, maybe filtering is limited to ==), then how is it different from SQL? Yes, you can limit what can be joined and which fields support filtering, but you could do the same if your API talked SQL, by parsing queries, looking at the WHERE clauses and joins, and returning a 4xx if the client requests too much. But that is a very bad way to design an API.

                                                                                                                                                                                                      Agreed.

                                                                                                                                                                                                      Maybe I have the wrong picture of GraphQL? As I understand it, it works like SQL: the client sends a query, which looks like this: It lists which fields to return, which associated entities to join, which of their fields to return, and which filters to apply. The server replies with the requested entities and their fields, arranged hierarchically.

                                                                                                                                                                                                      You certainly could design your GraphQL API to simply represent every model as a GraphQL Node, expose every attribute as a GraphQL field, and every join as a GraphQL edge – in much the same way as you could design a REST API that way. In that case, yes, GraphQL is basically web SQL.

                                                                                                                                                                                                      But there’s no reason to do that, really, and every reason to carefully design your GraphQL API to be an API – an abstraction designed for the construction of the front end, which doesn’t necessarily correspond in any meaningful way to the implementation of the backend, e.g.:

                                                                                                                                                                                                        query {
                                                                                                                                                                                                          starship_captains {
                                                                                                                                                                                                            shipName
                                                                                                                                                                                                            showName
                                                                                                                                                                                                          }
                                                                                                                                                                                                        }
                                                                                                                                                                                                      

                                                                                                                                                                                                      might under the hood be dealing with a Human table, a ships table, a show table, etc. It all depends on what your front end needs – this is the meat of API design, after all. It’s better in my mind to think of it as a kind of batch REST API protocol than as any kind of SQL-for-the-frontend.
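
                                                                                                                                                                                                      To make that concrete, here’s a rough sketch of what a resolver for that starship_captains field could look like (this assumes an Apollo/graphql-js style resolver map; the db helpers and table names are made up purely for illustration):

                                                                                                                                                                                                        const resolvers = {
                                                                                                                                                                                                          Query: {
                                                                                                                                                                                                            // One front-end-shaped field, assembled from several backend tables.
                                                                                                                                                                                                            starship_captains: async (parent, args, { db }) => {
                                                                                                                                                                                                              const captains = await db.humans.findAll({ isCaptain: true });
                                                                                                                                                                                                              return Promise.all(
                                                                                                                                                                                                                captains.map(async (captain) => ({
                                                                                                                                                                                                                  shipName: (await db.ships.findByCaptainId(captain.id)).name,
                                                                                                                                                                                                                  showName: (await db.shows.findById(captain.showId)).title,
                                                                                                                                                                                                                })),
                                                                                                                                                                                                              );
                                                                                                                                                                                                            },
                                                                                                                                                                                                          },
                                                                                                                                                                                                        };

                                                                                                                                                                                                      The client only ever sees shipName and showName; it neither knows nor cares that three tables were involved.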

                                                                                                                                                                                                      Wut? Why freaking getters as part of protocol?

                                                                                                                                                                                                      No. The documentation on that page is just confusing: it is demonstrating what the resolvers for those types would look like on the backend, written in JavaScript – they aren’t part of the protocol at all.

                                                                                                                                                                                                      Queries are unconstrained; if you want to constrain them, you have to do it at the application level, by analyzing queries.

                                                                                                                                                                                                      Not really, or at least, not in the way I’m understanding your sentence – you don’t peer at the text of a query and try to discern whether or not it touches something it shouldn’t, like some kind of PHP SQL sanitization library circa 1996. You constrain queries in a couple of ways – firstly, simply via the design of your API, in what you do and do not expose. Sensitive fields you never want to pass to a front end shouldn’t be made part of a GraphQL node definition in the first place, in much the same way that you’d exclude them from the definition of your JSON response serialization or w/e in a REST API.

                                                                                                                                                                                                      Secondly, when a field or node should be exposed to some users but not to others, you do permission checks at the resolver level. Those poorly introduced functions in the document you linked to are backend resolvers – they’re functions you provide to your GraphQL library, and they’re called every time it needs the value of a field to build the response. During response rendering you are thus gating each access to a field through your own code, which is where you implement the relevant permission and presentation logic. This is, again, not notably different from how you would normally construct a REST API.
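
                                                                                                                                                                                                      A per-field permission check might look something like this (again an Apollo/graphql-js style sketch – context.user, isManagerOf, and the salary field are all invented for the example):

                                                                                                                                                                                                        const resolvers = {
                                                                                                                                                                                                          Query: {
                                                                                                                                                                                                            // The resolver decides what "me" means; the client can't reach around it.
                                                                                                                                                                                                            me: (parent, args, context) => context.user,
                                                                                                                                                                                                          },
                                                                                                                                                                                                          User: {
                                                                                                                                                                                                            name: (user) => user.getName(),
                                                                                                                                                                                                            // Sensitive field: gated here, in code, not by inspecting query text.
                                                                                                                                                                                                            salary: (user, args, context) => {
                                                                                                                                                                                                              if (!context.user || !context.user.isManagerOf(user)) {
                                                                                                                                                                                                                throw new Error("Not authorized to view salary");
                                                                                                                                                                                                              }
                                                                                                                                                                                                              return user.salary;
                                                                                                                                                                                                            },
                                                                                                                                                                                                          },
                                                                                                                                                                                                        };

                                                                                                                                                                                                      Every time the library needs salary while building a response, that check runs first.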

                                                                                                                                                                                                      If you don’t want clients to issue recursive queries which return infinite amounts of data, you have to build an “antivirus” that detects bad queries. Am I right?

                                                                                                                                                                                                      No. You give the GraphQL library a maximum query depth – nesting depth, basically. 3 is pretty common. If the GraphQL library finds itself nesting the response any deeper than that, it aborts, and you can send the client whatever response you’d like.
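
                                                                                                                                                                                                      If your library of choice doesn’t have that built in, a hand-rolled depth check over the parsed query is only a few lines. A minimal sketch using graphql-js’s parse – the limit of 3 and the error handling are illustrative choices, and it ignores fragment spreads for brevity:

                                                                                                                                                                                                        // Reject queries nested deeper than MAX_DEPTH before any resolver runs.
                                                                                                                                                                                                        const { parse } = require("graphql");

                                                                                                                                                                                                        const MAX_DEPTH = 3;

                                                                                                                                                                                                        // Depth of a selection set: 1 + the deepest of its sub-selections.
                                                                                                                                                                                                        function selectionDepth(selectionSet) {
                                                                                                                                                                                                          if (!selectionSet) return 0;
                                                                                                                                                                                                          return 1 + Math.max(
                                                                                                                                                                                                            ...selectionSet.selections.map((sel) => selectionDepth(sel.selectionSet)),
                                                                                                                                                                                                          );
                                                                                                                                                                                                        }

                                                                                                                                                                                                        function checkDepth(queryText) {
                                                                                                                                                                                                          const doc = parse(queryText);
                                                                                                                                                                                                          for (const def of doc.definitions) {
                                                                                                                                                                                                            if (def.kind === "OperationDefinition" &&
                                                                                                                                                                                                                selectionDepth(def.selectionSet) > MAX_DEPTH) {
                                                                                                                                                                                                              throw new Error("Query is nested too deeply");
                                                                                                                                                                                                            }
                                                                                                                                                                                                          }
                                                                                                                                                                                                        }

                                                                                                                                                                                                        checkDepth("{ human(id: 1002) { starships { name } } }"); // ok: depth 3
                                                                                                                                                                                                        try {
                                                                                                                                                                                                          checkDepth("{ human(id: 1002) { friends { friends { friends { name } } } } }"); // depth 5
                                                                                                                                                                                                        } catch (e) {
                                                                                                                                                                                                          console.log(e.message); // "Query is nested too deeply"
                                                                                                                                                                                                        }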