1.  

    I like the idea of asymmetric fields.

    1. 6

      This article talks at length about the tensions between tokio and mixed workloads that it cannot even theoretically serve well, and goes into a bunch of workarounds that needed to be put in place to mask these tensions, instead of just avoiding the tensions to begin with.

      When a request’s dependency chain contains a mixture of components optimized for low latency (with short buffers in front) and components optimized for high throughput (with large buffers in front), you get the worst-of-all-worlds high-level system behavior.

      It’s like taking a school bus of kids to McDonald’s, going through the drive-through, and looping around once for each kid on the bus. Each loop is latency-optimized, and the kid whose turn it is will receive their meal at a low latency after the moment they get to order, but their sojourn time, where they are waiting around doing nothing before being served, explodes. But by taking the whole bus’s orders at once, the overall throughput is far higher, and because we have to accomplish a whole bus’s worth of orders anyway, there’s no point optimizing for latency below the whole-bus threshold. The idea has strong descriptive power for so many things in life, especially in software. Most people would probably not have a social media web site server kick off an apache mapreduce job for each GET to the timeline, even if the mapreduce job was only looking at a minuscule amount of data, because it is a similar (more exaggerated, but still the same idea) mixing of low-latency components with high-throughput components. Mixing queue depths in a request chain degrades both latency and throughput.

      Sure, it’s software, you can mix queue depths, and in this case it will probably actually give you a social advantage by signalling to a wider group of subcommunities that you are invested in the products of their social software activities, but you are leaving a lot on the table from an actual performance perspective. This is pretty important queueing theory stuff for people who want to achieve competitive latency or throughput. I strongly recommend (and give to basically everyone I work with on performance stuff) chapter 2, Methodology, from Brendan Gregg’s book Systems Performance: Enterprise and the Cloud, which goes into the USE method for reasoning about these properties at a high level.

      Drilling more into the important properties of a scheduler: parallelism (what you need for scaling CPU-bound tasks) is, from a scheduling perspective, the OPPOSITE of concurrency (what you need for blocking on dependencies). I love this video that illustrates this point: https://www.youtube.com/watch?v=tF-Nz4aRWAM&t=498s. By spending cycles on concurrent dependency management, you really cut into the basic compute resources available for accomplishing low-interactivity analytical and general CPU-bound work. The mind-exploder of this perspective is that a single-threaded execution is firmly in-between parallelism and concurrency on the programmer freedom vs scheduler freedom spectrum. The big con is people like Rob Pike who have managed to convince people (while selling compute resources) that concurrency is somehow an admirable path towards effective parallelism.

      Sure, you can essentially build a sub-scheduler that runs within your async scheduler that runs on top of your OS’s scheduler that runs on top of your cluster’s scheduler that runs on your business’s capex resources etc… etc… but it’s pretty clear that from the high-level, you can push your available resources farther by cutting out the dueling subschedulers for workloads where their tensions drive down the available utilization of resources that you’re paying for anyway.

      1.  

        Basically, they want one process to deal with both latency-optimized requests and throughput-optimized requests. I don’t know if their use case is valid, but basically they try to separate them into different systems. Something you actually advocate for.

        It is not unlike having a UI main thread and offloading long running ops to background tasks which may, e.g. report progress back to the UI.

        That is entirely reasonable, in my opinion.

        They even do use OS scheduling for their needs by using low priority threads.

        The decision whether to use tokio for the long running ops is more questionable. It might be just a matter of “it works well enough and we prefer using the same APIs everywhere.”

        They also put a big buffer in front with the channel (and a rather slow one, I think).

        I think it can obviously be optimized, but the question is more: do they need to? Or is there a solution that is actually simpler once you account for the new APIs it requires devs to know about?

        1.  

          Just a random thought related to not using OS schedulers to their full extent: in-process app schedulers relate to OS schedulers in a similar way Electron relates to native desktop toolkits.

          Reimplementing part of the OS stack seems silly until you want apps to work cross platform. Then your limited scheduler is still a compromise, but at least it works similarly everywhere. Overhead for threads, processes, and fibers is pretty different between operating systems, or at least it used to be.

          1.  

            Most services are deployed on Linux (and if not Linux then usually only one operating system), however, so discrepancies that cause performance drops on other operating systems are not that important.

            1.  

              I agree about deployment but the ecosystem would still prefer a cross platform approach.

              Being able to reproduce problems on your dev machine without a VM is very valuable, though.

              Also, I don’t think a single-OS-optimized approach to async in Rust would get adoption.

        1. 4

          If I’m understanding right, the main point here is to have a separate thread pool for async tasks which are expected to be CPU-heavy. This compares with the standard tokio approach, which is to use its built-in spawn_blocking thread pool, which takes a sync closure rather than a future. It probably deserved at least a mention for comparison, although it’s prominently discussed behind a couple of the links.

          This does raise a couple of questions for me:

          1. In the futures spawned onto this alternative executor, what kind of async subtasks are we talking about that aren’t sensitive to latency? Wouldn’t it be easy to accidentally smuggle in some I/O or an async lock that undoes the benefits?
          2. Does tokio have any problems with creating futures in the context of one Runtime and spawning them on another? I’m honestly not sure, it just raises some red flags since Runtimes have their own timer threads and the like.

          BTW: At this time something has gone wrong with the code snippet for setting up the new runtime and the blocks don’t make sense. The code behind the GitHub link looks okay.
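
          For reference, here is a minimal sketch of what such a two-runtime setup might look like (my own illustration; names and thread counts are made up, not taken from the article): a second multi-threaded Runtime is reserved for CPU-heavy futures, and the JoinHandle is awaited from the latency-sensitive runtime.

              fn main() {
                  // Dedicated runtime for CPU-heavy futures.
                  let cpu_runtime = tokio::runtime::Builder::new_multi_thread()
                      .worker_threads(2)
                      .thread_name("cpu-heavy")
                      .enable_all()
                      .build()
                      .unwrap();

                  // Runtime that keeps serving latency-sensitive tasks.
                  let main_runtime = tokio::runtime::Builder::new_multi_thread()
                      .enable_all()
                      .build()
                      .unwrap();

                  main_runtime.block_on(async {
                      // Spawn onto the other runtime, await the JoinHandle here.
                      let handle = cpu_runtime.spawn(async {
                          // ... CPU-heavy work, ideally yielding between chunks ...
                          42u64
                      });
                      let result = handle.await.unwrap();
                      println!("result = {result}");
                  });
              }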

          1.  

            I cannot answer all your questions but here is the problem with spawn_blocking:

            They are meant for code that spends most of its time blocked, waiting on IO. A reasonable strategy for this is to use a large number of threads, much higher than the actual parallelism provided by the available CPUs.
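
            As a rough illustration of that sizing (my own sketch, not from the article): tokio caps its blocking pool at 512 threads by default, far more than the CPU count, which is the right shape for waiting work but the wrong shape for CPU-bound work.

                fn main() -> Result<(), Box<dyn std::error::Error>> {
                    // The blocking pool is sized for "mostly waiting" work; 512 is
                    // also tokio's default upper bound, just shown explicitly here.
                    let _rt = tokio::runtime::Builder::new_multi_thread()
                        .max_blocking_threads(512)
                        .enable_all()
                        .build()?;
                    Ok(())
                }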

            The article wants a solution for CPU-heavy tasks that are executed in longer chunks for better throughput, but whose work interferes with the low-latency requirements of other tasks. These tasks are also lower priority, apparently.

            E.g. in their use case they have some cheap (at least in terms of CPU) requests that they want to serve with low latency. And some CPU-heavy operations for which higher latency is acceptable or unavoidable and throughput is more important.

            One example of a low-latency requirement they give is liveness checks. That seems weird to me, since we are talking about acceptable latencies in the range of seconds, and, if I get it right, they always recommend slicing even longer tasks into smaller chunks. Otherwise it also becomes difficult to provide meaningful liveness probe coverage for that part of the app.

            1.  

              are meant for code that spends most of its time blocked

              I do utilize them also for any CPU-intensive code like bcrypt - though I’m not intentionally relying on the thread pool of tokio for CPU-intensive tasks, I just don’t want to block my webserver when performing them.
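
              Roughly what that looks like (a sketch, assuming the bcrypt crate; the handler name is made up):

                  // Push the bcrypt hash onto tokio's blocking pool so the async
                  // worker threads stay free for other requests.
                  async fn hash_password(password: String) -> Result<String, tokio::task::JoinError> {
                      tokio::task::spawn_blocking(move || {
                          bcrypt::hash(password, bcrypt::DEFAULT_COST).expect("hashing failed")
                      })
                      .await
                  }

                  #[tokio::main]
                  async fn main() {
                      let hash = hash_password("hunter2".to_string()).await.unwrap();
                      println!("{hash}");
                  }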

          1. 4

            A good discussion.

            Good points made:

            • reference-counting has high overhead all the time because you’re constantly incrementing and decrementing reference counts, and they’re not only memory accesses, but atomic memory accesses. To some extent the extra instructions don’t matter because modern wide OoO CPUs seldom have enough instruction-level parallelism to keep them completely busy, so things off to the side of the main computation such as reference counts and bounds checks can be close to free (except for increased code size, icache size and bandwidth pressure). But atomic memory operations, ugh.

            • reference counting can have long pauses when the last reference to a large data structure goes away, and traditional reference counting GCs don’t have any mechanism to do that incrementally, whereas tracing collectors have had a lot of work making them incremental.

            The thing that all garbage collectors have to do is tell whether a piece of memory contains a pointer because they need to traverse all the pointers and they need to not traverse the things which are not pointers. In OCaml that’s done using what’s called tagging. Each eight bytes of memory on the heap, the ones that end in a one are integers, and the ones that end in the zero are pointers.

            So it’s very easy to tell in the garbage collector, as you’re walking along, which is which. That’s not the case for all garbage collectors. In a lot of other garbage collectors, there’s a separate data structure off to one side that you have to look up with some tag from each object to work out its layout, to decide where the pointers are.

            Right. Having to go off and follow a type tag or class pointer or something to find a data structure describing where the pointers are is slow. I’ve found it’s not worth it compared to simply scanning the whole object – at least if obvious things such as buffers and the bodies of strings and arrays known to contain only unboxed integers or floats are allocated or marked in a way that means you know there are no pointers there.

            I don’t much like taking tag bits out of objects. It’s hard to avoid in a completely dynamically-typed language, but if you have some kind of type declarations then most of the time it’s easy for the compiler to know either “this is definitely an integer” (or character, or float, or boolean), or else “this is definitely not an integer or character or float or boolean”. In either case you don’t need the tag bits. Which means you can store a full machine word sized integer without losing range off it from the tag bits. That’s important for a lot of algorithms especially in graphics, cryptography etc.

            The Boehm–Demers–Weiser garbage collector performs surprisingly well (not many supposedly clever GCs actually manage to beat it), at least if you don’t use the optional struct layout description feature.

            As with the OCaml collector, you just scan every pointer-sized pointer-aligned chunk of memory in the object and try to decide if it’s a pointer. Instead of looking at the LSB, the primary and first test is whether the value lies within the garbage-collected heap. That’s just a range check – a subtraction and an unsigned-branch-if-less-than using values held in local variables (registers). That’s very quick – the same speed as the AND with 0x1 and beq/bne OCaml needs. On a 64 bit CPU with current RAM sizes the chance of a random bit pattern looking like a pointer into the heap by accident is very small. Even programs that use many GB of memory in total (e.g. web browsers) have only a few (or few dozen or few hundred) megabytes of objects that are expected to contain pointers. But the address space is 17,179,869,184 GB.
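
            A minimal sketch of the two “is this a pointer?” tests being contrasted here (illustrative only, not taken from either collector’s source):

                // OCaml-style tagging: low bit set = immediate integer, clear = pointer.
                fn is_ocaml_pointer(word: usize) -> bool {
                    word & 1 == 0
                }

                // Boehm-style conservative test: a single unsigned range check against
                // the GC heap bounds. Values below heap_start wrap around to something
                // huge, so one subtraction plus one unsigned compare covers both ends.
                fn might_be_heap_pointer(word: usize, heap_start: usize, heap_end: usize) -> bool {
                    word.wrapping_sub(heap_start) < heap_end - heap_start
                }

                fn main() {
                    // Toy example: a made-up heap spanning [0x1000, 0x2000).
                    println!("{}", is_ocaml_pointer(0x1518));                      // true (even)
                    println!("{}", might_be_heap_pointer(0x1518, 0x1000, 0x2000)); // true
                }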

            There are some further tests around whether the supposed pointer is pointing to a VM page that is actually in use for storing objects, whether it’s to an object not marked as free etc. Those fall out as a side-effect of finding the start and size of the object.

            Very occasionally a random bit pattern / integer / float will look like a pointer to an actual object AND the last real pointer to that object has disappeared and so the object will be wrongly retained for longer than it should be. This is not a problem in practice. Tracing GCs do not promise to reclaim every object as soon as it could possibly be reclaimed.

            1. 2

              if you have some kind of type declarations then most of the time it’s easy for the compiler to know either “this is definitely an integer” (or character, or float, or boolean)

              Note: OCaml (as many functional languages) has algebraic datatypes (variants, sum types), a generalization of enums where constructors may or may not carry parameters; for example a value of a binary search tree is either an empty tree Empty, or a node Node(left, val, right) with three parameters (left subtree, node value, right subtree).

              The representation of algebraic datatypes used by OCaml is a (tagged) integer for parameter-less constructors, and a pointer to a heap block for constructors with parameters. As a consequence, many types heavily used in practice (option/maybe, success-or-error results, lists, sets, maps, various abstract syntax trees…) have values that are either immediate values (represented as integers) or pointers.

              GHC Haskell represents all data constructors as pointers, but uses the low bits of the pointer to store the constructor number when it is small (< 8 I guess?). This gives a representation that is as efficient in practice when there are few constant constructors, but I would still expect it to be slower when there are many (for example, to represent the tokens parsed by a lexer, or the state of a state machine).

              1. 2

                So single threaded reference counting like Rc in rust is significantly cheaper?

                I vaguely remember having heard of a mixed approach: counting separately in threads and occasionally syncing. For that you again need some sort of barrier, but there is less work to do…

                I wonder how such an approach compares in a typical rust program with relatively few Arcs…

                1. 2

                  Reference counting in Rust is generally cheaper. You don’t have to touch the counters often, because you have an option of lending the content of Rc/Arc as a bare reference instead.

                  The unpredictable cost of destruction is also solvable. It’s not even a problem unless you have a large object graph or expensive destructors. When you have an object graph that you know how to drop efficiently, you can write a custom drop implementation (e.g. it’s a good idea for refcount-based trees to use an iterative algorithm to drop their children instead of recursing).
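
                  A minimal sketch of that iterative-drop idea (hypothetical list type, not from any particular crate):

                      use std::rc::Rc;

                      struct Node {
                          value: u64,
                          next: Option<Rc<Node>>,
                      }

                      struct List {
                          head: Option<Rc<Node>>,
                      }

                      impl Drop for List {
                          fn drop(&mut self) {
                              // Walk the list in a loop instead of letting Rc drops
                              // recurse once per node.
                              let mut cur = self.head.take();
                              while let Some(node) = cur {
                                  match Rc::try_unwrap(node) {
                                      // We held the last reference: detach the tail and continue.
                                      Ok(mut inner) => cur = inner.next.take(),
                                      // Someone else still owns the rest; just drop our handle.
                                      Err(_) => break,
                                  }
                              }
                          }
                      }

                      fn main() {
                          let tail = Rc::new(Node { value: 2, next: None });
                          let list = List { head: Some(Rc::new(Node { value: 1, next: Some(tail) })) };
                          drop(list); // runs the iterative Drop above
                      }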

                  Arc exposes refcount and can be sent to a thread, so if all else fails, you can write a destructor that sends its execution to a queue or a thread.

                  if Arc::strong_count(&x) == 1 { rayon::spawn(move || drop(x)) }
                  
                2. 1

                  From a user perspective, the tagged pointers were never a problem for me: my integers either fit even in 31 bits, or a full-grown bignum library like Zarith is needed.

                  On a side note, the work being done on the OCaml GC in the past year got me actually interested in memory management, though I’m still far from being able to contribute to that work…

                  1. 1

                    reference-counting has high overhead all the time because you’re constantly incrementing and decrementing reference counts, and they’re not only memory accesses, but atomic memory accesses

                    The atomics can be avoided, if you are clever. And gc has some degree of constant overhead too, with barriers. Regardless, barriers are cheaper, and pay for themselves anyway with significantly faster memory management.

                    Each eight bytes of memory on the heap, the ones that end in a one are integers, and the ones that end in the zero are pointers

                    :/ they messed it up. That should be backwards. Additionally: what do they do about floating-point numbers, and packed arrays of small numbers?

                    if you have some kind of type declarations then most of the time it’s easy for the compiler to know either “this is definitely an integer” (or character, or float, or boolean), or else “this is definitely not an integer or character or float or boolean”. In either case you don’t need the tag bits

                    Hotspot has a cute trick: all the fields of a class are reordered, so that the pointers are contiguous. However, the practicality of this depends on language semantics, and any mechanism that requires lookups will probably incur extra overhead. Applications which specifically require specific-width integers are somewhat specialised; is it worth compromising whole-application performance for them? And if performance is truly critical, such routines may be written in assembly. And: a compiler may optimize routines which do not allocate to use unboxed integers.

                    The Boehm–Demers–Weiser garbage collector performs surprisingly well (not many supposedly clever GCs actually manage to beat it)

                    Is that still true? Back in the day, it had a really good allocator and could be used as a fast malloc implementation; but it seems that other allocators have caught up (je, tc, mi).

                    On a 64 bit CPU with current RAM sizes the chance of a random bit pattern looking like a pointer into the heap by accident is very small. Even programs that use many GB of memory in total (e.g. web browsers) have only a few (or few dozen or few hundred) megabytes of objects that are expected to contain pointers. But the address space is 17,179,869,184 GB.

                    Address space is 256 TB on a modern CPU, 128 TB in practice. And your pointers are not distributed randomly throughout it. And: precise GC is not so much interesting in itself as because it permits copying and compacting GC.

                  1. 20

                    People who learned Rust recently often remark on how complicated lifetimes and the error handling are.

                    I think experience significantly changes how it affects you, but it is still true.

                    Lifetimes: For a lot of coding, lifetimes in Rust are nearly frictionless for me. And then you want to make an innocent seeming change and you have to add lifetime annotations everywhere or live with extra copies/reference counting. And the self-reference issue mentioned in the article is spot on. I hit that in real life quite often: Wanting to index data in multiple ways (without copies) and not being able to put that easily into the same struct. A lot is possible but it is work.

                    E.g. it would be possible to design a HashMap for values that contain their keys, exposed via a trait. But you cannot use the normal HashMap that way.
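
                    A small sketch of what I mean (illustrative names only): with a trait you can express “the value carries its key”, but with the std HashMap you end up cloning the key out of the value, because the map cannot borrow its key from the value it owns.

                        use std::collections::HashMap;

                        trait HasKey {
                            fn key(&self) -> &str;
                        }

                        struct User {
                            name: String,
                            age: u32,
                        }

                        impl HasKey for User {
                            fn key(&self) -> &str {
                                &self.name
                            }
                        }

                        // Workaround with the normal HashMap: duplicate the key as an owned String.
                        fn index_users(users: Vec<User>) -> HashMap<String, User> {
                            users.into_iter().map(|u| (u.key().to_owned(), u)).collect()
                        }

                        fn main() {
                            let users = vec![User { name: "alice".into(), age: 30 }];
                            let by_name = index_users(users);
                            println!("{}", by_name["alice"].age);
                        }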

                    Error Handling: For error handling, if you go down the pedantic path of making specific error types: It is a lot of work. The opposite (e.g. with anyhow) is not. What I find hard to do in Rust is the middle ground: general handling of most errors and special treatment of special cases.

                    E.g. I recently followed “Crafting Interpreters” and wrote the interpreter in Rust. For return statements he used exceptions in the Java implementation. I wanted to bake that into the error type. But since that also references a return value which is not always thread-safe in my implementation (function references), a thread-safe anyhow Error could not be used to wrap that.

                    Once I let go of that idea and just switched to function return values that potentially indicated a Return, it was surprisingly easy. But this sort of puzzling until you find a workable solution within Rust’s constraints happens to me from time to time.

                    1. 4

                      just switched to function return values that potentially indicated a Return

                      I think I can totally understand what you mean with that. The first few weeks I coded in rust I also tried to “get it right”, using lifetimes, passing pointers with specific lifetimes in structs that originated from another function and so on. It was a huge mess and I’d become desperate trying to avoid clones/Rc or return oriented programming. Now I don’t actually have that problem at all - either I’m doing something very right or very wrong. And no I’m not using Rc/Arc/Clone all over the place (even though I’d strongly advise every beginner to do that instead of fighting the borrow checker for eternity).

                      What I meant to say is that my guess is also that many of those experiences come from people who aren’t writing in-depth data structures like doubly linked lists or b-trees with recursive references and so on. No they are just writing some application which doesn’t actually require the magic you’d write for one of these data structures. But for some reason they’re trying to be clever or over-optimize which ends up in such problems.

                      Overall I get the feeling that it all comes down to what I experienced trying to learn Racket for the first time: you’re holding it wrong, you’re still trying to apply the rules of a different language universe.

                      making specific error types: It is a lot of work

                      I’d highly suggest beginners use thiserror:

                       use thiserror::Error;

                       #[derive(Error, Debug)]
                       pub enum AuthError {
                           #[error("unknown data store error")]
                           Other(#[from] color_eyre::eyre::Error),
                           #[error("not authenticated")]
                           NotAuthenticated,
                          [...]
                           #[error("invalid login")]
                           InvalidCredentials,
                       }
                      

                      Note that we’re also wrapping an “anyhow”-ish eyre::Error.

                      Which also allows for stuff like turning errors into http responses for a web service.

                      impl ResponseError for AuthError {
                          fn error_response(&self) -> HttpResponse {
                              match self {
                                  AuthError::NotAuthenticated => HttpResponse::Unauthorized().finish(),
                                 [...]
                                  e => { warn!("{}",e); HttpResponse::InternalServerError().finish() }
                              }
                          }
                      }
                      
                      1. 1

                        I guess the core message about lifetimes for Rust is: in “normal” programs they are surprisingly low friction with experience. I mostly hit borrow checker errors which are easily solvable locally by a temporary variable. That is amazing because you get real benefits from this.

                        But e.g. if you used stdout so far and want to pass an arbitrary Write for the output to your code, you cannot just add a dyn reference as you would do in Java or other languages. You need to do more work. You can use dyn dispatch+lifetimes and/or generics, potentially parametrizing a lot of types in your program. Alternatively, you have to use Rc<RefCell<_>> or Arc<Mutex<_>>. I am simplifying; depending on your circumstances, there are even more choices, but that doesn’t make it simpler.
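
                        A minimal sketch of the first two options (type and function names are made up): a type generic over Write, versus a boxed trait object.

                            use std::io::Write;

                            // Option 1: generics. Everything that stores a Reporter now carries <W> too.
                            struct Reporter<W: Write> {
                                out: W,
                            }

                            impl<W: Write> Reporter<W> {
                                fn report(&mut self, msg: &str) -> std::io::Result<()> {
                                    writeln!(self.out, "{msg}")
                                }
                            }

                            // Option 2: dynamic dispatch. No type parameter spreads through the
                            // program, at the cost of a Box and a vtable call.
                            struct DynReporter {
                                out: Box<dyn Write>,
                            }

                            fn main() -> std::io::Result<()> {
                                let mut r = Reporter { out: std::io::stdout() };
                                r.report("hello")?;

                                let mut d = DynReporter { out: Box::new(Vec::<u8>::new()) };
                                writeln!(d.out, "buffered")?;
                                Ok(())
                            }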

                        I appreciate what Rust is achieving with these constraints (which give you confidence and ultimately higher productivity), but while mostly low friction, lifetimes occasionally slow me down significantly. Honestly, anything else would be a miracle.

                        making specific error types: It is a lot of work

                        I know and use thiserror, it is great for what it does! I consider using it in practice “a lot of work”, e.g. adding at least one enum variant for each wrapped lib error.

                        You can even wrap anyhow::Error as a source but then you still need an extra conversion for these errors. Does eyre::Error solve this? (I wouldn’t really know how, without generic specialization.)

                        1. 1

                          you still need an extra conversion for these errors

                          I typically have something like foo().context("adding xyz")?, which will automatically convert this to eyre::Error (and anyhow has something similar IIRC), then ? will convert it to your thiserror implementation.
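
                          Put together, the pattern looks roughly like this (a sketch using anyhow and thiserror; the function names and file path are made up):

                              use anyhow::Context;
                              use thiserror::Error;

                              #[derive(Error, Debug)]
                              enum AuthError {
                                  #[error("invalid login")]
                                  InvalidCredentials,
                                  // Catch-all for errors we don't handle specially.
                                  #[error("internal error")]
                                  Other(#[from] anyhow::Error),
                              }

                              // Low-level call: .context() attaches a message and yields anyhow::Error.
                              fn load_user(id: u64) -> anyhow::Result<String> {
                                  std::fs::read_to_string(format!("/tmp/users/{id}"))
                                      .context("reading user record")
                              }

                              // `?` then converts anyhow::Error into AuthError::Other via #[from].
                              fn authenticate(id: u64) -> Result<String, AuthError> {
                                  let user = load_user(id)?;
                                  if user.is_empty() {
                                      return Err(AuthError::InvalidCredentials);
                                  }
                                  Ok(user)
                              }

                              fn main() {
                                  match authenticate(42) {
                                      Ok(user) => println!("hello {user}"),
                                      Err(err) => eprintln!("login failed: {err}"),
                                  }
                              }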

                          1. 1

                            That makes a lot of sense for errors that you don’t expect to be handled.

                            Thank you.

                            1. 1

                              That, and with color_eyre you get backtraces on top. So typically for DB calls you don’t expect to die, and if they do, you appreciate a stack trace.

                    1. 7

                      The author, on generics with comptime functions:

                      Despite my enthusiasm, I am simultaneously a bit apprehensive about how far this can be extended to stuff that can’t be fully inferred at compile time.

                      Funnily, my concerns would nearly point in the opposite direction: the mechanism is so general that an IDE or other tool needs to evaluate Zig code for code completion etc.

                      But it does seem super elegant! (No Zig experience here yet.)

                      1. 13

                        That’s true, comptime can do a lot of things that an IDE would have a hard time understanding. The current plan is to add a subcommand to the Zig compiler to make it provide this information to the IDE. This is something that will be explored once the main work on the self-hosted compiler is complete.

                      1. 12

                        Java lived up to the hype,

                        Oof… What?

                        I love rust and I don’t think it is hyped, really. But why go and defend Java? Just hurts the argument IMO.

                        1. 39

                          While not every Java prophesy came true as foretold, I think it was very successful overall.

                          Android runs on Java, and it’s the most popular consumer OS in the world. So billions of devices really do use Java. It is write once, and run on a bunch of devices with vastly different hardware, and even Chromebooks and Windows 11. For over a decade it was probably the only sensible option for high-performance servers.

                          Keep in mind that Java hype happened when there weren’t many other options. There was no golang, Rust, or Swift. The fancier JVM languages hadn’t been created yet. There was no “modern” C++. C# was a clone from an evil empire. JavaScript was an interpreted toy confined to a slow buggy environment. Lua was obscure and LuaJIT didn’t exist yet. You had C, Python in its infancy (and as slow as ever), Perl, and some more obscure languages that wouldn’t easily run on consumer machines.

                          Java looked really good in that company. And today despite much bigger choice of languages available, Java is still one of the most popular ones.

                          1. 1

                            The book “modern C++” was published in 1992. Unfortunately I can’t actually find a reference to that book online. As I recall it had a purple cover.

                            1. 4

                              I think of https://github.com/isocpp/CppCoreGuidelines when I hear “modern C++”.

                              1. 2

                                I thought the “modern C++” phrase originated with Alexandrescu’s book, Modern C++ Design, published in 2001.

                            2. 17

                              “There are only two kinds of languages: the ones people complain about and the ones nobody uses”

                              Java is probably the most critical programming language in the enterprise space, the backbone of the majority of mobile devices (Android), and was used to create the best-selling video game of all time (Minecraft). Its time is waning, but it lived up to the hype.

                              1. 2

                                I don’t think these things are related. Sure, Java is entrenched and sure it’s very popular in some verticals, but it hasn’t managed to become popular for the things C was popular for (mostly), and the run-everywhere thing sort of fell flat as x86 crushed everyone. I’m not sure I would say it came close to “lived up to the hype”, but maybe it depends on one’s memories of the hype.

                                1. 16

                                  Looking back now, I’d say it did. Normalizing managed code and garbage collection alone would qualify, but add robust, cross platform concurrency primitives and stable, cross platform GUI, classloaders and all the stuff about OSGi… I resent it for killing off Smalltalk’s niche, but it moved a much larger piece of software development in a good direction.

                                  1. 9

                                    You’re looking at niches that C has kept, rather than all the uses that C lost to Java. C used to be the default for most applications, not only low-level and performance-critical ones.

                                    On mobile, where “Wintel” didn’t have a stronghold, J2ME crushed it, and delivered some portability across devices with vastly different (and crappy) hardware.

                                    1. 8

                                      “Become popular for the things C was popular for” is kind of an impossible standard to hold any language to. Back in the day when Java was new, C was popular for everything. I know I personally wrote or worked on multiple backend business-logic-heavy services in C that would have been much better fits for Java had it existed in mature form at the time.

                                      Even at the height of Java’s early hype, I can’t remember anyone credibly suggesting it would replace C outright.

                                      Write once, run everywhere is still valuable. My team develops on MacOS, Windows, and Linux, and we deploy to a mix of x86 servers and on-premises low-power ARM devices (think Raspberry Pi). The same JVM bytecode (Kotlin, not Java, in our case) works identically enough in all those environments that we haven’t felt any urge to push people toward a single OS for dev environments. We have far more headaches with OS incompatibilities in our Python code than in our Kotlin code, though admittedly we’re not doing the same stuff in both languages.

                                      1. 7

                                        This seems slightly ahistorical — before C’s niche was God of the Gaps-ed into “tiny performant chunks of operating systems where zero copy memory twiddling is critical” it was everywhere. Like, people were writing web applications in C. People were doing things with C (and the godforsaken C-with-objects style of C++) that nobody today would go near in an unmanaged language. It was Java that showed the way. And I am no fan of Java-the-language.

                                        1. 4

                                          Which is itself funny because everything good about Java was available elsewhere. The difference was the giant education campaign and campaign to embed the vm everywhere.

                                          1. 2

                                            Oh, I know.

                                        2. 2

                                            Now that we have AWS Graviton, I find Java people do have an easier time.

                                      2. 3

                                          I think that nobody can deny that Java has been widely successful.

                                          To say whether it lived up to the hype, we first have to define what the hype was. If I remember correctly, Java was first hyped for applets. Java’s successes have been elsewhere.

                                      1. 3

                                        The fact that more Bronze participants withdrew from the experiment makes the numbers hard to compare. But kudos for admitting it!

                                        Others here point out that they sacrifice thread safety for their Bronze approach. The non-GC code could be simplified similarly with some lines of unsafe code (which is a questionable approach, of course).

                                        1. 3

                                          I know someone who knows one of the investigators. They described the experiment as “How to torture 633 undergraduates for science”. Apparently it wasn’t much fun to participate in..

                                          1. 2

                                            Hm. I’m an undergrad, and it sounds like it would’ve been fun to participate in! (n=1)

                                            1. 1

                                              Do you know why?

                                              1. 1

                                                Probably just ‘cause most undergrads don’t need another 12 hours of work added to their life.

                                          1. 1

                                              And we are still on Kotlin 1.3, mostly because our code is not fully compatible.

                                              What is more, there are some breakages with kotest - if I remember correctly, there is no version working with both Kotlin 1.3 and 1.4.

                                              It sure doesn’t help that we are using experimental unsigned types (our fault) that have some binary incompatibility issues.

                                            1. 2

                                                I think Starlark is a very good pragmatic choice which might drive adoption.

                                                My dream would be a statically typed functional programming language for these kinds of things, though. Similar in simplicity to Elm, with nice auto-completion. Maybe lazy.

                                              1. 3

                                                It would be interesting to know WHY the compiler complains about unused type parameters in the first place. PhantomData seems like a strange workaround at first.
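
                                                  For what it’s worth, the usual minimal example (my own illustration, hypothetical type names): without the marker the compiler rejects the unused parameter, and PhantomData is the zero-sized field that records the “use” (plus variance and ownership information for the checker).

                                                      use std::marker::PhantomData;

                                                      // Rejected: "parameter `Unit` is never used" (E0392).
                                                      // struct Quantity<Unit> { raw: u64 }

                                                      struct Quantity<Unit> {
                                                          raw: u64,
                                                          _unit: PhantomData<Unit>,
                                                      }

                                                      struct Meters;
                                                      struct Seconds;

                                                      fn main() {
                                                          let distance = Quantity::<Meters> { raw: 5, _unit: PhantomData };
                                                          let time = Quantity::<Seconds> { raw: 2, _unit: PhantomData };
                                                          // The phantom parameter keeps the two from being mixed up at compile time.
                                                          println!("{} m over {} s", distance.raw, time.raw);
                                                      }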

                                                1. 2

                                                  I wish this was covered by anti competition law or other laws.

                                                  1. 7

                                                    The actual rejection comment of Hickson has a good point about DELETE: you usually don’t want a body.

                                                    But for PUT? Nothing except that “you wouldn’t want to PUT a form payload.” That’s a quite weak argument.

                                                    1. 10

                                                      It’s no problem to make a form that sends no body though…

                                                      1. 6

                                                        The spec says DELETE MAY have a body. And practically speaking, you’d always want a CSRF token anyway. I didn’t understand the PUT argument at all – not that it was weak, I simply didn’t understand what he was arguing – and posted another question in this thread about it.

                                                        1. 3

                                                          How is that a “good point”?

                                                          1. 1

                                                          I wanted to be somewhat generous, as in: if you write an API, many DELETE requests don’t have bodies.

                                                            But the sibling comment about the CSRF token is a good one.

                                                        1. 1

                                                          In addition, Apple doesn’t make an effort to provide the browser for testing on other platforms. So if people ask for Firefox support, that’s testable with reasonable effort and great dev tools. Asking for Safari support means asking to buy Apple hardware.

                                                          I hope that their one browser rule falls in court.

                                                          I really like some of Apple’s innovations, and Safari was once innovative. But their closed strategy (we create closed ecosystems fully under our control) means that they don’t get pressure to stay innovative in areas that fall out of favor with their management. Especially if they would rather push native apps.

                                                          1. 4

                                                            I hope that their one browser rule falls in court.

                                                            I found it fascinating that the OP had praise for the rule that WebKit is the only allowed rendering engine on iOS:

                                                            This paints a bleak picture. The one saving grace today is that Apple blocks use of any non-WebKit engine on iOS, which protects that one environment, and the iOS market (in the US at least) is large enough that this means Safari must be prioritized.

                                                            He sees it as a stopgap against the total domination of Blink. The viewpoints are kind of like, “Apple is a big bully” vs. “Apple is a big bully that is at least protecting us all from an even more harmful bully” in the form of Google.

                                                            1. 2

                                                              Apple doesn’t say this anywhere officially, but you basically can test on any WebKit browser like GNOME Web (Epiphany). You’ll even see the exact same devtools UI as Safari.

                                                              1. 1

                                                                Cool tip!

                                                            1. 58

                                                              Safari isn’t the new IE, Chrome is. From the market share, to features only available on Chrome, to developers writing for Chrome only. Some of the points on this article even clearly show that.

                                                              1. 22

                                                                The “Widely accepted standards” in the OP made me laugh…

                                                                When I file bug reports/support tickets to websites saying your website doesn’t work with X or Y browser, the answer I almost always get back is: “Use Chrome.” Occasionally (and more and more rarely) I’ll get back, OH right, we should fix that. Clearly nobody even bothers to test their stuff in the “other” browsers.

                                                                I keep filing tickets, anyway.

                                                                1. 2

                                                                  But that’s only part of what IE did.

                                                                  On the upside, the core of Chrome is open source. Enabling Microsoft Edge to basically just rebrand it is good, in my opinion, because if Chrome got bad, they could immediately increase pressure by doing some little things better. (Disclaimer: haven’t used Edge)

                                                                  What I mostly object to in Chrome is that Google pushed it so hard, and probably unfairly via its other businesses. And that is somewhat similar to what Microsoft did with IE and Windows.

                                                                  1. 4

                                                                    Most Google websites work way better in Chrome than in Firefox. And most of the time, that’s a decision from Google, not technical limitations in Firefox.

                                                                    • Google Meet lets you blur out your background. This feature only uses web features (like WebGPU) which are supported perfectly fine by Firefox - but it’s disabled if you’re in a non-Chrome browser. It used to be that you could just change your user agent in Firefox, and the feature would work perfectly, but then Google changed their browser sniffing methods and changing the UA string doesn’t work anymore.
                                                                    • YouTube uses (used?) a pre-standard version of the Shadow DOM standard, which is implemented in fast C++ in Chrome, but Firefox only implements the actual final Shadow DOM standard, so YouTube uses (used?) an extremely slow JavaScript polyfill for non-Google browsers.

                                                                    Those are only the cases I know of where Google explicitly sabotages Firefox through running different code paths based on the browser. Even when they’re not intentionally sabotaging Firefox, I’m certain that Google optimizes their websites exclusively for Chrome without caring about other browsers. Firefox and Chrome are both extremely fast browsers, but they’re fast and slow at different things - and Google will make sure to stay within what Chrome does well, without caring about what Firefox does well or poorly. Optimizing for non-Google browsers seems like something that’s extremely far down Google’s priority list.

                                                                1. 22

                                                                  I’m honestly appalled that such an ignorant article has been written by a former EU MEP. This article completely ignores the fact that the creation of Copilot’s model itself is a copyright infringement. You give Github a license to store and distribute your code from public repositories. You do not give Github permission to use it or create derivative works. And as Copilot’s model is created from various public code, it is a derivative of that code. Some may try to argue that training machine learning models is ‘fair use’, yet I doubt you can argue that something which can regurgitate the entire meaningful portion of a file (example taken from Github’s own public dataset of exact generated code collisions) is not a derivative work.

                                                                  1. 13

                                                                    In many jurisdictions, as noted in the article, the “right to read is the right to mine” - that is the point. There is already an automatic exemption from copyright law for the purposes of computational analysis, and GitHub don’t need to get that permission from you, as long as they have the legal right to read the code (i.e. they didn’t obtain it illegally).

                                                                    This appears to be the case in the EU and Britain - https://www.gov.uk/guidance/exceptions-to-copyright - I’m not sure about the US.

                                                                    Something is not a derivative work in copyright law simply due to having a work as an “input” - you cannot simply argue “it is derived from” therefore “it is a derivative work”, because copyright law, not English language, defines what a “derivative work” is.

                                                                    For example, Markov chain analysis done on SICP is not infringing.

                                                                    Obviously, there are limits to this argument. If Copilot regurgitates a significant portion verbatim, e.g. 200 LOC, is that a derivative? If it is 1,000 lines where not one line matches, but it is essentially the same with just variables renamed, is that a derivative work? etc. I think the problem is that existing law doesn’t properly anticipate the kind of machine learning we are talking about here.

                                                                    1. 3

                                                                      Dunno how it is in other countries, but in Lithuania I cannot find any exception that would fit what Github has done and allow using my works without my agreement. The closest one could be citation, but they do not comply with the requirement of mentioning my name and the work from which the citation is taken.

                                                                      I gave them the license to reproduce, not to use or modify - these are two entirely different things. If they weren’t, then Github has the ability to use all AGPL’d code hosted on it without any problems, and that’s obviously wrong.

                                                                      There is no separate “mining” clause. That is not a term in copyright. Notice how research is quite explicitly “non-commercial” - and I very much doubt that what Github is doing with Copilot is non-commercial in nature.

                                                                      The fact that similar works were done previously doesn’t mean that they were legal. They might have been ignored by the copyright owners, but this one quite obviously isn’t.

                                                                      1. 8

                                                                        There is no separate “mining” clause. That is not a term in copyright. Notice how research is quite explicitly “non-commercial” - and I very much doubt that what Github is doing with Copilot is non-commercial in nature.

                                                                        Ms. Reda is referring to a copyright reform adopted at the EU level in 2019. This reform entailed the DSM directive 2019/790, which is more commonly known for the regulations regarding upload filters. This directive contains a text and data mining copyright limitation in Art. 3 ff. The reason why you don’t see this limitation in Lithuanian law (yet) is probably that Lithuania has not yet transposed the DSM directive into its national law. This should probably follow soon, since Art. 29 mandates transposition into national law by June 29th, 2021. Germany has not yet completed the transposition either.

                                                                        That is, “text and data mining” now is a term in copyright. It is even legally defined on the EU level in Art. 2 Nr. 2 DSM directive.

                                                                        That being said, the text and data mining exception in Art. 3 ff. DSM directive does not – at first glance, I have only taken a cursory look – allow commercial use of the technique, but only permits research.

                                                                        1. 1

                                                                          Oh, huh, here it’s called an education and research exception and has been in law for way longer than that directive, and it doesn’t mention anything remotely translatable as mining. It didn’t even cross my mind that she could have been referring to that. I see that she pushed for that exception to be available for everyone, not only research and cultural heritage, but it is careless of her to mix up what she wants the law to be, and what the law is.

                                                                          Just as a preventative answer, no, Art 4. of DSM directive does not allow Github to do what it does either, as it applies to work that “has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online.”, and Github was free to get the content in an appropriate manner for machine learning. It is using the content for machine learning that infringes the code owners copyright.

                                                                        2. 5

                                                                          I gave them the license to reproduce, not to use or modify - these are two entirely different things. If they weren’t, then Github has the ability to use all AGPL’d code hosted on it without any problems, and that’s obviously wrong.

                                                                          An important thing is also that the copyright owner is often a different person than the one who signed a contract with GitHub and uploaded the code there (git commit vs. git push). The uploader might agree to whatever terms and conditions, but the copyright owner’s rights must not be disrupted in any way.

                                                                          1. 3

                                                                            Nobody is required to accept terms of a software license. If they don’t agree to the license terms, then they don’t get additional rights granted in the license, but it doesn’t take away rights granted by the copyright law by default.

                                                                            Even if you licensed your code under “I forbid you from even looking at this!!!”, I can still look at it, and copy portions of it, parody it, create transformative works, use it for educational purposes, etc., as permitted by copyright law exceptions (details vary from country to country, but the gist is the same).

                                                                        3. 10

                                                                          Ms. Reda is a member of the Pirate Party, which is primarily focused on the intersection of tech and copyright. She has a lot of experience working on copyright-related legislation, including proposals specifically about text mining. She’s been a voice of reason when the link tax and upload filters were proposed. She’s probably the copyright expert in the EU parliament.

                                                                          So be careful when you call her ignorant and mistaken about basics of copyright. She may have drafted the laws you’re trying to explain to her.

                                                                          1. 16

                                                                            It is precisely because of her credentials that I am so appalled. I cannot in good conscience find this statement anything but ignorant.

                                                                            The directive about text mining very explicitly specifies “only for “research institutions” and “for the purposes of scientific research”.” Github and its Copilot don’t fall into that classification at all.

                                                                            1. 3

                                                                              Indeed.

                                                                              Even though my opinion of Copilot is near-instant revulsion, the basic idea is that information and code is being used to train a machine learning system.

                                                                              This is analogous to a human reviewing and reading code, and learning how to do so from lots of examples. And someone going through higher ed school isn’t “owned” by the copyright owners of the books and code they read and review.

                                                                              If Copilot is violating, so are humans who read. And that… that’s a very disturbing and disgusting precedent that I hope we don’t set.

                                                                              1. 6

                                                                                Copilot doesn’t infringe, but GitHub does, when they distribute Copilot’s output. Analogously to humans, humans who read do not infringe, but they do when they distribute.

                                                                                1. 1

                                                                                  Why is it not the human that distributes copilots output?

                                                                                  1. 1

                                                                                    Because Copilot first had to deliver the code to the human. Across the Internet.

                                                                                2. 4

                                                                                  I don’t think that’s right. A human who learns doesn’t just parrot out pre-memorized code, and if they do they’re infringing on the copyright in that code.

                                                                                  1. 2

                                                                                    The real question, which I think people are missing, is: is learning itself a derivative work?

                                                                                    How that learning happens can either be with a human, or with a machine learning algorithm. And with the squishiness and lack of insight with human brains, a human can claim they insightfully invented it, even if it was derived. The ML we’re seeing here is doing a rudimentary version of what a human would do.

                                                                                    If Copilot is ‘violating’, then humans can also be ‘violating’. And I believe that is a dangerous path, laying IP based claims on humans because they read something.

                                                                                    And as I said upthread, as much as I have a kneejerk that Copilot is bad, I don’t see how it could be infringing without also doing the same to humans.

                                                                                    And as a underlying idea: copyright itself is a busted concept. It worked for the time before mechanical and electrical duplication took hold at a near 0 value. Now? Not so much.

                                                                                    1. 3

                                                                                      I don’t agree with you that humans and Copilot are learning somewhat the same.

                                                                                      The human may learn by rote memorization, but more likely, they are learning patterns and the why behind those patterns. Copilot also learns patterns, but there is no why in its “brain.” It is completely rote memorization of patterns.

                                                                                      The fact that humans learn the why is what makes us different and not infringing, while Copilot infringes.

                                                                                      1. 2

                                                                                        Computers learn syntax, humans learn syntax and semantics.

                                                                                        1. 1

                                                                                          Perfect way of putting it. Thank you.

                                                                                      2. 3

No, I don’t think that’s the real question. Copying is treated as an objective question (and I’m willing to be corrected by experts in copyright law), i.e. similarity, or the lack of it, determines copying regardless of intent to copy, unless the creation was independent.

                                                                                        But even if we address ourselves to that question, I don’t think machine learning is qualitatively similar to human learning. Shoving a bunch of data together into a numerical model to perform sequence prediction doesn’t equate to human invention, it’s a stochastic copying tool.

                                                                                    2. 3

                                                                                      It seems like it could be used to shirk the effort required for a clean room implementation. What if I trained the model on one and only one piece of code I didn’t like the license of, and then used the model to regurgitate it, can I then just stick my own license on it and claim it’s not derivative?

                                                                                    3. 2

“Ms. Reda is a member of the Pirate Party”

She left the Pirate Party years ago, after having installed a potential MEP “successor” who was unknown to almost everyone in the party; she subsequently published a video urging people not to vote for the Pirates because of him, as he was allegedly a sex offender (an allegation proven untrue months later).

                                                                                      1. 0

                                                                                        Why exactly do you think someone from the ‘pirate party’ would respect any sort of copyright? That sounds like they might be pretty biased against copyright…

                                                                                        1. 3

                                                                                          Despite a cheeky name, it’s a serious party. Check out their programme. Even if the party is biased against copyright monopolies, DRM, frivolous patents, etc. they still need expertise in how things work currently in order to effectively oppose them.

                                                                                      2. 4

                                                                                        Have you read the article?

She addresses these concerns directly. You might not agree with her, but your claim that she “ignores” this isn’t accurate.

                                                                                        1. 1

“And as Copilot’s model is created from various public code, it is a derivative of that code.”

That depends on the legal system. I don’t know what happens if I am based in Europe but the people doing this are in the USA; it probably just means that they can do whatever they want. The article makes a ton of claims about various legal aspects of all of this, but as far as I know Julia is not actually a lawyer, so I think we can ignore this article.

In Poland, maybe this could be considered a “derivative work”, but then a work which was merely “inspired” by the original is not covered (so maybe the output of the network is “inspired”?), and there is a separate section about databases, so maybe the model is a database under some strange reading? If you are not a lawyer, I doubt you can properly analyse this. The article tries to analyse the legal aspect and the moral aspect at the same time, while those are completely different things.

                                                                                        1. 2

In our company, we have started to generate our k8s resources with the language we use in our backends: Kotlin.

We check in the generated resources as YAML files; the YAML is applied with fluxcd.

This feels incredibly nice, e.g.:

• you can easily inspect the resulting resources,
• you can easily diff what you deploy,
• you can make quick emergency adjustments by editing the generated files directly (we haven’t needed that yet, but it’s nice to have),
• you can easily unit test your resources.
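The comment doesn’t say which library does the generation, so purely as a hedged sketch of the idea: the fabric8 Kubernetes model builders (an assumption on my part, as are the names, image, and output path below) make this pattern fairly compact, and the client’s Serialization helper can render the object to YAML for checking in.

```kotlin
// Hedged sketch: generate a Deployment in Kotlin and write it out as YAML.
// Assumes the fabric8 kubernetes-client model builders; the commenter's actual setup may differ.
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder
import io.fabric8.kubernetes.client.utils.Serialization
import java.io.File

fun main() {
    val deployment = DeploymentBuilder()
        .withNewMetadata()
            .withName("backend")
            .withNamespace("prod")
        .endMetadata()
        .withNewSpec()
            .withReplicas(3)
            .withNewSelector()
                .addToMatchLabels("app", "backend")
            .endSelector()
            .withNewTemplate()
                .withNewMetadata()
                    .addToLabels("app", "backend")
                .endMetadata()
                .withNewSpec()
                    .addNewContainer()
                        .withName("backend")
                        .withImage("registry.example.com/backend:1.2.3")
                    .endContainer()
                .endSpec()
            .endTemplate()
        .endSpec()
        .build()

    // Check the rendered YAML into the repo; flux (or kubectl apply) picks it up from there.
    File("manifests/backend-deployment.yaml").writeText(Serialization.asYaml(deployment))
}
```

Because the manifest is just a value in an ordinary program, the unit-testing and diffing points above fall out for free: assert on the object before rendering, and review the generated YAML like any other checked-in file.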
                                                                                          1. 4

                                                                                            In my experience, TCO is often relied on for correct behavior as opposed to just being bonus performance. That means the following is a pretty significant downside!

“But since it is only applied under certain circumstances, the downside is that when it is not applied, we won’t be made aware of it unless we check for it.”

Are there any plans to add some sort of syntax to indicate to the compiler that it should error if it can’t perform TCO? OCaml has @tailcall for this purpose and Scala has @tailrec, though in Scala’s case the annotation is a check rather than a request: the compiler already optimizes direct self-recursion, and @tailrec makes it an error when it can’t.

Also: how does Elm handle source maps for TCO functions? As I recall, increased debugging difficulty was one of the reasons V8 backed out of automatically doing TCE (and switched to backing explicit tail calls instead).
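(Not an Elm answer, but for a concrete picture of the opt-in style being asked about: Kotlin’s `tailrec` modifier behaves roughly like Scala’s @tailrec; the compiler rewrites the self-recursion into a loop and flags the function when the recursive call isn’t actually in tail position. The functions below are just made-up illustrations.)

```kotlin
// Opt-in tail-call elimination: `tailrec` asks the compiler to compile the
// self-recursion into a loop, and it flags recursive calls that are not tail calls.
tailrec fun gcd(a: Long, b: Long): Long =
    if (b == 0L) a else gcd(b, a % b)          // call in tail position: runs in constant stack

fun factorial(n: Long): Long =
    if (n <= 1L) 1L else n * factorial(n - 1)  // the multiply happens after the call returns,
                                               // so this is not a tail call and would be
                                               // flagged if marked `tailrec`
```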

                                                                                            1. 2

The article buries the lede, but it is exactly that: an announcement of a tool that checks Elm code for TCO.

                                                                                              1. 1

                                                                                                Maybe my coffee hasn’t fully kicked in yet, or maybe it’s been too long since I’ve programmed in a proper functional language, but how or when would TCO change behavior?

                                                                                                1. 4

                                                                                                  One example that comes to mind: TCO can be the difference between “calling this function with these arguments always returns the correct answer” and “calling this function with these arguments sometimes returns the correct answer, and sometimes crashes with a stack overflow.”
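A hedged sketch of that difference in Kotlin rather than Elm (the exact depth at which the non-tail version blows up depends on the runtime’s stack size):

```kotlin
// No tail call: the `+` runs after the recursive call returns, so every step
// keeps a stack frame. For large n this throws StackOverflowError.
fun sumTo(n: Long): Long =
    if (n == 0L) 0L else n + sumTo(n - 1)

// Same computation with the call in tail position; once the call is eliminated
// (via `tailrec` here), it runs in constant stack space for any n.
tailrec fun sumToAcc(n: Long, acc: Long = 0L): Long =
    if (n == 0L) acc else sumToAcc(n - 1, acc + n)

fun main() {
    println(sumToAcc(1_000_000L))    // fine: 500000500000
    // println(sumTo(1_000_000L))    // same arguments, but likely dies with StackOverflowError
}
```

Same inputs, same logic; whether the call gets eliminated decides between an answer and a crash, which is why it reads as behavior rather than just performance.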

                                                                                                  1. 1

                                                                                                    Put slightly differently, TCO makes some partial functions total.

                                                                                                    1. 3

                                                                                                      If running out of stack frames and/or stack memory counts as making a function partial, then does the OS possibly running out of memory mean that no functions are ever total?

                                                                                                      Right? Since “the stack” isn’t an explicit abstraction in most programming languages, I don’t think it’s quite correct/useful to say that a recursive function is partial when it can’t be TCO’d.

                                                                                                      1. 3

I don’t think it’s out of bounds to say that. It really depends on the conceptual model that your language is providing. For example, it seems to be an operating principle of Zig: every function in the stdlib that allocates takes an allocator, so you can handle out-of-memory conditions intelligently.

But, I get your point: it isn’t an explicit part of the conceptual model of most languages, so it’s shifting the window a bit to refer to non-TCO’d functions as partial. I think it’s a potentially useful perspective and, for what it’s worth, most languages don’t really describe their functions as total/partial anyway.

                                                                                                  2. 2

                                                                                                    Recursion in a language without TCO feels like a fool’s errand. Source: I tried to implement some recursive algorithms in C…. on 16-bit Windows. On a modern CPU, you can probably get away with recursion even if it eats stack, because you have virtual memory and a shitload of address space to recurse into. Not so much on a 286….

                                                                                                    1. 1

I definitely agree! I never, ever, write recursive code in a language that doesn’t have a way to at least opt in to recursion optimization.

                                                                                                      But to me, that’s still “performance” and not really “behavior”. But maybe I’m defining these things a little differently.

                                                                                                    2. 1

                                                                                                      Not sure if @harpocrates meant this but:

Often, if the recursion depth is large enough, the unoptimized version uses a lot of stack space, even potentially an unbounded amount, whereas the optimized version uses constant space.

                                                                                                      So the unoptimized version is not only slower but actually crashes if the stack is used up.

                                                                                                      1. 1

Hmm. I assumed that “bonus performance” would include memory usage. And I would’ve lumped the resulting stack overflow in with the “performance” concerns, but I guess I can see how that might actually be considered behavior, since the “optimized” version will complete and an unoptimized version might not.

It’s just weird, because I don’t think anybody would tell me that putting an extra private field that I never use on a class is a “behavior change”, even though it makes my class use more memory and will therefore OOM on some algorithms where it wouldn’t if I removed the field.

                                                                                                        1. 1

An additional private field in a class that ends up on the stack a million times might be similar, true. The heap is usually bigger, though (citation needed).

With recursive calls, you can use up a lot of memory for innocent-looking code that, e.g., just sums up a list of integers.

                                                                                                  1. 4

This is really exciting: for Android C++ FFI, Rust is a surprisingly good match. I honestly would have expected a larger share of the API to be problematic.

This is obviously dependent on the code base; the stats for the Chrome base library are worse.

I guess Android works better because its API is already designed to be consumed across an FFI boundary rather than as an internal library.