1. 11

    Thank you for the wonderful comments last week.

    I wrote an Earley parser. And a Pratt parser. The Pratt parser is what I’ve been looking for all this time: a modular recursive descent parser. What it lacks in formalism it makes up for in brevity and power-to-weight.

    Now, I need to choose a host language. I’d like to pick Rust, but I’m not sure it has a ready-made GC solution right now, and I don’t want to go down that rabbit hole. That leaves C++, JVM, or OTP. Any thoughts?

    1. 3

      What kind of language are you looking to interpret/execute? The three platforms you mention all have really different tradeoffs.

      1. 3

        A Lisp-esque language under the hood with a non-Lisp syntax on top. The idea is that the functional paradigm can subsume the other two big paradigms (imperative/logic). I can use the CEK machine for proper tail call handling, so that isn’t a requirement of the host. The big thing I’m looking for is a GC (whether library or built-in) and a host language I like that I can target it with.

      2. 2

        For Rust, you can wrap everything in an Rc (or an Arc if you have multiple threads); if you want tracing GC you can use this, if you just need epoch-style reclamation there’s crossbeam-epoch, and if you just need hazard pointers there’s conc. I’ve had a lot of success with crossbeam-epoch in lock-free systems I’ve built.

        1. 1

          Rc (and friends) would need cycle detection, no? Maybe the thing to do is just use Rc and do research on cycle-detection algorithms to see if they are hard or not.

          I looked at Epoch and hazard pointers and wasn’t sure if they were ok as a general GC. I need to do more reading. Thanks!

          1. 2

            Yeah, you can create memory leaks with Rc cycles in Rust. But this is rarely an issue in most use cases. Rust memory management can feel a little confusing at first, but cycles tend not to come up once you learn some different idioms for structuring things in non-cyclical ways.
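
            One idiom is worth sketching here (my own minimal example, not from any library): when two structures genuinely need to point at each other, make the back-reference a Weak, so the pair never owns itself and both sides are freed normally:

            ```rust
            use std::cell::RefCell;
            use std::rc::{Rc, Weak};

            struct Parent {
                children: RefCell<Vec<Rc<Child>>>,
            }

            struct Child {
                parent: RefCell<Weak<Parent>>,
            }

            // Build a parent <-> child pair where the back-edge is Weak, then
            // drop the parent; returns (parent_alive_before, parent_alive_after).
            fn demo() -> (bool, bool) {
                let parent = Rc::new(Parent { children: RefCell::new(Vec::new()) });
                let child = Rc::new(Child { parent: RefCell::new(Rc::downgrade(&parent)) });
                parent.children.borrow_mut().push(Rc::clone(&child));

                let before = child.parent.borrow().upgrade().is_some();
                drop(parent); // no owning cycle, so the parent really is freed here
                let after = child.parent.borrow().upgrade().is_some();
                (before, after)
            }

            fn main() {
                assert_eq!(demo(), (true, false));
            }
            ```

            Weak::upgrade returning None is also a cheap way to confirm the parent was actually reclaimed, not leaked.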

            For example, if you want to build a DAG, you can quickly implement it with a HashMap from ID to Node, where ID is some monotonic counter that you maintain. Each Node can contain Vecs of incoming and outgoing edges. You can implement your own Rc-like scheme that tracks the sum of indegree and outdegree, and when it reaches 0, you just remove the Node from the containing HashMap.

            For the cases where performance or concurrency concerns rule out this approach (which are rare, and should not be pursued until this is measured to be a bottleneck), you can always write Rust like C with unsafe pointers: Box::into_raw, dereferencing inside unsafe blocks, and freeing by calling Box::from_raw (actually calling drop() on that if you want to be explicit about what’s happening, but it will be dropped implicitly when it goes out of scope).

            Use mutexes on shared state until… basically always, but if you REALLY want to go lock-free, that’s when you can benefit from things like crossbeam-epoch to handle freeing of memory that has been detached from mutable shared state but may still be in use by another thread.
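
            Concretely, the HashMap-based DAG might look something like this (a rough sketch; Graph, Node, and the degree-based cleanup are my own names and design, not a library API):

            ```rust
            use std::collections::HashMap;

            type NodeId = u64;

            #[derive(Default)]
            struct Node {
                incoming: Vec<NodeId>,
                outgoing: Vec<NodeId>,
            }

            #[derive(Default)]
            struct Graph {
                next_id: NodeId,              // monotonic counter for fresh IDs
                nodes: HashMap<NodeId, Node>, // single owner of every node
            }

            impl Graph {
                fn add_node(&mut self) -> NodeId {
                    let id = self.next_id;
                    self.next_id += 1;
                    self.nodes.insert(id, Node::default());
                    id
                }

                fn add_edge(&mut self, from: NodeId, to: NodeId) {
                    self.nodes.get_mut(&from).unwrap().outgoing.push(to);
                    self.nodes.get_mut(&to).unwrap().incoming.push(from);
                }

                // Degree-based reclamation: when a node's indegree + outdegree
                // reaches 0, removing it from the map frees it.
                fn remove_edge(&mut self, from: NodeId, to: NodeId) {
                    self.nodes.get_mut(&from).unwrap().outgoing.retain(|&n| n != to);
                    self.nodes.get_mut(&to).unwrap().incoming.retain(|&n| n != from);
                    for id in [from, to] {
                        if let Some(n) = self.nodes.get(&id) {
                            if n.incoming.is_empty() && n.outgoing.is_empty() {
                                self.nodes.remove(&id);
                            }
                        }
                    }
                }
            }

            fn main() {
                let mut g = Graph::default();
                let (a, b) = (g.add_node(), g.add_node());
                g.add_edge(a, b);
                assert_eq!(g.nodes.len(), 2);
                g.remove_edge(a, b);
                assert_eq!(g.nodes.len(), 0); // both nodes reclaimed
            }
            ```

            Because the map is the single owner, there are no Rc cycles to think about; removing an entry is the entire “collection” step.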

            Feel free to shoot me an email if you’re curious about how something can be done in Rust! I know it can be overwhelming when you’re starting to build things in it, and I’m happy to help newcomers get past the things I banged my head against the wall for days trying to learn :)

        2. 2

          FWIW, many languages written in C or C++ use arenas to hold the nodes that result from parsing. For example, CPython uses this strategy; I’m pretty sure V8 does too. Rather than managing each node individually, which is a large load on the memory allocator/garbage collector, you put them all in one big arena and then free them all at once.
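
          In Rust the same strategy needs no GC at all (a sketch; crates like typed-arena do this more carefully): nodes refer to each other by index into a Vec-backed arena, and dropping the arena frees the whole parse tree in one shot:

          ```rust
          // A Vec-backed arena: the arena owns every node, and references
          // between nodes are plain indices rather than pointers.
          struct Arena {
              nodes: Vec<Expr>,
          }

          enum Expr {
              Num(i64),
              Add(usize, usize), // indices into the arena
          }

          impl Arena {
              fn new() -> Self {
                  Arena { nodes: Vec::new() }
              }

              fn alloc(&mut self, e: Expr) -> usize {
                  self.nodes.push(e);
                  self.nodes.len() - 1
              }

              fn eval(&self, id: usize) -> i64 {
                  match self.nodes[id] {
                      Expr::Num(n) => n,
                      Expr::Add(l, r) => self.eval(l) + self.eval(r),
                  }
              }
          }

          fn main() {
              let mut arena = Arena::new();
              let one = arena.alloc(Expr::Num(1));
              let two = arena.alloc(Expr::Num(2));
              let sum = arena.alloc(Expr::Add(one, two));
              assert_eq!(arena.eval(sum), 3);
          } // arena dropped here: every node freed at once
          ```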

          1. 2

            Save the earth, use C++ or OTP

            1. 1

              You also have Go and .NET Core as possible host runtimes.

              1. 1

                What about Nim? It seems to be a memory-safe language with a low-latency GC and macros, and it compiles to C. I mean, the Schemes are ideal if you’re doing language building with a Lisp underneath, since they start that way.

              1. 1

                It sounds like what you’re describing is similar to how interface types work in Go, although they’re polymorphic only over methods, not fields like in your example. If you have a “struct” (these are C++’s structs, not C’s structs; they have a vtable attached) like

                type Foo struct {
                    Name string
                }
                

                And a getter:

                func (f Foo) GetName() string {
                    return f.Name
                }
                

                Then you can call this function with a value of type Foo:

                func GetNameFromAnything(thing interface { GetName() string }) string {
                    return thing.GetName()
                }
                

                Like this:

                func main() {
                    f := Foo{Name: "maxhallinan"}
                    fmt.Printf("%s\n", GetNameFromAnything(f))
                }
                

                Go is almost statically duck-typed; you can, in-line, say “this method takes anything that can give me its name,” et voilà. The biggest downside, I think, and the biggest difference between what you asked in your question and Go, is that access to these members is mediated through a vtable. Because vtables only contain methods in Go, you have to write getters for the things you want to be able to access through interfaces, so there’s no true way to say “this method takes anything that has a field called Name on it”.

                1. 12

                  Downloading and installing (even signed) packages over unencrypted channels also allows an attacker who can inspect traffic to take an inventory of the installed software on the system. An attacker could use that to his/her advantage, knowing which software is installed and which vulnerabilities it has. The attacker then has the exact binary and can replicate the entire system, tailoring exploits to the inventory on the target system.

                  1. 21

                    They cover this in the linked page; they claim there’s such a small number of packages that merely knowing the length of the ciphertext (which, of course, HTTPS can’t hide) is enough to reliably determine which package is being transmitted.

                    Perhaps doing it over HTTP2, so you get both encryption and pipelining, would get you sufficient obfuscation, but HTTPS alone doesn’t.

                    1. 2

                      I’m not sure how HTTP2 helps. You can still generally look at traffic bursts and get an idea of how much was transferred. You’d have to generate blinding packets to hide the actual amount of traffic being transferred, effectively padding out downloads to the size of the largest packages.

                      1. 2

                        But figuring out which packages were downloaded would require solving the knapsack problem, right? Instead of getting N requests of length k_i, you get one request of length \sum k_i. Although, now that I think about it, the number of packages that you download at once is probably small enough for it to be tractable for a motivated attacker.
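
                        For a small repository the brute force really is trivial; here’s a sanity-check sketch (the package sizes are invented) that recovers which subset of packages matches an observed transfer size:

                        ```rust
                        // Brute force: which subsets of known package sizes sum to the
                        // observed total transfer size? (Hypothetical sizes in bytes.)
                        fn matching_subsets(sizes: &[u64], observed: u64) -> Vec<Vec<usize>> {
                            let mut found = Vec::new();
                            for mask in 0u32..(1 << sizes.len()) {
                                let total: u64 = sizes
                                    .iter()
                                    .enumerate()
                                    .filter(|(i, _)| mask & (1 << i) != 0)
                                    .map(|(_, s)| *s)
                                    .sum();
                                if total == observed {
                                    found.push((0..sizes.len()).filter(|i| mask & (1 << i) != 0).collect());
                                }
                            }
                            found
                        }

                        fn main() {
                            let sizes = [10_240, 204_800, 1_048_576, 52_428_800];
                            // An eavesdropper sees one pipelined transfer of this many bytes:
                            let observed = 204_800 + 1_048_576;
                            let candidates = matching_subsets(&sizes, observed);
                            // With few, distinct package sizes, the subset is often unique.
                            assert_eq!(candidates, vec![vec![1, 2]]);
                        }
                        ```

                        With real repositories there would be more collisions, but the point stands that a small download set leaves little ambiguity.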

                        Padding is an interesting possibility but I think some of the texlive downloads are >1GB; that’s a pretty rough price to pay to download vi or whatever.

                      2. 1

                        True. Given that each package download is its own connection, it wouldn’t be too difficult for an attacker to deduce which package is being downloaded given the size of the transmitted encrypted data. The attacker would need to keep a full mirror of the package repo (disk space is cheap, so plausible). I wonder if the same would apply to package repos served over Tor Onion Services.

                    1. 18

                      No images and yet better content than 95% of the medium posts I’ve read in the last few months.

                      1. 3

                        But memes are fun! Are you anti-fun?

                        1. 14

                          If memes are fun, I’m anti-fun

                          1. 1

                            I don’t mind the occasional relevant comic inserted into the text, but if you have to put a meme between every line then you need to stop.

                        1. 1

                          At work, I’m still trying to debug weird crashes in an open-source geometry kernel (vcglib), which is proving to be extremely tedious. Thank god for address sanitizer, at least.

                          Outside of work, continuing trying to get my sourdough down pat and consistent, as well as continuing work on my large-scale vertical pen plotter. This week, I’m trying to come up with the Jacobian that relates the lengths of the two chains it hangs from to its xy-position on the wall, so that I can start doing path planning in chain-length-space.
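
                          Since each chain length is just the Euclidean distance from its anchor to the pen, the Jacobian has a closed form: each row is the unit vector from that anchor to the pen. A quick numerical sketch (the anchor coordinates are made up):

                          ```rust
                          // Chain lengths for a pen at p hanging from anchors a1 and a2.
                          fn lengths(p: (f64, f64), a1: (f64, f64), a2: (f64, f64)) -> (f64, f64) {
                              let d = |a: (f64, f64)| ((p.0 - a.0).powi(2) + (p.1 - a.1).powi(2)).sqrt();
                              (d(a1), d(a2))
                          }

                          // Jacobian d(l1, l2)/d(x, y): since l_i = |p - a_i|, row i is
                          // the unit vector pointing from anchor i toward the pen.
                          fn jacobian(p: (f64, f64), a1: (f64, f64), a2: (f64, f64)) -> [[f64; 2]; 2] {
                              let (l1, l2) = lengths(p, a1, a2);
                              [
                                  [(p.0 - a1.0) / l1, (p.1 - a1.1) / l1],
                                  [(p.0 - a2.0) / l2, (p.1 - a2.1) / l2],
                              ]
                          }

                          fn main() {
                              // Hypothetical anchors 10 units apart, pen below the midpoint.
                              let (a1, a2) = ((0.0, 0.0), (10.0, 0.0));
                              let p = (5.0, 3.0);
                              let j = jacobian(p, a1, a2);

                              // Finite-difference check of dl1/dx against the analytic entry.
                              let eps = 1e-6;
                              let (hi, _) = lengths((p.0 + eps, p.1), a1, a2);
                              let (lo, _) = lengths((p.0 - eps, p.1), a1, a2);
                              assert!((j[0][0] - (hi - lo) / (2.0 * eps)).abs() < 1e-5);
                          }
                          ```

                          Inverting J (trivial, since it’s 2×2) then maps chain-space velocities back to pen-space ones, which is what planning in chain-length-space needs.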

                          1. 2

                            I’ve been working on a motion controller for a giant vertical pen plotter (with a work area that tops out at around 400m^2), and I had been trying to get everything working with a BeagleBone Black, but last week I tried to bring up the board only to discover that the BBB is a pain in the butt and there are undocumented restrictions on how certain pins on the board can be used.

                            After all of the frustrations from last week, this week I’ve decided to scrap the BBB entirely and try to do a design with the ESP32, which I’ve been meaning to try out for a bit anyway. I finished the schematic yesterday, parts should be here by the end of the week and I can hopefully bring up the board and get some steppers stepping over the weekend!

                            1. 4

                              For the next couple of weeks I’m going to continue to hack on the Thymio, Aseba, and Jetson stuff I have for my Master’s.

                              I’m working with one other person on an (academic) year-long project to implement SLAM to map unfamiliar environments, and then use evolutionary algorithms to identify the best strategy to achieve various goals. Since the bot hardware is cheap and cheerful, the main challenges we’re facing are developing any kind of useful SLAM output, and navigating through it. For simulating and developing improved strategies, the plan is to hand off to the Jetson strapped on top and use the GPGPU cores (OpenCL) to be quicker than the PIC device in the bot itself.

                              1. 1

                                Cool projects. I don’t know what SLAM is in this context. Turning it into a link would be helpful.

                                1. 3

                                  They’re probably talking about Simultaneous Localization and Mapping.

                                  1. 1

                                    Oh, OK. That makes sense. Thanks.

                              1. 5

                                Lenses are great fun, glad to see they’re making their way to more communities :)

                                1. 1

                                  I’ve been meaning to look into lenses for quite some time. They seem to be useful when working with Redux. But when looking into how to implement them, there are a lot of varieties of lenses. Do you know any good explanation of the theory behind lenses?

                                  1. 8

                                    There are a few theories, the nicest one is as follows: a value of type Lens s a is a demonstration that values of type s can be split into (x, a) for some unknown type a. If we talk about Iso a b which says that a and b are the same type then Lens s a = exists x. Iso s (x, a).

                                    Another way to look at it is that a lens pairs a getter and a setter, so we can say Lens s a is the same as { get :: s -> a, set :: a -> s -> s }.
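
                                    To make that getter/setter reading concrete outside Haskell, here’s a minimal sketch in Rust (the Lens struct and all the names are mine, not a library’s):

                                    ```rust
                                    // A lens as a getter/setter pair: Lens<S, A> focuses an A inside an S.
                                    struct Lens<S, A> {
                                        get: fn(&S) -> A,
                                        set: fn(A, &S) -> S,
                                    }

                                    #[derive(Clone, Debug, PartialEq)]
                                    struct User {
                                        name: String,
                                        age: u32,
                                    }

                                    fn age_lens() -> Lens<User, u32> {
                                        Lens {
                                            get: |u| u.age,
                                            set: |a, u| User { age: a, ..u.clone() },
                                        }
                                    }

                                    fn main() {
                                        let lens = age_lens();
                                        let u = User { name: "pat".into(), age: 30 };

                                        // "if you set and then get you'll get what you just set"
                                        let u2 = (lens.set)(31, &u);
                                        assert_eq!((lens.get)(&u2), 31);

                                        // "if you set what you just got it's the same as doing nothing"
                                        assert_eq!((lens.set)((lens.get)(&u), &u), u);
                                    }
                                    ```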

                                    All of these constructions have to follow some rules (the intuitive ones) like “if you set and then get you’ll get what you just set” also “if you set and then set it’s the same as just setting that second time” and also “if you set what you just got it’s the same as doing nothing at all”.

                                    If you consider the construction a | b to mean “a or b, and I know which one” so that Int | Int is distinct from Int then we can consider a thing called a Prism s a which, parallel to above, can be considered the same as Prism s a = exists x . Iso s (x | a). In other words, when you’ve got a Prism s a you’ve got evidence for how to see s as either an a or something else.

                                    Prisms aren’t really getter-setter pairs, but they’re something close. You need to be able to always convert an a directly into an s, but also maybe pull an a out (unless it’s the other thing). So, we say Prism s a is the same as { retract :: a -> s, extract :: s -> Maybe a } where Maybe means you might fail to produce that a sometimes.

                                    This one is really a foreign concept. Here’s an example: there’s a Prism String Int which is a printer/parser pair for integers. In other words, we consider String to be equivalent to x | Int where x stands in for “all of the strings which fail to parse as an int”. So retract just prints an Int out and extract is our (potentially failing) parser. Prisms are printer/parser pairs.
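
                                    The printer/parser example translates almost directly; a sketch in the same style (Prism and int_prism are invented names):

                                    ```rust
                                    // A prism as a retract/extract pair: Prism<S, A> says an S is
                                    // "either an A or something else".
                                    struct Prism<S, A> {
                                        retract: fn(A) -> S,
                                        extract: fn(&S) -> Option<A>,
                                    }

                                    // The Prism String Int from above: printing always succeeds,
                                    // parsing may fail.
                                    fn int_prism() -> Prism<String, i64> {
                                        Prism {
                                            retract: |n| n.to_string(),
                                            extract: |s| s.parse().ok(),
                                        }
                                    }

                                    fn main() {
                                        let p = int_prism();
                                        assert_eq!((p.extract)(&"42".to_string()), Some(42));
                                        assert_eq!((p.extract)(&"not an int".to_string()), None);
                                        // extract after retract always recovers the original value
                                        assert_eq!((p.extract)(&(p.retract)(7)), Some(7));
                                    }
                                    ```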

                                    All of these things can combine with one another. There are interesting theories around “Profunctor and/or Functor transformations” which make all of the compositions fall out as natural consequences. It’s good for implementers but only so useful for just understanding the thing.

                                    There are also “type changing” lenses/prisms which let you handle generic/templated types well and more exotic “optics” like traversals and folds. All of these things continue to compose with one another nicely and fit into the same universe of ideas. It’s really quite rich!

                                    1. 2

                                      This is the best, most intuitive description of lenses I’ve ever seen. Thank you!

                                      1. 1

                                        I’m glad!

                                      2. 1

                                        In the first sentence: “… some unknown type x” not “a”. Whoops!

                                  1. 6

                                    GNU Autotools: just kill this horrific pile of garbage with fire. Especially terrible when libtool is used. Related: classic PHK rant.

                                    CMake: slightly weird language (at least a real language which is miles ahead of autocraptools), bad documentation.

                                    Meson: somewhat inflexible (you can’t even set global options like b_lundef conditionally in the script!) but mostly great.

                                    GYP: JSON files with conditions as strings?! Are you serious?

                                    Gradle: rather slow and heavy, and the structure/API seems pretty complex.

                                    Bazel/Buck/Pants (nearly the same thing): huge mega build systems for multiple languages that take over everything, often with little respect for these languages’ build/package ecosystems. Does anyone outside Googlefacetwitter care about this?

                                    Grunt, Rake, many others: good task runners, but they’re not build systems. Do not use them to build.

                                    1. 6

                                      Related: classic PHK rant.

                                      This one is even better, since its observations apply to even more FOSS than libtool. It also has some laughable details on that, along with (IIRC) the person who wrote libtool apologizing in the comments.

                                      1. 3

                                        I recalled that too, but it was David MacKenzie of Autoconf who popped up to apologize.

                                        1. 1

                                          Oh OK. Thanks for the correction. At least one owned up to their mess. :)

                                      2. 3

                                        FWIW: bazelbuckpants seem to be written for the proprietary software world: a place where people are hesitant to depend on open-source dependencies in general, and people have a real fear (maybe fear is strong, but still) of their dependencies and environment breaking their build. I use them when I’m consulting, because I can be relatively certain that the build will be exactly the same in a year or so and I don’t like having to fix compilation errors in software I wrote a year ago.

                                        1. 2

                                          I’m with you on Grunt, but Rake is actually a build tool with Make-style rules and recipes for building and rebuilding files when their dependencies change. There’s a case that Rake is just Make ported to Ruby syntax. It’s just more commonly used as a basic task runner.

                                          https://ruby.github.io/rake/doc/rakefile_rdoc.html

                                          1. 1

                                            I think Make is also somewhat close to a task runner. It has dependencies, but not much else. You write compiler invocations manually…

                                            1. 1

                                              It sort of has default rules for building a number of languages, though these aren’t terribly helpful anymore.

                                              I also use Make as task runner. Mostly to execute the actual build system, because everybody knows how to run make and most relevant systems probably have Make installed in one form or another.

                                          2. 1

                                            We use Pants here at Square, in our Java monorepo. It works quite nicely, actually. For our Go monorepo, we just use standard Go tooling, but I’ve volunteered to convert to Pants if anyone can get everyone to move to a single monorepo. They won’t, because every Rails project has its own repo, and the Rails folks like it that way.

                                          1. 2

                                              A competent CPU engineer would fix this by making sure speculation doesn’t happen across protection domains. Maybe even an L1 I$ that is keyed by CPL.

                                            I feel like Linus of all people should be experienced enough to know that you shouldn’t be making assumptions about complex fields you’re not an expert in.

                                            1. 22

                                                To be fair, Linus worked at a CPU company, Transmeta, from about ’96 to ’03 (??), and reportedly worked on, drumroll, the Crusoe’s code-morphing software, which speculatively morphs code written for other CPUs, live, to the Crusoe instruction set.

                                              1. 4

                                                My original statement is pretty darn wrong then!

                                                1. 13

                                                  You were just speculating. No harm in that.

                                              2. 15

                                                To be fair to him, he’s describing the reason AMD processors aren’t vulnerable to the same kernel attacks.

                                                1. 1

                                                  I thought AMD were found to be vulnerable to the same attacks. Where did you read they weren’t?

                                                  1. 17

                                                    AMD processors have the same flaw (that speculative execution can lead to information leakage through cache timings) but the impact is way less severe because the cache is protection-level-aware. On AMD, you can use Spectre to read any memory in your own process, which is still bad for things like web browsers (now javascript can bust through its sandbox) but you can’t read from kernel memory, because of the mitigation that Linus is describing. On Intel processors, you can read from both your memory and the kernel’s memory using this attack.

                                                    1. 0

                                                        Basically, both will need the patch, which I presume will lead to the same slowdown.

                                                      1. 9

                                                        I don’t think AMD needs the separate address space for kernel patch (KAISER) which is responsible for the slowdown.

                                                2. 12

                                                  Linus worked for a CPU manufacturer (Transmeta). He also writes an operating system that interfaces with multiple chips. He is pretty darn close to an expert in this complex field.

                                                  1. 3

                                                    I think this statement is correct. As I understand it, part of the problem in Meltdown is that a transient code path can load a page into the cache before page access permissions are checked. See the Meltdown paper.

                                                    1. 3

                                                      The fact that he is correct doesn’t prove that a competent CPU engineer would agree. I mean, Linus is (to the best of my knowledge) not a CPU engineer, so he’s probably wrong when it comes to grasping all the constraints of the field.

                                                      1. 4

                                                        So? This problem is not quantum physics; it has to do with a well-known mechanism in CPU design that is understood by good kernel engineers, and it is a problem that AMD and Via both avoided with the same instruction set.

                                                        1. 3

                                                          Not a CPU engineer, but see my direct response to the OP, which shows that Linus has direct experience with CPUs, from his tenure at Transmeta, a defunct CPU company.

                                                          1. 5

                                                            from his tenure at Transmeta, a defunct CPU company.

                                                            Exactly. A company whose innovative CPUs didn’t meet the market’s needs and were shelved on acquisition. What he learned at a company making unmarketable, lower-performance products might not tell him much about constraints Intel faces.

                                                            1. 11

                                                              What he learned at a company making unmarketable, lower-performance products might not tell him much about constraints Intel faces.

                                                              This is a bit of a logical stretch. Quite frankly, Intel took a gamble with speculative execution and lost. The first several years were full of errata for genuine bugs, and now we finally have a userland-exploitable issue with it. Often security and performance are at odds. Security engineers often examine / fuzz interfaces looking for things that cause state changes. While the instruction execution state was not committed, the cache state change was. I truly hope Intel engineers will now question all the state changes that happen due to speculative execution. This is Linus’ bluntly worded point.

                                                              1. 3

                                                                (At @apg too)

                                                                My main comment shows consumers didn’t pay for more secure CPUs. So, that’s not really a market requirement, even if it might prevent costly mistakes later. Their goal was making things go faster over time with acceptable watts despite poorly-written code from humans or compilers, while remaining backwards compatible with locked-in customers running worse, weirder code. So, that’s what they thought would maximize profit. That’s what they executed on.

                                                                We can test if they made a mistake by getting a list of x86 vendors sorted by revenues and market share. (Looks.) Intel is still a mega corporation dominating in x86. They achieved their primary goal. A secondary goal is no liabilities dislodging them from that. These attacks will only be a failure for them if AMD gets a huge chunk of their market like they did beating them to proper 64-bit when Intel/HP made the Itanium mistake.

                                                                Bad security is only a mistake for these companies when it severely disrupts their business objectives. In the past, bad security was a great idea. Right now, it mostly works, with the equation maybe shifting a bit in the future as breakers start focusing on hardware flaws. It’s sort of an unknown for these recent flaws. It all depends on mitigations and how many of those who replace CPUs will stop buying Intel.

                                                              2. 3

                                                                A company whose innovative CPUs didn’t meet the market’s needs and were shelved on acquisition.

                                                                Tons of products over the years have failed based simply on timing. So, yeah, it didn’t meet the market demand then. I’m curious about what they could have done in the 10+ years after they called it quits.

                                                                might not tell him much about constraints Intel faces.

                                                                I haven’t seen confirmation of this, but there’s speculation that these bugs could affect CPUs as far back as the Pentium II from the ’90s…

                                                            2. 1

                                                              The fact that he is correct doesn’t prove that a competent CPU engineer would agree.

                                                              Can you expand on this? I’m having trouble making sense of it. Agree with what?

                                                        1. 3

                                                          AMD claims “zero vulnerability due to AMD architecture differences”, but without any explanation. Could someone enlighten us about this?

                                                          1. 10

                                                            AMD’s inability to generate positive PR from this is really an incredible accomplishment for their fabled PR department.

                                                            1. 7

                                                              The spectre PoC linked elsewhere in this thread works perfectly on my Ryzen 5. From my reading, it sounds like AMD processors aren’t susceptible to userspace reading kernelspace because the cache is in some sense protection-level-aware, but the speculative-execution, cache-timing one-two punch still works.

                                                              1. 4

                                                                 From reading the Google paper on this, it’s not quite true but not quite false. According to Google, AMD and ARM are vulnerable to a specific, limited form of Spectre. They’re not susceptible to Meltdown. The Google Spectre PoCs for AMD and ARM aren’t successful in accessing beyond the user’s memory space, so it’s thought that while the problem exists in some form, it doesn’t lead to compromise as far as we currently know.

                                                                1. 2

                                                                  aren’t successful in accessing beyond the user’s memory space so … it doesn’t lead to compromise as far as we currently know.

                                                                  Well, no compromise in the sense of breaking virtualization boundaries or OS-level protection boundaries, but still pretty worrying for compromising sandboxes that are entirely in one user’s memory space, like those in browsers.

                                                                2. 4

                                                                  I just found this in a Linux kernel commit:

                                                                  AMD processors are not subject to the types of attacks that the kernel page table isolation feature protects against. The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault.

                                                                  1. 4

                                                                    Which is a much stronger statement than in the AMD web PR story. Given that it is AMD, I would not be surprised if their design does not have the problem but their PR is unable to make that clear.

                                                                  2. 2

                                                                    AMD is not vulnerable to Meltdown, an Intel-specific attack.

                                                                    AMD (and ARM, and essentially anything with a speculative execution engine on the planet) is vulnerable to Spectre.

                                                                  1. 4

                                                                     Seems like this is not an ARM or an AMD bug. If so, good news for them, and a second, even bigger wake-up call for Intel after the management-processor debacle.

                                                                    1. 2

                                                                      How do you judge ARM unaffected? I saw the patch regarding AMD but there is a diff regarding ARM floating around that could be tied to this: https://lwn.net/Articles/740393/

                                                                      1. 1

                                                                        It sounds like ARM is affected, but the impact is not as severe: http://lists.infradead.org/pipermail/linux-arm-kernel/2017-November/542751.html

                                                                         Their benchmarks say that syscalls roughly doubled in cost, but unlike with the Intel bug, the cache remains intact. The Intel bug is particularly bad because the TLB has to be fully flushed on each userspace/kernel transition.

                                                                        1. 3

                                                                          A bit nitpicky, but my read of that is that the bug itself is equally present on ARM as on Intel (unlike AMD, which isn’t affected), but due to ARM’s virtual memory design it’s possible to implement the workaround (PTI) with less of a performance hit. Which is a better outcome for ARM, but more like luck than better QA, since those architectural features on ARM weren’t designed for the purpose of implementing something like PTI, they just happen to be useful for it.

                                                                          1. 1

                                                                            Ah, you’re right, where I said “bug” I meant “bugfix”.

                                                                      1. 3

                                                                        here

                                                                        He is so “crazy” that one of his former colleagues has a totem that they use to mock him in his absence? Fascinating.

                                                                        1. 4

                                                                          I would just keep in mind that Michael is a member of this community when making comments like this.

                                                                          1. 2

                                                                            I think being skeptical is fine, and perhaps even warranted. However, the top level link /seems/ like a fairly reasonable read to me. Judge it on its content.

                                                                            1. 5

                                                                              I think the problem is he speaks pretty authoritatively despite his expertise being based on just his experiences, or his perception of his experiences. It sounds good, but a lot of things sound good and are only occasionally true, not always true.

                                                                              I used to think he was just idiosyncratic til I had an experience that contradicted his claims, and then he just said “wait til you enter the real world.” I’m actually a few years older than him I believe. He’s incapable of imagining that things may be different. Even if he were right, it’s a very rigid view that doesn’t account for contrary evidence. I’m wary of trying to learn anything from people like that.

                                                                            2. 2

                                                                              He showed himself to be pretty out there at Google, when he rage-quit with a particularly nutty letter to the entire company after not getting a promotion. Lots of bits of that letter were memes when I left Google (“I have T7-9 vision!”).

                                                                              1. 2

                                                                                He showed himself to be pretty out there at Google, when he rage-quit with a particularly nutty letter to the entire company after not getting a promotion.

                                                                                It wasn’t about not getting a promotion. I was marked down in “Perf” for speaking up about an impending product failure. (Naively, I thought that pointing out the problem would be enough to get it fixed. It was obvious to me what was about to go wrong– and I was later proven right– but I lacked insight into how to convince anyone.) I found out years later that I was put on a suspected unionist list. Needless to say, the whole experience was traumatic. There’s a lot that I haven’t talked about.

                                                                                The mailing list activity… I’m embarrassed by that. I did not handle the stress well.

                                                                                Lots of bits of that letter were memes when I left Google (“I have T7-9 vision!”).

                                                                                Isn’t it a sign of success, if people are talking about your mistakes several years later?

                                                                              2. 1

                                                                                Personally I think Michael O Church is a genius, but I’m keenly aware that there’s a fine line between genius and madness. /u/churchomichael is not Michael O Church but seems to be another very intelligent writer, without the anger or the interest in national and international politics.

                                                                                1. 1

                                                                                  doing some digging he seems….. crazy.

                                                                                  I’ve had a lot of difficult experiences, some related to the political exposure that comes from being outspoken in a treacherous industry. I’ve needed treatment for some of the after-effects.

                                                                                  Like, he got banned from Hacker News, and also Wikipedia.

                                                                                  And Quora, too! Wikipedia I actually deserved; that was 2006 and I was being a jerk. The Quora ban was specifically requested by Y Combinator after they bought it.

                                                                                  He just seems to spend an insane amount of time writing ranty comments/articles/etc online and not much else.

                                                                                  It’s not that much of my time.

                                                                                  See /u/churchomichael

                                                                                  That’s not me. I’m as surprised as you are that someone would name his account in homage to me. There are also Reddit accounts (and even a subreddit!) that exist to mock me.

                                                                                  Dude just seems to want to complain.

                                                                                  No, I’d like to fix things, but the odds of that are very, very poor.

                                                                                  He has 45 suspected sockpuppet accounts on Wikipedia

                                                                                  Yeah, most of those accounts don’t exist. That’s a hit piece. I’m embarrassed by some of what I did on Wikipedia in 2003-6, but I never had 45 alternate accounts, though I did use so-called “role accounts” back when it was accepted.

                                                                                1. 5

                                                                                  Well, it depends. If they cache object files that do not need recompilation and thus only compile anything changed, then times should drop down considerably.

                                                                                  1. 4

                                                                                    That doesn’t help with first builds, and doesn’t help if you’re changing headers which are included by most files (such as changing whether debug mode is on or off, or changing log levels, or simply changing a core abstraction).

                                                                                    1. 4

                                                                                      First builds are a lost cause anyway. But after them, with proper binary caching I guess you can have faster loops. As for changing core abstractions, how often would that happen? How often will that header file change?

                                                                                      Also, do not forget that this is an organization with too much computer power and unlimited disk space. So keep those object files there for some time, have them tagged properly with regards to the changes that you mention and eventually the build will be fast.

                                                                                      Is such a system a lot of work? Definitely. But (a) they have lots of people to work on such a problem and (b) they devote actual time to it, instead of being like us, thinking about it for 5 minutes on a work break without much information about their internals.

                                                                                      1. 9

                                                                                        First builds are definitely not a lost cause. As I said, cross compiling Linux, a rather big project, takes just over 5 minutes.

                                                                                        Header files need to change annoyingly often; remember, in C++, even adding a private member requires changing the header file.

                                                                                        1. 6

                                                                                          Linux is an equivalently-complex project, but in a significantly simpler language (C). This might be more of a damning indictment of C++ than it is of Chrome.

                                                                                        2. 4

                                                                                          Languages such as Modula-3 and Go were designed to compile fast even on a first build. As far as metaprogramming goes, the industrial Lisps compile faster than C++. The D language compiles far faster. The C++ language was just poorly designed. Sacrificing reasonable compilation speed, on either first or later builds, had better buy some other worthwhile benefit. That’s not the case with C++.

                                                                                          Edit: An organization with piles of computer power and disk space could use the time wasted on compile overhead to do stuff such as static analysis or property-based testing with that time.

                                                                                    1. 4

                                                                                      I’m curious to see how POSIX-compliant the Windows terminal is; when I tried ssh with the Linux subsystem, the Windows terminal couldn’t display Mutt correctly, I think because it was struggling with curses’ heavy usage of escape sequences, but I’m not sure. I ended up having to use MobaXTerm, which is perhaps my least favorite piece of software I still end up using every day. Getting to switch to something lighter and with fewer attention-grabby bits all around my workspace would be excellent.

                                                                                      On that note, if anyone here knows any good, simple terminals for Windows (my dream is alacritty or st, but for Windows), I would love to hear about them!

                                                                                      1. 1

                                                                                        I’ve been using ConEmu for the last six months. While it’s not a POSIX-compliant terminal, I’ve found that it performs fairly well with most console programs. I haven’t tried Mutt, but Vim works just fine.

                                                                                      1. 32

                                                                                        I wasn’t implying. I was stating a fact.

                                                                                        And he’s wrong about that.

                                                                                        https://github.com/uutils/coreutils is a rewrite of a large chunk of coreutils in Rust. POSIX-compatible.

                                                                                        1. 12

                                                                                          So on OpenBSD amd64 (the only arch Rust runs on… there are at least 9 others, 6 of which Rust doesn’t even support!)… this fails to build:

                                                                                          error: aborting due to 19 previous errors
                                                                                          
                                                                                          error: Could not compile `nix`.
                                                                                          warning: build failed, waiting for other jobs to finish...
                                                                                          error: build failed
                                                                                          
                                                                                          1. 8

                                                                                            Yep. The nix crate only supports FreeBSD currently.

                                                                                            https://github.com/nix-rust/nix#supported-platforms

                                                                                          2. 8

                                                                                            The OpenBSD guys are stubborn of course, though they might have a point. tbh somebody could just fork a BSD OS to make this happen. rutsybsd or whatever you want to call it.

                                                                                            edit: just tried to build what you linked. Does cargo pin versions and verify the downloads? Fetching so many dependencies at build time makes me super nervous. Are all those dependencies BSD licensed? It didn’t even compile on my machine; maybe the nixos version of rust is too old. I don’t know if the rust ecosystem is stable enough to base an OS on yet without constantly fixing broken builds.

                                                                                            1. 10

                                                                                              just tried to build what you linked, does cargo pin versions and verify the downloads?

                                                                                              Cargo pins versions in Cargo.lock, and coreutils has one https://github.com/uutils/coreutils/blob/master/Cargo.lock.

                                                                                              Cargo checks download integrity against the registry.

                                                                                              For offline builds, you can vendor the dependencies: https://github.com/alexcrichton/cargo-vendor, downloading them all and working from them.
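
                                                                                              For a concrete picture of what vendoring looks like: as I understand it, `cargo vendor` copies every dependency into a local directory and prints a config snippet you drop into `.cargo/config` so builds stop hitting the network. A sketch of that snippet (directory name is illustrative):

                                                                                              ```toml
                                                                                              # Redirect crates.io lookups to the vendored copies
                                                                                              # ("vendor" is whatever directory you passed to cargo-vendor).
                                                                                              [source.crates-io]
                                                                                              replace-with = "vendored-sources"

                                                                                              [source.vendored-sources]
                                                                                              directory = "vendor"
                                                                                              ```

                                                                                              After that, the checked-in Cargo.lock plus the vendor/ tree gives you fully offline, reproducible builds.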

                                                                                              Are all those dependencies BSD licensed?

                                                                                              Yes. Using: https://github.com/onur/cargo-license

                                                                                              Apache-2.0/MIT (50): bit-set, bit-vec, bitflags, bitflags, block-buffer, byte-tools, cc, cfg-if, chrono, cmake, digest, either, fake-simd, filetime, fnv, getopts, glob, half, itertools, lazy_static, libc, md5, nodrop, num, num-integer, num-iter, num-traits, num_cpus, pkg-config, quick-error, rand, regex, regex-syntax, remove_dir_all, semver, semver-parser, sha2, sha3, tempdir, tempfile, thread_local, time, typenum, unicode-width, unindent, unix_socket, unreachable, vec_map, walker, xattr

                                                                                              BSD-3-Clause (3): fuchsia-zircon, fuchsia-zircon-sys, sha1

                                                                                              MIT (21): advapi32-sys, ansi_term, atty, clap, data-encoding, generic-array, kernel32-sys, nix, onig, onig_sys, pretty-bytes, redox_syscall, redox_termios, strsim, term_grid, termion, termsize, textwrap, void, winapi, winapi-build

                                                                                              MIT OR Apache-2.0 (2): hex, ioctl-sys

                                                                                              MIT/Unlicense (7): aho-corasick, byteorder, memchr, same-file, utf8-ranges, walkdir, walkdir

                                                                                              It didn’t even compile on my machine, maybe the nixos version of rust is too old - i don’t know if the rust ecosystem is stable enough to base an OS on yet without constantly fixing broken builds.

                                                                                              This is one of my frequent outstanding annoyances with Rust currently: I don’t have a problem with people using the newest version of the language as long as their software is not being shipped on something with constraints, but at least they should document and test the minimum version of rustc they use.

                                                                                              coreutils just checks against “stable”, which moves every 6 weeks: https://github.com/uutils/coreutils/blob/master/.travis.yml
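
                                                                                              If they wanted to pin a minimum, a hypothetical .travis.yml matrix like this would catch regressions against the oldest supported compiler as well as the moving stable channel (the 1.19.0 value is my assumption, not what the project declares):

                                                                                              ```yaml
                                                                                              # Hypothetical matrix: test the documented minimum rustc
                                                                                              # alongside "stable".
                                                                                              language: rust
                                                                                              rust:
                                                                                                - 1.19.0   # assumed minimum supported version
                                                                                                - stable
                                                                                              ```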

                                                                                              Can you give me rustc --version?

                                                                                              Still, “commitment to stability” is a function of adoption. If, say, Ubuntu start shipping a Rust version in an LTS release, more and more people will try to stay backward compatible to that.

                                                                                              1. 2

                                                                                                rustc 1.17.0 cargo 0.18.0

                                                                                                1. 11

                                                                                                  You’re probably hitting https://github.com/uutils/coreutils/issues/1064 then.

                                                                                                  Also, looking at it, it is indeed that they use combinator functionality that became available in Rust 1.19.0. std::cmp::Reverse can be easily dropped and replaced by other code if 1.17.0 support were needed.

                                                                                                  Thanks, I filed https://github.com/uutils/coreutils/issues/1100, asking for better docs.
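
                                                                                                  For anyone curious what dropping std::cmp::Reverse looks like, here’s a minimal sketch (not the actual coreutils code): the descending sort it enables can be expressed with a flipped comparator, which compiles on 1.17.0.

                                                                                                  ```rust
                                                                                                  // std::cmp::Reverse was stabilized in Rust 1.19.0; on older
                                                                                                  // compilers you can flip the comparator instead.
                                                                                                  fn main() {
                                                                                                      let mut v = vec![3, 1, 4, 1, 5];

                                                                                                      // 1.19+: v.sort_by_key(|&x| std::cmp::Reverse(x));
                                                                                                      // 1.17-compatible equivalent:
                                                                                                      v.sort_by(|a, b| b.cmp(a));

                                                                                                      assert_eq!(v, vec![5, 4, 3, 1, 1]);
                                                                                                  }
                                                                                                  ```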

                                                                                                  1. 1

                                                                                                    thanks for doing that, great community outreach :P

                                                                                              2. 5

                                                                                                Rust is “stable” in the sense that it is backwards compatible. However it is evolving rapidly so new crates or updates to crates may require the latest compiler. This won’t mean you’ll have to constantly fix broken builds; just that pulling in new crates may require you to update to the latest compiler.

                                                                                                1. 4

                                                                                                  Yes, Cargo writes a Cargo.lock file with versions and hashes. Application developers are encouraged to commit it into version control.

                                                                                                  Dependencies are mostly MIT/Apache in the Rust world. You can use cargo-license to quickly look at the licenses of all dependencies.

                                                                                                  Redox OS is fully based on Rust :)

                                                                                                2. 4

                                                                                                  Although you’re right to point out that project, one of Theo’s arguments had to do with compilation speeds:

                                                                                                  By the way, this is how long it takes to compile our grep:

                                                                                                  0m00.62s real 0m00.63s user 0m00.53s system

                                                                                                  … which is currently quite undoable for any Rust project, I believe. Cannot say if he’s exaggerating how important this is, though.

                                                                                                  1. 10

                                                                                                    Now, at least for GNU coreutils, ./configure alone takes a good chunk of the time the Rust coreutils needs to compile (2 minutes for a full release build vs. 1m20s just for configure). Also, the actual build is faster (GNU coreutils takes a minute).

                                                                                                    Sure, this is comparing apples and oranges a little. Different software, different development states, different support. The rust compiler uses 4 cores during all that (especially due to cargo running parallel builds), GNU coreutils doesn’t do that by default (-j4 only takes 17s). On the other hand: all the crates that cargo builds can be shared. That means, on a build farm, you have nice small pieces that you know you can cache - obviously just once per rustc/crate pairing.

                                                                                                    Also, obviously, build farms will pull all kinds of stunts to accelerate things and the Rust community still has to grow a lot of that tooling, but I don’t perceive the problem as fundamental.

                                                                                                    EDIT: heh, forgot --release. And that for me. Adjusted the wording and the times.

                                                                                                    1. 5

                                                                                                      OpenBSD doesn’t use GNU coreutils, either; they have their own implementation of the base utils in their tree (here’s the implementation of ls, for example). As I understand it, there’s lots of reasons they don’t use GNU coreutils, but complexity (of the code, the tooling, and the utils themselves) is near the top of the list.

                                                                                                      1. 6

                                                                                                        Probably because most (all?) of the OpenBSD versions of the coreutils existed before GNU did, let alone GNU coreutils. OpenBSD is a direct descendant of Berkeley’s BSD. Not to mention the licensing problem: GNU is all about the GPL; OpenBSD is all about the BSD (and its friends) license. Not that your reason isn’t also probably true.

                                                                                                      2. 2

                                                                                                        That means, on a build farm, you have nice small pieces that you know you can cache - obviously just once per rustc/crate pairing.

                                                                                                        FWIW sccache does this I think

                                                                                                      3. 7

                                                                                                        I think it would be fairer to look at how long it takes the average developer to knock out code-level safety issues plus compile times on a modern machine. I think Rust might be faster per module of code. From there, incremental builds and caching will help a lot. This is another strawman excuse, though, since the Wirth-like languages could’ve easily been modified to output C, input C, turn safety off when needed, and so on. They compile faster than C on about any CPU. They’re safe by default. The runtime code is acceptable, improving even further if you output C to leverage those compilers.

                                                                                                        Many defenses of not using safe languages are that easy to discount. And OpenBSD is special because someone will point out that porting a Wirth-like compiler is a bit of work. It’s not even a fraction of the work and expertise required for their C-based mitigations. Even those might have been easier to do in a less messy language. They’re motivated more by their culture and preferences than by any technical argument about a language.

                                                                                                        1. 3

                                                                                                          It’s a show stopper.

                                                                                                          Slow compile times are a massive problem for C++, honestly I would say it’s one of the biggest problems with the language, and rustc is 1-2 orders of magnitude slower still.

                                                                                                          1. 12

                                                                                                            It’s a show stopper.

                                                                                                            Hm, yet, last time I checked, C++ was relatively popular, Java (also not the fastest to compile) is doing fine, and scalac is still around. There are people working on alternatives, but a show stopper?

                                                                                                            Sure, it’s a huge annoyance for “build-the-world” approaches, but well…

                                                                                                            Slow compile times are a massive problem for C++, honestly I would say it’s one of the biggest problems with the language, and rustc is 1-2 orders of magnitude slower still.

                                                                                                            This heavily depends on the workload. rustc is quite fast when talking about rather non-generic code. The advantage of Rust over C++ is that coding in mostly non-generic Rust is a viable C alternative (and the language is built with that in mind), while a lot of C++ just isn’t very useful over C if you don’t rely on templates very much.

                                                                                                            Also, rustc stable is a little over 2 years old vs. C/C++ compilers had ample headstart there.

                                                                                                            I’m not saying the problem isn’t there, it has to be seen in context.

                                                                                                            1. 9

                                                                                                              C++ was relatively popular, Java (also not the fastest in compilation) is doing fine and scalac is still around.

                                                                                                              Indeed, outside of gamedev most people place zero value on fast iteration times. (which unfortunately also implies they place zero value on product quality)

                                                                                                              rustc is quite fast when talking about rather non-generic code.

                                                                                                              That’s not even remotely true.

                                                                                                              I don’t have specific benchmarks because I haven’t used rust for years, but see this post from 6 months ago that says it takes 15 seconds to build 8k lines of code. The sqlite amalgamated build is 200k lines of code and has to compile on a single core because it’s one compilation unit, and still only takes a few seconds. My C++ game engine is something like 80k if you include all the libraries and builds in like 4 seconds with almost no effort spent making it compile fast.

                                                                                                              edit: from your coreutils example above, rustc takes 2 minutes to build 43k LOC, gcc takes 17 seconds to build 270k, which makes rustc 44x slower…

                                                                                                              The last company I worked at had C++ builds that took many hours and to my knowledge that’s pretty standard. Even if you (very) conservatively say rustc is only 10x slower, they would be looking at compile times measured in days.

                                                                                                              while a lot of C++ just isn’t very useful over C if you don’t rely on templates very much.

                                                                                                              That’s also not true at all. Only small parts of a C++ codebase need templates, and you can easily make those templates simple enough that it has little to no effect on compile times.

                                                                                                              Also, rustc stable is a little over 2 years old vs. C/C++ compilers had ample headstart there.

                                                                                                              gcc has gotten slower over the years…

                                                                                                              1. 6

                                                                                                                Even if you (very) conservatively say rustc is only 10x slower,

                                                                                                                Rustc isn’t slower to compile than C++. Depends on the amount of generics you use, but the same argument goes for C++ and templates. Rust does lend itself to more usage of generics which leads to more compact but slower-compiling code, which does mean that your time-per-LOC is higher for Rust, but that’s not a very useful metric. Dividing LOCs is not going to get you a useful measure of how fast the compiler is. I say this as someone who has worked on both a huge Rust and a huge C++ codebase and know what the compile times are like. Perhaps slightly worse for Rust but not like a 2x+ factor.

                                                                                                                The main compilation-speed problem of Rust vs. C++ is that Rust compilations are harder to parallelize (large compilation units), which leads to bottleneck crates. Incremental compilation helps here, and codegen-units already works.
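
                                                                                                                As a concrete sketch of those two knobs (the values here are illustrative, not recommendations), both can be set per build profile in a crate’s Cargo.toml:

                                                                                                                ```toml
                                                                                                                # More codegen units let LLVM work on pieces of a single crate in
                                                                                                                # parallel, trading some runtime optimization for faster builds.
                                                                                                                # Incremental compilation caches per-crate artifacts between builds.
                                                                                                                [profile.release]
                                                                                                                codegen-units = 16
                                                                                                                incremental = true
                                                                                                                ```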

                                                                                                                Rust vs C is a whole other ball game though. The same ball game as C++ vs C.

                                                                                                                1. 2

                                                                                                                  That post, this post, my experience, lines, seconds… very scientific :) Hardware can be wildly different, lines of code can be wildly different (especially in the amount of generics used), and the number of lines necessary to do something can be a lot smaller in Rust, especially vs. plain C.

                                                                                                                  To add another unscientific comparison :) Servo release build from scratch on my machine (Ryzen 7 1700 @ 3.9GHz, SATA SSD) takes about 30 minutes. Firefox release build takes a bit more. Chromium… even more, closer to an hour. These are all different codebases, but they all implement a web browser, and the compile times are all in the same ballpark. So rustc is certainly not that much slower than clang++.

                                                                                                                  Only small parts of a C++ codebase need templates

                                                                                                                  Maybe you write templates rarely, but typical modern C++ uses them all over the place. As in, every STL container/smart pointer/algorithm/whatever is a template.

                                                                                                                  1. 2

                                                                                                                    To add another unscientific comparison :) Servo release build from scratch on my machine (Ryzen 7 1700 @ 3.9GHz, SATA SSD) takes about 30 minutes. Firefox release build takes a bit more. Chromium… even more, closer to an hour. These are all different codebases, but they all implement a web browser, and the compile times are all in the same ballpark. So rustc is certainly not that much slower than clang++.

                                                                                                                    • Firefox 35.9M lines of code
                                                                                                                    • Chromium 18.1M lines of code
                                                                                                                    • Servo 2.25M lines of code

                                                                                                                    You’re saying that compiling 2.25M lines of code for a not-yet-feature-complete browser in 30 minutes is comparable to compiling 18-35M lines of code in ‘a bit more’?

                                                                                                                    1. 4

                                                                                                                      Line counters like this one are entirely wrong.

                                                                                                                      This thing only counted https://github.com/servo/servo. Servo code is actually split among many many repositories.

                                                                                                                      HTML parser, CSS parser, URL parser, WebRender, animation, font sanitizer, IPC, sandbox, SpiderMonkey JS engine (C++), Firefox’s media playback (C++), Firefox’s canvas thingy with Skia (C++), HarfBuzz text shaping (C++) and more other stuff — all of this is included in the 30 minutes!

                                                                                                                      plus,

                                                                                                                      the amount of lines necessary to do something can be a lot smaller in Rust

                                                                                                                      1. 2

                                                                                                                        Agreed, it grossly underestimates how much code Chromium contains. Are you aware of the horrible depot_tools and the amount of stuff they pull in?

                                                                                                                        My point was that you are comparing a feature-incomplete browser whose codebase is smaller by at least an order of magnitude, yet takes 30 minutes, against the “closer to an hour” of Chromium. I think your argument doesn’t hold; you are free to provide data to prove me wrong.

                                                                                                                      2. 3

                                                                                                                        Servo’s not a monolithic codebase. Firefox is monolithic. It’s a bad comparison.

                                                                                                                        Chromium is also mostly monolithic IIRC.

                                                                                                              2. 2

                                                                                                                Free- and OpenBSD can compile their userland from source.

                                                                                                                So decent compile times are of the essence, especially if you are targeting multiple architectures.

                                                                                                              3. 6

                                                                                                                Well, ls is listed as only semi done, so he’s only semi wrong. :)

                                                                                                                1. 11

                                                                                                                  The magic words being “There has been no attempt”. With that, especially by saying “attempt”, he’s completely wrong. There have been attempts, at everything he lists. (He lists more here: https://www.youtube.com/watch?v=fYgG0ds2_UQ&feature=youtu.be&t=2112 Everything Theo mentions has been written in Rust; some items even have multiple projects, and very serious ones at that.)

                                                                                                                  For a more direct approach at BSD utils, there’s the redox core utils, which are BSD-util based. https://github.com/redox-os/coreutils

                                                                                                                  1. 2

                                                                                                                    Other magic words are “POSIX compatible”. Neither redox-os nor the uutils linked by @Manishearth seems to care particularly about this. I haven’t looked all that closely, but picking some random utils shows that none of them is fully compliant. It’s not even close, so surely they can’t be considered valid replacements for the C originals.

                                                                                                                    For example (assuming that I read the source code correctly) both implementations of cat lack the only POSIX-required option -u and the implementations of pwd lack both -L and -P. These are very simple tools and are considered done at least by uutils…
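
                                                                                                                    For concreteness, here is what those two POSIX-required pwd options do; a quick sketch using a throwaway symlink (paths are illustrative):

                                                                                                                    ```shell
                                                                                                                    # Create a real directory and a symlink pointing at it.
                                                                                                                    base=$(mktemp -d)
                                                                                                                    base=$(cd "$base" && pwd -P)   # normalize: resolve symlinks in the temp path itself
                                                                                                                    mkdir "$base/real"
                                                                                                                    ln -s "$base/real" "$base/link"

                                                                                                                    cd "$base/link"
                                                                                                                    pwd -L   # logical path: keeps the symlink, e.g. .../link
                                                                                                                    pwd -P   # physical path: symlinks resolved, e.g. .../real
                                                                                                                    ```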

                                                                                                                    So, Theo may be wrong by saying that no attempts have been made, but I believe a whole lot of rather hard work still needs to be done before he will acknowledge serious efforts.

                                                                                                                    1. 5

                                                                                                                      This rapidly will devolve into a no true scotsman argument.

                                                                                                                      https://github.com/uutils/coreutils#run-busybox-tests

                                                                                                                      uutils is running the busybox tests. Admittedly those test for something other than POSIX compliance, but neither the GNU nor the BSD coreutils are POSIX-compliant anyway.

                                                                                                                      uutils is based on the GNU coreutils and redox’s on the BSD ones, which is a step in the right direction and can certainly be counted as an attempt.

                                                                                                                      For example (assuming that I read the source code correctly) both implementations of cat lack the only POSIX-required option -u and the implementations of pwd lack both -L and -P.

                                                                                                                      Nobody said they were complete.

                                                                                                                      All we’re talking about is Theo’s rather strong point that “there has been no attempt”. There has.

                                                                                                                2. 1

                                                                                                                  I’m curious about this statement by TdR in the linked email:

                                                                                                                  For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.

                                                                                                                  Is this true?

                                                                                                                  1. 15

                                                                                                                    As always with these complaints, I can’t find any reference to the exact issues. What’s true is that LLVM uses quite a bit of memory to compile, and rustc builds tend not to be the smallest themselves. But not that big. Also, recent improvements have definitely helped here.

                                                                                                                    I regularly build the full chain on an Acer C720P running FreeBSD, which has a Celeron and 2 GB of RAM. I have to shut down the X server and everything beforehand, but it works.

                                                                                                                    As usual, this is probably an issue of the kind “please report actual problems, and we’ll work on fixing them”. “We want to provide a build environment for OpenBSD and X, Y, Z are missing” is something we’d be happy to support; some fuzzy notion of “this doesn’t fulfill our (somewhat fuzzy) criteria” isn’t actionable.

                                                                                                                    Rust for Haiku does ship Rust with i386 binaries and bootstrapping compilers (stage0): http://rust-on-haiku.com/downloads

                                                                                                                    1. 10

                                                                                                                      As always with these complaints, I can’t find any reference to exact issues.

                                                                                                                      Only because it’s a thread on the OpenBSD mailing lists, people reading that list have the full context of the recent issues with Firefox and Rust.

                                                                                                                      I’ll assume you just don’t follow the list, so here is the relevant thread: lang/rust: update to 1.22.1

                                                                                                                      • For this release, I had a lot of problems updating i386 to 1.22.1 (too much memory pressure when compiling 1.22 with the 1.21 version). So the bootstrap was initially regenerated by cross-compiling it from amd64, and then I regenerated a proper 1.22 bootstrap from i386. Building 1.22 with 1.22 seems to fit in memory.

                                                                                                                      As I do all this work with a dedicated host, it is possible that ENOMEM will come back in bulk.

                                                                                                                      And if the required memory still grows, rustc will be marked BROKEN on i386 (and firefox will not be available anymore on i386)

                                                                                                                      1. 7

                                                                                                                        Only because it’s a thread on the OpenBSD mailing lists, people reading that list have the full context of the recent issues with Firefox and Rust.

                                                                                                                        Sure, but has this:

                                                                                                                        And if the required memory still grows, rustc will be marked BROKEN on i386 (and firefox will not be available anymore on i386).

                                                                                                                        Reached the Rust maintainers? (thread on the internals mailing list, issue on rust-lang/rust?)

                                                                                                                        I’m happy to be corrected.

                                                                                                                        1. 7

                                                                                                                          Reached the Rust maintainers? (thread on the internals mailing list, issue on rust-lang/rust?)

                                                                                                                          I don’t know. I don’t follow rust development; however, the author of that email is a rust contributor, as I mentioned to you in the past, so I assume it’s known to people working on the project. Perhaps you should check on that internals mailing list; I checked rust-lang/rust on github but didn’t find anything relevant :)

                                                                                                                          1. 7

                                                                                                                            I checked IRLO (https://internals.rust-lang.org/) and also found nothing. (“Internals”, by the way, refers to the “compiler internals”; we have no closed mailing list.) The problem on projects of this scale seems to be that information travel is a huge issue, and that leads to aggravation. The reason I’m asking is not that I want to disprove you; I just want to ensure that I don’t open a discussion that’s already happening somewhere, just because something is currently going through social media.

                                                                                                                            Thanks for pointing that out, I will ensure there’s some discussion.

                                                                                                                            Reading the linked post, it seems mostly to be a regression in the jump from 1.21 to 1.22, so that’s probably a thing to keep an eye out for.

                                                                                                                          2. 2

                                                                                                                            Here’s a current Rust bug that makes life hard for people trying to work on newer platforms.

                                                                                                                      2. 2

                                                                                                                        I’m skeptical; this has certainly worked for me in the past.

                                                                                                                        I used 32 bit lab machines as a place to delegate builds to back when I was a student.

                                                                                                                        1. 4

                                                                                                                          Note that different operating systems will have different address space layout policies and limits. Your effective space can vary from possibly more than 3GB to possibly less than 2GB.

                                                                                                                    1. 1

                                                                                                                      Does anyone have a link to the commit or the initial bug report/fix?

                                                                                                                      1. 1

                                                                                                                        The bug report (which will probably have the patch, too) is still locked, and the details of the exploit aren’t public yet. My guess is that it’s related to this:

                                                                                                                        Devices with the Play Store, as well as AOpen Chromebase Commercial and AOpen Chromebox Commercial will be rolling out over the next few days.

                                                                                                                        They probably don’t want to release details until everything is patched.

                                                                                                                      1. 5

                                                                                                                        Another APL post that hinges on the “special keyboard”!?

                                                                                                                        Does it ever occur to him/her that all non-English speakers use “special keyboards”?

                                                                                                                        APL’s downfall is not because of its symbols; it is because some draconian company still charges thousands of dollars for a small interpreter. https://www-112.ibm.com/software/howtobuy/buyingtools/paexpress/Express?P0=E1&part_number=D50Z7LL&catalogLocale=en_US&Locale=en_US&country=USA&PT=jsp&CC=USA&VP=&TACTICS=&S_TACT=&S_CMP=&brand=SB07

                                                                                                                        1. 4

                                                                                                                          Tangential, but I believe the most vibrant APL community these days is around Dyalog APL rather than the old IBM one; it is still commercial, but not quite that exorbitantly priced (and, as of fairly recently, it is free for personal/noncommercial use).

                                                                                                                          1. 10

                                                                                                                            There’s also the smaller J community, which is open source.

                                                                                                                            1. 5

                                                                                                                              There’s a project, Co-dfns, that’s implementing a version of the Dyalog language with parallelism built-in, and it’s released under AGPL v3.

                                                                                                                              1. 2

                                                                                                                                The problem stems from, and is reinforced by, companies charging a fortune for an interpreter.

                                                                                                                                None of the implementations are fully compatible. They all include some extensions that supposedly make their own implementation better, which introduces fragmentation. If somebody’s code works on IBM’s interpreter, it will likely cause problems with other implementations, which lets IBM charge whatever amount of money it wants.

                                                                                                                                On the other hand, most of the APLers moved on to FORTRAN in the 80’s, for the relatively free compiler and the performance of the compiled machine code from FORTRAN, and the HPC numerical computing communities never looked back to APL.

                                                                                                                                1. 2

                                                                                                                                  If you want a historical or “real” APL, there is MVT4APL, a distribution of OS/360-MVT 21.8F customized for use with APL\360 Version 1 Modification Level 1: “real” IBM mainframe APL on Windows or Linux.

                                                                                                                              2. 2

                                                                                                                                A+, openAPL and NARS2000 have been available since the 1980s and many other free implementations exist as well.

                                                                                                                                1. 2

                                                                                                                                  Well, there is GNU APL, and it has existed for some time.

                                                                                                                                1. 1

                                                                                                                                  The one question I have about any cross-platform GUI library is whether it’s using the native platform APIs or doing its own rendering, usually by means of OpenGL (aka “immediate GUI”). I didn’t find an answer on the website. So far, I only know of exactly one real cross-platform C++ GUI library that uses the native APIs, which is wxWidgets. Most libraries I have encountered promise cross-platform and then use the “immediate GUI” approach, which will never look really native and is a battery killer in my experience.

                                                                                                                                  1. 2

                                                                                                                                    Looking at the source for the graphics class and the button class, looks like they’re just drawing their own widgets via Windows APIs that I’m not familiar with or Xlib (the X11 drawing library). Xlib is an interesting choice; most GUI libraries I’ve seen go for OpenGL, which gives you a lot more control, and isn’t slated to be replaced by Wayland. To your point, it might be friendlier on battery than an OpenGL implementation, since it’s not doing any drawing in immediate mode (and instead looks to be repainting into X11 buffers when needed).

                                                                                                                                    Personally, I’ve given up trying to make the GUIs I work on feel native, especially since the GUIs I’m writing are usually for internal tools or for my own consumption. I default to using Dear ImGui, which makes GUIs that are very utilitarian and look nothing like native ones, and is super super nice to use as a programmer.

                                                                                                                                    1. 2

                                                                                                                                      My solution is to just write a native UI for each platform you care about. For me, this basically means I’ll just use Windows, and just keep a rudimentary GTK# frontend working, primarily so logic can be split out from views when reasonable.

                                                                                                                                      I’m a big stickler for things feeling like they belong to the platform. Otherwise, why not write a web app instead?

                                                                                                                                      1. 1

                                                                                                                                        I think it depends on the target audience; usually, I’m writing a GUI for a piece of C++ code that controls a robot or a geometry system or something, and the intended users are either me or the employees of a company I’m consulting for. Writing a web frontend for some chunk of native code is irritating – I either have to jam a web server into a C++ binary, or create wrappers for some scripting language – and using a native toolkit isn’t that important to me, especially because the applications I’m writing usually have some weird widgets in them anyway (2D sliders, or line graphs, or something).

                                                                                                                                        In general, I think that writing a very simple UI can really aid in debugging complex control systems, and I just pick up the tool that makes it as easy as possible to do so. In my experience, that’s an immediate GUI, without callbacks or a dedicated event loop or anything.

                                                                                                                                  1. 4

                                                                                                                                    Ordinarily not a big fan of tire fire of the week stories, but something about this story struck me. Amazing how a series of little screwups quickly turns colossal: asking for private info over email, setting up reply all, users who actually responded…

                                                                                                                                    1. 7

                                                                                                                                      Not a shock that Essential’s first screw up is a security one when your Head of Security is a dog named Cosmo https://www.essential.com/about#team

                                                                                                                                      Rubin seems to be continuing his Android legacy of abysmal security.

                                                                                                                                      1. 3

                                                                                                                                        Oh dear, they’re going to have a ruff time.

                                                                                                                                        1. 3

                                                                                                                                          I think Cosmo may be in charge of physical security.

                                                                                                                                          (which means no one is in charge of device security)

                                                                                                                                          1. 1

                                                                                                                                            Part of the joy of networked devices is I can physically control them from a remote location. Remotely controlled physical switches are great until you have to travel to actually fix them :~)