Threads for robgssp

  1. 5

    I think this is more wishful thinking than a solid plan for the future, as there is nothing concrete given on how to achieve long-term success. Also, looking at how our interaction with technology evolves, wouldn’t the language of the future look more like a natural language than something full of weird symbols (=<<, $, …)? I agree that Haskell brings a lot of great stuff, but the tooling is far, far away from perfect, and with the number of extensions it resembles a natural language in the sense that there are so many different dialects, which might hamper understanding.

    1. 9

      Why would natural language be selected for? Plenty of older languages were more wordy (Pascal, Cobol, SML to an extent) and they’ve been mostly replaced with more symbol-heavy languages today.

      1. 1

        Given how our interactions with computers change, that is my interpretation. I might be wrong, and time will tell :)

        I don’t think current languages are symbol heavy, at least not as much as Haskell is. There are some special languages like APL or the Perl family, but most of the mainstream languages are pretty sane in that regard (Python, Java, Go, …).

    1. 3

      So, I think that a lot of the contention here is coming from a confusion between tasks that need to be done, and complexity that arises from completing a task in a particular place. If you don’t complete a task in one layer of software, it gets pushed either to a different layer, or to the user. Solving the problem at different levels is not usually equally complex, though. It can appear to be, sometimes, but let me tell you a story that shows how big of a difference solving a task in the right place can make.

      At a previous job, we were working on a special-purpose accelerator with statically allocated, compiler-managed caches. What that means is that the program’s memory access pattern needs to be deduced by the compiler ahead of time, so that the cache can be filled with just the right amount of data. If you over-size, you run out of cache and the compile fails, and if you under-size the program crashes from an unfulfilled memory access. So, it’s important to get right. It’s also really hard. In general it’s impossible (equivalent to the halting problem, etc), and even for our very-restricted set of array processing programs, it was a continual source of problems. It turns out that operations like, say, rotating an image by an arbitrary angle, are pretty useful. It also turns out that translating that rotation into linear array scans to fill the cache requires quite a bit of cleverness. We did get it working in the end, but it was a whole lot of special-purpose algorithm awareness living in the compiler. The compiler definitely could not be described as general-purpose.

      Now, this toolchain could have perhaps been designed differently, to expose the cache allocation to the user. That way they would be the ones having to describe their data access pattern to the hardware. The fact remains, though, that so long as the caches were explicitly program-filled, something, be it the user or the compiler, had to have a detailed understanding of the program’s data access pattern in order to lay out a cache-filling and execution sequence that satisfies it.

      Contrast this with the usual solution, hardware-managed caches. The cache-filling algorithm of, say, your computer’s L2 cache, doesn’t need a detailed understanding of your program’s execution path. Neither does the CPU, the compiler, or, frequently, the programmer! So where did all that complexity go? Remember, our special-purpose compiler needed all that code and complexity to determine a program’s data access pattern because that’s what the hardware demanded. So in a general-purpose computer, are the caches way more complicated to accommodate the simplicity of programming on top of them? Well, no, not really. An LRU cache replacement algorithm is dead-simple to write. Obviously modern hardware caches are a bit more complicated than that, but they’re still not leaps-and-bounds more complicated than our accelerator’s caches-plus-fill-pattern-generator.
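
      (To make “dead-simple” concrete: a toy software sketch of an LRU replacement policy, obviously nothing like the real hardware implementation, might look like the following. Nothing in it needs to know the program’s access pattern ahead of time.)

      #include <cstdint>
      #include <list>
      #include <unordered_map>

      // Toy LRU replacement policy: a fixed number of cache lines, evict the
      // least-recently-used line on a miss. No knowledge of the program's
      // future access pattern is required.
      struct LruCache {
          explicit LruCache(std::size_t lines) : capacity(lines) {}

          // Returns true on a hit; on a miss, fills the line, evicting if full.
          bool access(std::uint64_t tag) {
              auto it = index.find(tag);
              if (it != index.end()) {
                  order.splice(order.begin(), order, it->second); // hit: mark most recent
                  return true;
              }
              if (index.size() == capacity) {                     // miss: evict the LRU victim
                  index.erase(order.back());
                  order.pop_back();
              }
              order.push_front(tag);
              index[tag] = order.begin();
              return false;
          }

          std::size_t capacity;
          std::list<std::uint64_t> order;  // front = most recently used
          std::unordered_map<std::uint64_t, std::list<std::uint64_t>::iterator> index;
      };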

      So in our special-purpose accelerator, we were doing the same tasks as a normal CPU would: we picked which data were to reside in the cache and which to evict. The difference in complexity, though, was between a “normal” cache which is basically invisible to the programmer, modulo performance effects; and ours, which was a constant limitation throughout the software stack, including to our end users. That complexity, in a normal system, doesn’t reside further up in the software stack, or in user behavior, or any of that: it simply doesn’t exist. The task is fulfilled more simply.

      So why did we make all this extra work for ourselves? Because it made our accelerator faster, of course! You could argue that without that, we would have had to optimize somewhere else, which would have its own attendant complexity. This is true. However, the attendant complexity of, say, a vector math offload accelerator that does not try to gain a deep understanding of the whole program, is still leaps and bounds less complex than our system, while achieving similar goals.

      The project was eventually cancelled, largely because our users so hated dealing with our toolchain’s issues that they went back to running on the CPU. Many of those issues did involve cache allocation, in conjunction with some new user code that accessed memory in some slightly different pattern than we’d seen before. I think if you described their lives as simple, thanks to all the complexity we hid away in the compiler, they’d have some choice words for you!

      1. 3

        …the nodes along the fiber network were so flooded, they could not be reached by their administrators to troubleshoot the issue…

        Does CenturyLink not reserve some bandwidth or have a dedicated network to access their servers, just in case normal bandwidth is saturated for some reason? I’m not a network engineer or system administrator, so correct me if I’m wrong, but this seems like something that would be fairly easy to do.

        1. 8

          The storm was over their own management link, so I think this was their dedicated control network. As to why malformed packets would ever be forwarded onto their management network as they apparently describe, that’s a mystery to me.

          1. 2

            It would seem that the whole issue is that these packets were not “malformed”, hence, they continued to be forwarded around.

            How on earth they were not malformed is a better question. In IPv4 and IPv6, the byte-sized time-to-live (TTL) field (called the hop limit in IPv6) is a very standard feature precisely for this reason, and it’ll ensure that these loops aren’t really possible, or, at worst, are self-constrained. Why they are using a protocol without such basic reliability features is, perhaps, the root question I’d have here.
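
            (For anyone who hasn’t seen it, the whole mechanism amounts to roughly this per-hop check - a hypothetical sketch, not any real router’s code - so even a routing loop kills a packet after at most 255 hops:)

            #include <cstdint>

            struct Packet {
                std::uint8_t ttl;  // IPv4 TTL / IPv6 hop limit: one byte, decremented per hop
                // headers and payload elided
            };

            // Hypothetical per-hop check: a looping packet is dropped after at
            // most 255 hops instead of circulating forever.
            bool forward(Packet& p) {
                if (p.ttl <= 1)
                    return false;  // drop (real IP also sends ICMP "time exceeded")
                --p.ttl;
                // look up the next hop and transmit...
                return true;
            }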

            1. 2

              I was interested in this part too, so I checked the FCC report (linked in the article). Three interesting things I found there - this is a proprietary protocol created by the vendor of the affected nodes, the vendor wasn’t able to explain how the packets were generated and they don’t know which node generated them, and the report says ‘At the time of the outage, the affected network used nodes supplied by Infinera Intelligent Transport Networks (Infinera)’. I don’t want to read too much into that but it sounds like they might not be using that protocol any more.

              1. 2

                I’d be interested in what, if any, benefits this proprietary protocol had over what is commonly available but there is unlikely to be any detailed write up on that.

            2. 1

              Good catch, I didn’t read the article carefully enough.

          1. 0

            the premise of this article is nonsense

            “echo” doesnt expect and doesnt accept standard input

            i get that the point is to test race conditions or some such, but any kind of conclusion they are trying to make is moot as far as im concerned if they cant come up with even one realistic example.

            1. 6

              This comic about the mantis shrimp has absolutely no practical impact on my life. But I still find it incredibly interesting, and I’d love to see one in real life (behind a thick pane of glass).

              I didn’t think the point was to test race conditions. I thought the point was that it was interesting and unusual behavior that I wouldn’t expect and that made me think. Can’t that be enough?

              (Edit: just want to point out that since I originally viewed this article the author added some theories. But I still stand by my point, honestly.)

              1. 5

                So here’s another nonsense thing.

                fd = open("file", O_RDWR);
                ptr = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
                write(fd, ptr, 4096);
                

                But why would you do that? It’s dumb. It’s not realistic. It doesn’t do anything. It’s ridiculous.

                It also has a tendency to deadlock certain systems.

                1. 2

                  That’s interesting, too, then. What does it deadlock, just the user mode process? And which systems? Maybe it could have implications for sandboxed environments like a browser’s javascript engine? I wouldn’t immediately dismiss curious trivia like this offhand.

                2. 4

                  I think the point isn’t to raise a practical issue so much as to ask “what are the semantics of bash/unix pipes?” From that perspective the example doesn’t need to be realistic, and trying to flesh out their minimal example into something more “real” would probably just obscure the actual question. See cks’s answer: there’s some legitimately important nuance as to why this example behaves as it does.

                  1. 1

                    It points out that other than “try it and see” you can’t necessarily know what will happen in Shell.

                    1. 1

                      It doesn’t say echo reads from stdin, it says it expects its stdout to be read from.

                      1. 0

                        the first example given is

                        (echo red; echo green 1>&2) | echo blue
                        

                        this is ridiculous because you would never use syntax like this ever. firstly you would use “{}” not “()” to avoid the subshell - second you dont pipe to “echo” because thats pointless. it doesnt do anything, as “echo” doesnt accept standard input.

                        1. 5

                          I disagree, I think it’s interesting despite “|echo” not being a useful construct in itself. It works well as a mechanism for highlighting an interesting interaction with pipes, blocking IO, buffers, signals and forked child processes.

                    1. 2

                      for all the diligence required to solve this sort of problem, you’d think that would start pushing programming more towards, ya know, engineering as a way of building things, but at least it’s a cool story!

                      1. 2

                        As it was a hardware issue in this case, I’m not sure I understand what you’re saying. Do you mean that if, say, the software was verified and guaranteed not to crash, they would have immediately diagnosed the crash as a hardware issue, thus saving a lot of time?

                        1. 3

                          In general, you can do that sort of thing in Design-by-Contract with a certified and non-certified compiler. In debug mode, the contracts can all become runtime checks showing you exactly which module fed bad input into the system. That lets you probe around to localize exactly which bad box took input that accepted its preconditions, did something with it, and produced output that broke the system. When looking at that module, it will normally be a software bug. However, you can run all failing tests through both a certified and regular binary to see if the failure disappears in one. If it does, it’s probably a compiler error. Similar check running sequential vs concurrent in case it’s a concurrency bug. Similarly, if the logic makes sense, it’s not concurrency, and passes on certified compiler, it’s probably an error involving something else reaching into your memory or CPU to corrupt it. That’s going to be either a hardware fault or a privileged component in software. With some R&D, I think we could develop for those components techniques for doing something similar to DbC in software for quickly isolating hardware faults.
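
                          To make that concrete, a stripped-down sketch of what I mean by debug-mode contracts (hypothetical names, nothing from a real project):

                          #include <cassert>
                          #include <vector>

                          // Debug builds turn the contracts into runtime checks that name which
                          // side of the interface misbehaved; NDEBUG builds compile them away.
                          #define REQUIRE(cond) assert((cond) && "precondition violated by caller")
                          #define ENSURE(cond)  assert((cond) && "postcondition violated by this module")

                          // Contract: samples are non-empty and already scaled into [0, 1];
                          // the result is guaranteed to stay in the same range.
                          double average(const std::vector<double>& samples) {
                              REQUIRE(!samples.empty());
                              for (double s : samples) REQUIRE(s >= 0.0 && s <= 1.0);

                              double sum = 0.0;
                              for (double s : samples) sum += s;
                              double result = sum / samples.size();

                              ENSURE(result >= 0.0 && result <= 1.0);
                              return result;
                          }

                          A failing REQUIRE points at the caller, a failing ENSURE at the module itself, which is what lets you localize the bad box quickly.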

                          That said, I don’t think it was applicable in this specific case. They’d have not seen that problem coming unless they were highly-experienced embedded engineers. I’ve read articles from them where they look into things like effects of temperature, the bootloader starting before PLLs sync up, etc. Although I can’t find the link, this isn’t the first time sunlight has killed off a box or piece of software. I’ve definitely seen this before. I think a friend might have experienced it with a laptop, too. We might add to best practices for hardware/software development to make sure the box isn’t in sunlight or another situation that can throw off its operating temperature. I mean, PC builders have always watched that a bit, but maybe the developers on new hardware should ensure it’s true by default. The hardware builders should also test the effects of direct sunlight or other heat to make sure the boxes don’t crash. Some do already.

                          1. 3

                            However, you can run all failing tests through both a certified and regular binary to see if the failure disappears in one. If it does, it’s probably a compiler error.

                            I don’t think that’s true, at least in C. I know CompCert at least takes “certified” to mean “guarantees well-defined output for well-defined input”, so it’s free to make a hash of any UB-containing code, the same as Clang.

                            That said, if your test results change between any two C compilers, it’s a strong suggestion you have a UB issue.
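
                            The classic sort of example (a sketch; there are many variations): signed overflow is UB, so one compiler may fold the comparison to true while another wraps and returns false, and your test results silently differ.

                            #include <climits>
                            #include <cstdio>

                            // Signed overflow is undefined behaviour, so an optimizer may assume
                            // x + 1 > x always holds and return true unconditionally, while a more
                            // literal translation wraps INT_MAX + 1 to INT_MIN and returns false.
                            bool still_bigger(int x) {
                                return x + 1 > x;
                            }

                            int main() {
                                // This "test result" can legitimately differ between two conforming
                                // compilers, or even between -O0 and -O2 with the same compiler.
                                std::printf("%d\n", still_bigger(INT_MAX));
                            }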

                            1. 2

                              it’s a strong suggestion you have a UB issue.

                              True, too. There’s teams out there that test with multiple compilers to catch stuff like that. OpenBSD folks mentioned it before as a side benefit of cross-platform support.

                              1. 2

                                That said, if your test results change between any two C compilers, it’s a strong suggestion you have a UB issue.

                                In C, this can also mean that you depend on implementation-defined behaviour or unspecified behaviour, which are not the same as undefined behaviour (which will often also be a bad thing ;)).
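
                                Rough examples of those other two categories, as I understand the standard’s terms (a sketch only):

                                #include <cstdio>

                                int counter = 0;
                                int bump() { return ++counter; }

                                int main() {
                                    // Implementation-defined: each implementation documents its choice,
                                    // e.g. long is 4 bytes on one ABI and 8 on another.
                                    std::printf("sizeof(long) = %zu\n", sizeof(long));

                                    // Unspecified: the two calls may be evaluated in either order, so
                                    // "1 2" and "2 1" are both conforming outputs.
                                    std::printf("%d %d\n", bump(), bump());
                                }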

                              2. 2

                                Are you proposing gamedev companies should adopt those kinds of techniques?

                                1. 3

                                  Im always pushing all programmers to adopt anything that helps them that they can fit in their constraints. Aside from Design-by-Contract, I’d hold off on recommending game developers do what I described until the tooling and workflow are ready for easy adoption. Ive got probably a dozen high-level designs for it turning around in my head trying to find the simplest route.

                                  One thing Im sure about is using C++ is a drawback since it’s so hard to analyze. Just about everything I do runs into walls of complexity if the code starts in C++. Still working on ideas for that like C++-to-C compilers or converting it into an equivalent, easier-to-analyze language that compiles to C (eg ZL in Scheme). So, I recommend avoiding anything as complex as C++ if one wants benefits from future work analyzing either C or intermediate languages.

                                  Edit: Here was a case study that found DbC fit game dev requirements.

                                  1. 2

                                    The other day there was a link to a project that does source-to-source transformation on C++ code to reduce the level of indirection: https://cppinsights.io

                                    1. 2

                                      Try doing an exhaustive list of FOSS apps for C and C++ doing this stuff. You’ll find there’s several times more for C, which also collectively get more done. There are also several certifying compilers for C subsets, while formal semantics for C++ are still barely there despite it being around for a long time.

                                      So, that’s neat, but it’s one anecdote going against a general trend.

                                    2. 1

                                      Im always pushing all programmers to adopt anything that helps them that they can fit in their constraints.

                                      This is meaningless - is there someone who doesn’t?

                                      Aside from Design-by-Contract, I’d hold off on recommending game developers do what I described until the tooling and workflow are ready for easy adoption.

                                      So the only thing you propose would not help at all with the problem of the overheating console or with the performance regression from that post ;)

                                      One thing Im sure about is using C++ is a drawback since it’s so hard to analyze.

                                      Sure, it’s hard, but there are tools that can do some sort of static analysis for it (for example Coverity or Klocwork). Either way, there are no alternatives to C++ today as a language for an engine that can be used for AAA games.

                                      Here was a case study that found DbC fit game dev requirements.

                                      Have you actually read it? I have nothing against DbC, but as far as I can see that study doesn’t really show any great benefits of DbC nor is it realistic. They do show that writing assertions for pre/post conditions and class invariants helps in diagnosing bugs (which is obvious), but not much more.

                                      They don’t show that really hard bugs are clearly easier to diagnose and fix with DbC, nor do they show that cost/benefit ratio is favourable for DbC.

                                      Finally, that paper fails to describe in detail how the experiment was conducted - all I could gather is this:

                                      Implementation took approximately 45 days full-time and led to source code consisting of 400 files.

                                      code was predominantly implemented by one person

                                      Even if it were an interesting paper (which imo it is not), it’s impossible to try to replicate it independently.

                                      1. 1

                                        “This is meaningless - is there someone who doesn’t?”

                                        Yes, most developers don’t if you look at the QA of both proprietary and FOSS codebases. I mean, every study done on formal specifications said developers found them helpful. Do you and most developers you see use them? Cuz I swore Ive been fighting an uphill battle for years even getting adoption of consistent interface checks and code inspection for common problems.

                                        “but as far as I can see that study doesn’t really show any great benefits of DbC nor is it realistic. They do show that writing assertions for pre/post conditions and class invariants helps in diagnosing bugs (which is obvious)”

                                        It’s so “obvious” most developers aren’t doing DbC. What it says is that DbC fits the requirements of game developers. Most formal methods don’t. It also helped find errors quickly. It’s not mere assertions in the common way they’re used: it can range from simple Booleans to more complex properties. One embedded project used a whole Prolog. Shen does something similar to model arbitrary type systems so you can check what you want to. Finally, you can generate tests directly from contracts, plus runtime checks to combine with fuzzers, taking one right to the failures. Is that standard practice among C developers like it has been in Eiffel for quite a while? Again, you must be working with some unusually-QA-focused developers.

                                        “would not help at all with the problem of the overheating console”

                                        First part of my comment was about general case. Second paragraph said exactly what you just asked me. Did you read it?

                                        “Even it was an interesting paper (which imo it is not) it’s impossible to try and replicate it independently.”

                                        The fact that it used evidence at all would put it ahead of many programming resources that are more like opinion pieces. Good news is you don’t have to replicate it: you can create a better study that tries the same method against the same criteria. Then, replicate that. If that’s the kind of thing you want to do.

                                        1. 1

                                          I mean, every study done on formal specifications said developers found them helpful. Do you and most developers you see use them?

                                          Academic studies show one thing, while practitioners for unknown reasons do not adopt practices recommended by academics. Maybe the studies are somehow flawed. Maybe the gains recommended in the studies don’t have a good ROI for most of the gamedev industry?

                                          What it says is that DbC fits the requirements of game developers. Most formal methods don’t. It also helped find errors quickly. It’s not mere assertions in the common way they’re used

                                          The study was using “mere assertions in the common way they’re used”, so I don’t know what the point of the rest of that paragraph is - the techniques you mention there are not even mentioned in that paper, so there is no proof of their applicability to gamedev.

                                          First part of my comment was about general case. Second paragraph said exactly what you just asked me.

                                          I asked you about your recommendations for gamedevs not about some unconstrained general case and just pointed out that your particular recommendation would not help in case from the story - nothing more :)

                                          Did you read it?

                                          Sure I have, but to make it easier in future try to make your posts more succinct ;)

                                          1. 1

                                            Academic studies show one thing, while practitioners for unknown reasons do not adopt practices recommended by academics.

                                            I agree in the general case. Except that most practitioners trying some of these methods get good results. Then, most other practitioners ignore them. Like CompSci has its irrelevant stuff, the practitioners have their own cultures of chasing some things that work and some that don’t. However, DbC was deployed to industrial practice via Eiffel and EiffelStudio. The assertions that are a subset of it have obvious value. SPARK used them for proving absence of errors. Now, Ada 2012 programmers are using it as I described with contract-based testing. Lots of people are also praising property-based testing and fuzzing based on real bugs they’re finding.

                                            So, this isn’t just a CompSci recommendation or study: it’s a combo of techniques that each have lots of industrial evidence and supporters, that work well together, and that have low cost. With that, either mainstream programmers don’t know about them or they’re ignoring effective techniques despite evidence. The latter is all too common.

                                            “I asked you about your recommendations for gamedevs not about some unconstrained general case”

                                            What helps programming in general often helps them, too. That’s true in this case.

                                            “Sure I have, but to make it easier in future try to make your posts more succinct ;)”

                                            Oh another one of you… Haha.

                                            1. 2

                                              Lots of people are also praising property-based testing and fuzzing based on real bugs they’re finding.

                                              Those techniques are not what I would call “formal specifications” any more than the simplest unit tests are, but if you consider them as such, then…

                                              either mainstream programmers don’t know about them or they’re ignoring effective techniques despite evidence. The latter is all too common.

                                              … I have no studies to back this up, but my experience is different. DbC (as shown in that study you linked), property based and fuzzing based testing are techniques that are used by the working programmers. Not for all the code, and not all the time but they are used.

                                              When I wrote about studies showing one thing and real life showing something opposite I was thinking about methods like Promela, Spin or Coq.

                                              1. 1

                                                Makes more sense. I see where you’re going with Coq but Spin/Promela have lots of industrial success. Similar to TLA+ usage now. Found protocol or hardware errors easily that other methods missed. Check it out.

                            1. 7

                              [C is] still the fastest compiled language.

                              Well hold on there. This claim might be true when you’re working with small critical sections or microbenchmarks, but when you have a significant amount of code all of which is performance-sensitive, my experience is that’s way off-base. I work with image pipelines these days, which are characterized by a long sequence of relatively-simple array operations. In the good ol’ days this was all written in straight C, and if people wanted stages to be parallelized or vectorized or grouped or GPU offloaded or [next year’s new hotness] they did it themselves. The result, as you might guess, is a bunch of hairy bug-prone code that’s difficult to adapt to algorithm changes.

                              Nowadays there’s better options, such as Halide. In Halide you write out the computation stages equationally, and separately annotate how you want things parallelized/GPU offloaded/etc. The result is your code is high-level but you retain control over its implementation. And unlike in C, exploring different avenues to performance is pretty easy, and oh by the way, schedule annotations don’t change algorithm behavior so you know the optimizations you’re applying are safe.
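
                              Roughly what that looks like (going from memory, so treat the exact Halide API details as approximate): the first block is the algorithm, and the one schedule line below it is the only thing you touch when you want it parallelized, vectorized, tiled, or moved to a GPU.

                              #include "Halide.h"
                              using namespace Halide;

                              int main() {
                                  Buffer<uint8_t> input(1920, 1080);
                                  Var x("x"), y("y");

                                  // Algorithm: a 3x1 box blur, written equationally.
                                  Func clamped = BoundaryConditions::repeat_edge(input);
                                  Func blur("blur");
                                  blur(x, y) = cast<uint8_t>((cast<uint16_t>(clamped(x - 1, y)) +
                                                              cast<uint16_t>(clamped(x, y)) +
                                                              cast<uint16_t>(clamped(x + 1, y))) / 3);

                                  // Schedule: how to run it. Changing this line cannot change what the
                                  // algorithm computes, only how fast it computes it.
                                  blur.parallel(y).vectorize(x, 16);

                                  Buffer<uint8_t> out = blur.realize({1920, 1080});
                              }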

                              Check out the paper for demonstrations of Halide’s effectiveness: using an automatic schedule search, they manage to beat a collection of hand-optimized kernels for real-world tasks, in a fraction of the code size. Also from personal experience, our project might not have even gotten off the ground without Halide, but with it we’ve been able to port a large camera pipeline with relative ease.

                              My point is providing primitives only nets you performance until you hit the cognitive ceiling of your programmers. All those high-performance schedules could’ve been coded up in C and Cuda, but they weren’t because at some point all the parts got too complicated for the experts to hand-code optimally. And if no one can manage to actually get high performance out of your language/system, is it really high performance? So I think in order for a language to stay high-performance in the presence of complex programs, it needs to be high-level. C and intrinsics just aren’t good enough.

                              Edit: for a great example, check out the snippets on the second page here.

                              1. 26

                                Something clearly got this author’s goat; this rebuttal feels less like a reasoned response and more like someone yelling “NO U” into Wordpress.

                                Out of order execution is used to hide the latency from talking to other components of the system that aren’t the current CPU, not “to make C run faster”.

                                Also, attacking academics as people living in ivory towers is an obvious ad hominem. It doesn’t serve any purpose in this article and, if anything, weakens it. Tremendous amounts of practical CS come from academia and professional researchers. That doesn’t mean it should be thrown out.

                                1. 10

                                  So, in context, the bit you quote is:

                                  The author criticizes things like “out-of-order” execution which has lead to the Spectre sidechannel vulnerabilities. Out-of-order execution is necessary to make C run faster.

                                  The author was completely correct here, and substituting in JS/C++/D/Rust/Fortran/Ada would’ve still resulted in a correct statement.

                                  The academic software preference (assuming that such a thing exists) is clearly for parallelism, for “dumb” chips (because computer science and PLT is cooler than computer/electrical engineering, one supposes), for “smart” compilers and PL tricks, and against “dumb” languages like C. That appears to be the assertion the author here would make, and I don’t think it’s particularly wrong.

                                  Here’s the thing though: none of that has been borne out in mainstream usage. In fact, the big failure the author mentioned here (the Sparc Tx line) was not alone! The other big offender of this you may have heard of is the Itanic, from the folks at Intel. A similar example of the philosophy not really getting traction is the (very neat and clever) Parallax Propeller line. Or the relative failure of the Adapteva Parallella boards and their Epiphany processors.

                                  For completeness’ sake, the only chips with massive core counts and simple execution models are GPUs, and those are only really showing their talent in number crunching and hashing–and even then, for the last decade, somehow limping along with C variants!

                                  1. 2

                                    One problem with the original article was that it located the requirement for ILP in the imagined defects of the C language. That’s just false.

                                    Weird how nobody seems to remember the Terra.

                                    1. 3

                                      In order to remember you would have to have learned about it first. My experience is that no one who isn’t studying computer architecture or compilers in graduate school will be exposed to more exotic architectures. For most technology professionals, working on anything other than x86 is way out of the mainstream. We can thank the iPhone for at least making “normal” software people aware of ARM.

                                      1. 4

                                        I am so old that I remember reading about the Tomasulo algorithm in Computer Architecture class and wondering why anyone would need that on a modern computer with a fast cache - like a VAX.

                                      2. 1

                                        For those of us who don’t, what’s Terra?

                                        1. 2

                                          Of course, I spelled it wrong.

                                          https://en.wikipedia.org/wiki/Cray_MTA

                                    2. 9

                                      The purpose of out of order execution is to increase instruction-level parallelism (ILP). And while it’s frequently the case that covering the latency of off chip access is one way out of order execution helps, the other (more common) reason is that non-dependent instructions that use independent ALUs can issue immediately and retire in whatever order instead of stalling the whole pipeline to maintain instruction ordering. When you mix this with good branch prediction and complex fetch and issue logic, then you get, in effect, unrolled, parallelized loops with vanilla C code.

                                      Whether it’s fair to say the reasoning was “to make C run faster” is certainly debatable, but the first mainstream out of order processor was the Pentium Pro (1996). Back then, the vast majority of software was written in C, and Intel was hellbent on making each generation of Pentium run single-threaded code faster until they hit the inevitable power wall at the end of the NetBurst life. We only saw the proliferation of highly parallel programming languages and libraries in the mainstream consciousness after that, when multicores became the norm to keep the marketing materials full of speed gainz despite the roadblock on clockspeed and, relatedly, single-threaded performance.
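
                                      You can still see the effect from userspace (hedged sketch; aggressive auto-vectorization blurs the picture, but built naively the second loop is usually noticeably faster because the core can keep several independent adds in flight):

                                      #include <cstdint>
                                      #include <vector>

                                      // One long dependency chain: every add waits on the previous result,
                                      // so the out-of-order machinery has little independent work to find.
                                      std::uint64_t sum_single(const std::vector<std::uint64_t>& v) {
                                          std::uint64_t s = 0;
                                          for (std::uint64_t x : v) s += x;
                                          return s;
                                      }

                                      // Four independent chains: the same work, but exposed as ILP that the
                                      // scheduler can overlap across the ALUs.
                                      std::uint64_t sum_four(const std::vector<std::uint64_t>& v) {
                                          std::uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
                                          std::size_t i = 0, n = v.size();
                                          for (; i + 4 <= n; i += 4) {
                                              s0 += v[i]; s1 += v[i + 1]; s2 += v[i + 2]; s3 += v[i + 3];
                                          }
                                          for (; i < n; ++i) s0 += v[i];
                                          return s0 + s1 + s2 + s3;
                                      }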

                                      1. 1

                                        the first mainstream out of order processor was the Pentium Pro (1996).

                                        Nope.

                                    1. 19

                                      “Your computer has plenty of RAM, so my app can be inefficient” gets a little irritating when I have to run ten other apps that did the exact same thing.

                                      1. 15

                                        As Niklaus Wirth said: “…we do not consider it as good engineering practice to consume a resource lavishly just because it happens to be cheap.”

                                        1. 2

                                          That’s not really true though. If your design balances the value you put on performance with, say, safety or debuggability, and performance is then bumped by a factor of 10,000 (roughly 20 years of Moore’s Law progress), it makes sense to spend the performance budget on the other concerns. To do otherwise is no more “good engineering practice” than saving weight on a bridge by not putting guard rails on it.

                                          Which is not to say I think Electron is a good or acceptable trade-off. If Javascript were less error-prone than its native competition it might’ve been, but it seems to be the opposite. Which means, I think, that Electron quite literally is a waste of resources.

                                          1. 5

                                            As a particular point, JavaScript may be a safe language (and perhaps node + qt would be a good framework), but the browser exposes a shit ton of attack surface and unexpected legacy behavior. I know we all love our lisp code is data and data is code, but it’s a poor model for reliable computing when the barrier between the code in the document and the code about the document is very thin.

                                            1. 5

                                              Sure, and I don’t think Wirth would object. It’s the “lavish” part: Vala and GTK is just as portable as Electron, if not more so, produces faster and smaller code, is type-safe, works well with debuggers, works with the platform’s accessibility tools (disabled users often find Electron apps literally impossible to use), has a consistent UI (consistency between apps is another big Electron problem), etc.

                                              Why bring along an entire web runtime that consumes tens of megs of memory if you could get the same or more bang for the buck with less?

                                              (Note that I’m using Vala as an example; this would apply equally to any other desktop framework like Qt, etc.)

                                        1. 10

                                          Any post that calls electron ultimately negative but doesn’t offer a sane replacement (where sane precludes having to use C/C++) can be easily ignored.

                                          1. 10

                                            There’s nothing wrong with calling out a problem even if you lack a solution. The problem still exists, and bringing it to people’s attention may cause other people to find a solution.

                                            1. 8

                                              There is something wrong with the same type of article being submitted every few weeks with zero new information.

                                              1. 1

                                                Complaining about Electron is just whinging and nothing more. It would be much more interesting to talk about how Electron could be improved since it’s clearly here to stay.

                                                1. 4

                                                  it’s clearly here to stay

                                                  I don’t think that’s been anywhere near established. There is a long history of failed technologies purporting to solve the cross-platform GUI problem, from Tcl/tk to Java applets to Flash, many of which in their heydays had achieved much more traction than Electron has, and none of which turned out in the end to be here to stay.

                                                  1. 2

                                                    I seriously doubt much of anything, good or bad, is here to stay in a permanent sense

                                                    1. 2

                                                      Thing is that Electron isn’t reinventing the wheel here, and it’s based on top of web tech that’s already the most used GUI technology today. That’s what makes it so attractive in the first place. Unless you think that the HTML/Js stack is going away, then there’s no reason to think that Electron should either.

                                                      It’s also worth noting that the resource consumption in Electron apps isn’t always representative of any inherent problems in Electron itself. Some apps are just not written with efficiency in mind.

                                                2. 5

                                                  Did writing C++ become insane in the past few years? All those GUI programs written before HTMLNative5.js still seem to work pretty well, and fast, too.

                                                  In answer to your question, Python and most of the other big scripting languages have bindings for gtk/qt/etc, Java has its own Swing and others, and it’s not uncommon for less mainstream languages (ex. Smalltalk, Racket, Factor) to have their own UI tools.

                                                  1. 4

                                                    Did writing C++ become insane in the past few years? All those GUI programs written before HTMLNative5.js still seem to work pretty well, and fast, too.

                                                    It’s always been insane, you can tell by the fact that those programs “crashing” is regarded as normal.

                                                    In answer to your question, Python and most of the other big scripting languages have bindings for gtk/qt/etc, Java has its own Swing and others, and it’s not uncommon for less mainstream languages (ex. Smalltalk, Racket, Factor) to have their own UI tools.

                                                    Shipping a cross-platform native app written in Python with PyQt or similar is a royal pain. Possibly no real technical work would be required to make it as easy as electron, just someone putting in the legwork to connect up all the pieces and make it a one-liner that you put in your build definition. Nevertheless, that legwork hasn’t been done. I would lay money that the situation with Smalltalk/Racket/Factor is the same.

                                                    Java Swing has just always looked awful and performed terribly. In principle it ought to be possible to write good native-like apps in Java, but I’ve never seen it happen. Every GUI app I’ve seen in Java came with a splash screen to cover its loading time, even when it was doing something very simple (e.g. Azureus/Vuze).

                                                    1. 1

                                                      Writing C++ has been insane for decades, but not for the reasons you mention. Template metaprogramming is a weird lispy thing that warps your mind in a bad way, and you can never be sane again once you’ve done it. I write C++ professionally in fintech and wouldn’t use anything else for achieving low latency; and I can’t remember the last time I had a crash in production. A portable GUI in C++ is so much work though that it’s not worth the time spent.

                                                    2. 1

                                                      C++ the language becomes better and better every few years– but the developer tooling around it is still painful.

                                                      Maybe that’s just my personal bias against cmake / automake.

                                                  1. 6

                                                    Don’t want overcommit? Turn it off.

                                                    me@host$ head -n3 /proc/meminfo 
                                                    MemTotal:       32815780 kB
                                                    MemFree:        13309168 kB
                                                    MemAvailable:   19461460 kB
                                                    me@host$ cat /proc/sys/vm/overcommit_memory 
                                                    0
                                                    me@host$ python -c 'import os; s = (12 << 30) * "."; print s[:3]; os.fork()'
                                                    ...
                                                    Traceback (most recent call last):
                                                      File "<string>", line 1, in <module>
                                                    OSError: [Errno 12] Cannot allocate memory
                                                    
                                                    1. 1

                                                      Don’t want overcommit? Turn it off.

                                                      Just don’t do it in production, because sooner or later you’ll come to understand why it’s the default.

                                                      1. 5

                                                        Sure, and I agree that it’s an appropriate default for most common-case systems (though there are legitimate reasons to want to disable it). My intent was basically “learn your system’s configuration options instead of griping about its defaults”.

                                                        1. 10

                                                          The issue is fork makes no-overcommit unreasonable. With a spawn model you don’t see worst-case memory usage balloon every time you start a program.
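
                                                          Roughly the contrast I mean (a sketch with error handling elided; posix_spawn is typically implemented with vfork/clone tricks precisely to avoid the copy):

                                                          #include <spawn.h>
                                                          #include <sys/wait.h>
                                                          #include <unistd.h>

                                                          extern char **environ;

                                                          int main() {
                                                              // fork(): the child starts as a copy-on-write duplicate, so with
                                                              // overcommit disabled the kernel must be able to back the whole
                                                              // parent address space a second time, even for a trivial child.
                                                              if (fork() == 0) {
                                                                  execlp("true", "true", (char *)nullptr);
                                                                  _exit(127);
                                                              }
                                                              wait(nullptr);

                                                              // posix_spawn(): the helper is started directly, so launching it
                                                              // never requires reserving another copy of a large parent's memory.
                                                              pid_t pid;
                                                              char *argv[] = {const_cast<char *>("true"), nullptr};
                                                              posix_spawnp(&pid, "true", nullptr, nullptr, argv, environ);
                                                              waitpid(pid, nullptr, 0);
                                                          }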

                                                    1. 3

                                                      I’m sure this is a minority opinion, but it would be nice if it were easy to opt-out of these changes.

                                                      For my home machines I’m not concerned about the security risk, and would rather have the better performance.

                                                      1. 5

                                                        It looks like the pti=off flag should get the old behavior back.

                                                        1. 2

                                                          I’m not concerned about the security risk

                                                            we don’t yet know what the security risks are.

                                                          1. 7

                                                            Shared computers are more shared. :)

                                                            1. 1

                                                              Well, we know it involves user processes reading kernel memory, and I’m confident that I’m not running any malicious user processes that are attempting to do so.

                                                              And the real issue is almost certainly not as bad as the scare mongering in The Register’s article.

                                                          1. 1

                                                                Why is the send bandwidth so much lower than receiving? Is the send just lower powered so it’s disproportionately affected?

                                                            1. 6

                                                              ADSL is always biased towards downstream transmission (The “A” stands for asymmetric, there is also SDSL.)

                                                              For ADSL2 Annex A, the theoretical maximums are 12Mbit down and 1.3Mbit up so they’re getting 28% of max downstream and 5% of max upstream. If it’s actually ADSL2+, they’re getting 14% of max downstream.

                                                              My guess is that the percentages are so unequal because the wet string’s frequency response is not flat(!) ADSL only has one transmission medium (the phone line) so it splits concurrent upstream & downstream communications by frequency. Upstream gets the lower frequencies.

                                                              There’s a good table on Wikipedia: https://en.wikipedia.org/wiki/Asymmetric_digital_subscriber_line#ADSL_standards

                                                            1. 3

                                                              Small correction: 128-bit CAS is supported on all(?) x64 processors. I used it for a lock-free hash table one time, since 16 character keys should be enough for anybody :).
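
                                                                  On x86-64 that comes out as cmpxchg16b; roughly what I did, as far as I remember (GCC/Clang builtins, __int128 is a compiler extension, and you need something like -mcx16 for it to inline rather than call libatomic):

                                                                  #include <cstdint>
                                                                  #include <cstring>

                                                                  using u128 = unsigned __int128;  // GCC/Clang extension

                                                                  // Claim an empty (all-zero) slot for a 16-character key with one
                                                                  // double-width compare-and-swap.
                                                                  bool claim_slot(u128 *slot, const char key[16]) {
                                                                      u128 desired;
                                                                      std::memcpy(&desired, key, 16);
                                                                      u128 expected = 0;  // empty slot
                                                                      return __atomic_compare_exchange_n(slot, &expected, desired,
                                                                                                         /*weak=*/false,
                                                                                                         __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
                                                                  }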

                                                              1. 1

                                                                Thanks for the correction!

                                                              1. 4

                                                                Guy runs into two bugs involving stale data and concludes every app must have a refresh button? That seems a bit presumptuous.

                                                                1. 18

                                                                  That’s kind of reductive. The message I got from the post was “please provide users this simple and well-known way to work around synchronization bugs“. It’s usually easy to implement and has no drawbacks save for a tiny bit of UI clutter.

                                                                  1. 1

                                                                    Okay, I see your point. I’m partial to the pull-to-refresh affordance since it stays out of the way the 95% of time you don’t need it. In apps that aren’t list-based, however, I don’t know what you’d do (shake-to-refresh?).

                                                                    I think I was irked by “every app that doesn’t have this is broken” and the assumption that the web-based paradigm of throwing everything out and getting a fresh copy is the best one for every app. In some apps, it’d be the equivalent of an elevator door close button, where hitting it doesn’t really do anything it isn’t already doing.

                                                                    I also think the real issue is how phones react to (or don’t) changes in network connectivity.

                                                                  2. 8

                                                                    The author uses two examples to illustrate the fact that every application which synchronizes can get desynchronized (seriously, every single one) and providing users with a workaround is essential.

                                                                    1. 5

                                                                      Going to have to agree with OP. Sometimes a refresh button doubles as an invaluable debugging tool.

                                                                      1. 4

                                                                        Does it? It seems, at best, like a way to work around bugs by destroying the state you probably need to debug the problem.

                                                                        1. 9

                                                                          As a user, what does destroying debug data matter to me? I’m not trying to debug your app, I’m trying to call a taxi. Further, how does omitting the refresh trigger help debugging? Most people don’t file bug reports, they just get annoyed, and denying them the tools to fix their problem will just make them more annoyed.

                                                                          If you want debug information, I’d argue a refresh button could be actively helpful: users press it when they think something’s stale, so by collecting telemetry on refresh presses you can get a clue as to when desynchronizations happen.

                                                                          1. 2

                                                                            I’m not asserting that you shouldn’t have one, I just don’t think it’s a debugging tool.

                                                                          2. 1

                                                                            Provide a refresh button in release, but not debug so that developers can’t use it to ignore real bugs :)

                                                                            1. 1

                                                                              Just because I have a hammer doesn’t mean I just go around hitting everything like it’s a nail…

                                                                          3. 1

                                                                            Even Gmail on Android allows the user to manually refresh despite it being timely, correct, and not error prone in 2017. I give some odds that the refresh action doesn’t do anything except address the user experience.

                                                                            Not defending buggy apps, but there is a take-away here.

                                                                          1. 4

                                                                            So I’ve only spent like two minutes looking, but I think you can also do pattern matching with switch(variant.index()) if you like.

                                                                            1. 6

                                                                              That would be non-exhaustive at compile-time

                                                                              1. 3

                                                                                What the author’s going for is selecting an alternative and getting the value out should be the same operation, so you can’t hit a runtime type error between checking the alternative and unboxing it.

                                                                                1. 1

                                                                                            A guarantee that the type won’t change in a concurrent environment is going to require a lot more machinery I think. There’s accessors to get pointers to the value. And in other cases, the value itself may be a pointer. If the type can change at any time, those pointers will be invalidated. If you want the borrow checker, it’s over there I guess, but I didn’t really get the sense that the author wants atomic selectors.

                                                                                  1. 3

                                                                                    I was unclear there. By “the same operation,” I just mean that selecting and unboxing shouldn’t be separate points in the code. I’m not about to argue that variant needs to be concurrent.

                                                                                    What I mean is just that in

                                                                                              switch (boink.index()) {
                                                                                    case 0: int val = get<0>(boink); 
                                                                                        ...
                                                                                    }
                                                                                    

                                                                                    you’re mentioning the index twice, and if you mistype one or the other you’ll only find out at runtime.
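
                                                                                              For contrast, a sketch of the visitation style, where picking the alternative and unboxing it happen together and a missing case is a compile error rather than a runtime surprise:

                                                                                              #include <cstdio>
                                                                                              #include <string>
                                                                                              #include <variant>

                                                                                              // The usual helper for building a visitor out of lambdas.
                                                                                              template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
                                                                                              template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

                                                                                              int main() {
                                                                                                  std::variant<int, std::string> boink = 42;
                                                                                                  std::visit(overloaded{
                                                                                                      [](int val)                { std::printf("int: %d\n", val); },
                                                                                                      [](const std::string& val) { std::printf("string: %s\n", val.c_str()); },
                                                                                                  }, boink);  // drop either lambda and this fails to compile
                                                                                              }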

                                                                                    1. 1

                                                                                      Oh, I see. Yeah, you could do switch ((idx = boink.index())) (always another workaround) but it is backtreading.

                                                                                2. 2

                                                                                  Then you’re writing exactly the same code as you would with a tagged union, except it’s actually worse because you don’t get -Wswitch. (warning: enumeration value ‘ASDF’ not handled in switch [-Wswitch])

                                                                                  1. 2

                                                                                              Well, it solves the problem of the tag and the value diverging because you forget to update one. Guess you get to choose which is worse. It seems the return type of index could be an enum (though obviously they didn’t do that). Not sure how much more difficult that would have been.

                                                                                1. 4

                                                                                  During college I got a job at the research computing group, optimizing a researcher’s Fortran code from single-core to parallel and distributed. The code was some kind of field propagation across a 3d grid, and the main loop was what you might expect: each cell takes its new value from it and its neighbors’ old values. In parallelizing this I found something interesting, though: this was being done in-place, so three of the point’s six neighbors would be from the current iteration and three would be from the previous, so the graph would end up skewing towards one quadrant. This was pretty obviously broken, and the grad student who wrote it hadn’t noticed. I passed word and the optimized/fixed version back to him, so I hope they updated the results for whatever project they were working on.
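
                                                                                            A 1D sketch of the difference (hypothetical C++, not the original Fortran):

                                                                                            #include <utility>
                                                                                            #include <vector>

                                                                                            // Buggy: updating in place, so left neighbours already hold this
                                                                                            // iteration's values while right neighbours still hold the last one's,
                                                                                            // which skews the propagation toward one side.
                                                                                            void step_in_place(std::vector<double>& u) {
                                                                                                for (std::size_t i = 1; i + 1 < u.size(); ++i)
                                                                                                    u[i] = (u[i - 1] + u[i] + u[i + 1]) / 3.0;
                                                                                            }

                                                                                            // Fixed: read old values, write into a separate buffer of the same
                                                                                            // size, then swap, so every cell sees only the previous iteration.
                                                                                            void step_buffered(std::vector<double>& u, std::vector<double>& next) {
                                                                                                next.front() = u.front();
                                                                                                next.back() = u.back();
                                                                                                for (std::size_t i = 1; i + 1 < u.size(); ++i)
                                                                                                    next[i] = (u[i - 1] + u[i] + u[i + 1]) / 3.0;
                                                                                                std::swap(u, next);
                                                                                            }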

                                                                                  The moral of the story being yes, code reviews for scientific codes would be super helpful. I would wager this sort of problem isn’t uncommon, and they don’t all get caught before release.

                                                                                  1. 1

                                                                                    It’d be really neat to be able to tune into altitude data, if it were available in the data set. I’m not sure what the best option is for trying to express that dimension of information on a 2D canvas though. At a fixed altitude, as the author mentions, vertical winds are simply expressed as low or zero movement, and tend to indicate what looks like shear boundaries for converging fronts.

                                                                                    Maybe a depth-fade render would look nice, with particles fading into high saturation and brightness as they move toward the viewer, and out to transparent (but not completely invisible) as they move toward the ground?

                                                                                    1. 1

                                                                                      Windy gives altitude data, though it just does it by letting you select a given altitude to slice. What would be really neat is if nullschool let you rotate the globe and showed 3d airmasses.

                                                                                    1. 7

                                                                                      This really seems like much ado over nothing to me. If you read the initial message, all he’s actually saying is that init’s become more complicated so you can no longer just set a low stack size on it and be sure it’ll never overflow. “Linus hates systemd” seems like a stretch, and I think is mostly from people taking that “sane thing” quote out of context.

                                                                                      1. 1

                                                                                        The article isn’t saying Linus hates systemd. It’s saying Linus doesn’t trust the systemd developers and considers their tactics to be harmful to his kernel. And it’s not about that one incident it’s about a pattern of incidents where Linus has dealt with fallout from systemd developers. The article is actually pretty balanced and paints Linus as pretty balanced as well.

                                                                                      1. 10

                                                                                        Fun article, but the author has a skewed definition of “firmware”. If you look at those controllers sitting in engines or satellites, “no instructions in it that aren’t pertinent to the job at hand” isn’t what you’ll find. Satellite controllers are usually an RTOS with preemptive threading and all that “fanciness”. Some of them run Lisp. And Linux supports this sort of environment, with the RT patchset.

                                                                                        Now, if you want to argue that there’s no way a bunch of VC’d twentysomethings should be trusted to write real-time reliable code for a fire alarm, that’s a fair argument. The failure mode of the alarms apparently backs you up. It helps to have an accurate idea of what competent control software looks like, though.

                                                                                        1. 2

                                                                                          Some of them run Lisp.

                                                                                          Actually that article says they do not run Lisp anymore.

                                                                                          This is a very interesting talk by the same author about something slightly different but which touches on the same points: https://www.youtube.com/watch?v=_gZK0tW8EhQ

                                                                                        1. 10

                                                                                          IP version 6 (IPv6) is a new version of the Internet Protocol (IP),

                                                                                          Nice! Since I have a policy of not immediately jumping onboard with any new technology, I can reset my IPv6 clock. It has been 0 days since IPv6 was new.

                                                                                          1. 3

                                                                                            That statement seems to carry over from the 1998 RFC: https://www.rfc-editor.org/rfc/rfc2460.txt

                                                                                            1. 1

                                                                                              The perils of copy paste coding/standardizing. :)

                                                                                            2. 2

                                                                                              An IETF spokesperson was quoted today saying “you are why we can’t have nice things.”