1. 1

    There is no real technical content in the link, just vague hints and “(see the book for more info)” repeated all over. The page doesn’t read like a description of the book - it reads like “here’s how to build this” and then doesn’t deliver.

    That, combined with the page opening by talking about how it’s all open source and shared with the community, felt really incongruent. Flagged as spam.

    1. 1

      I’m pretty sure that even Gmail’s spam detection system is better at detecting “spam” than this. There are two links on this site to the “backshed” forum detailing the circuit, PCB print, schematics, and firmware. The main page is a showcase/summary. The book is just a comprehensive guide/compilation; you can build without it. It’s mostly about wrapping the TF and best practices.

    1. 1

      Gallium Arsenide was also used in the Convex C2 and the Tera MTA.

      1. 21

        Looks functionally neat and aesthetically pleasing, but my terminal emulator is definitely something that I want to be open-source. Also, I get the need for it for this particular software, but a privacy policy for a terminal emulator? Pass.

        1. 5

          “Great research on colorschemes and contrast guidelines”, you say? #000 background and blue links?

          1. 5

            Above all, do no harm ;)

            1. 6

              The bg should be lighter, between #111 and #222 or something. And I would have chosen a warmer link color - orange or yellow, or even green for that old terminal feeling. If they absolutely have to use blue, at least make it lighter.

              1. 6

                The research kevinc did included resources like the WCAG contrast guidelines. Do you have some you could point to on how these changes would improve readability?

                1. 7

                  The WCAG 2.1 contrast guidelines are about minimum contrast. A bg color of #000 and a fg color of #FFF would get a full score in that respect. My point is that the contrast is too high and that dark mode is about making it easy on the eyes.

                  I understand that I’m being a bit picky here and I do see that you have put a lot of work into this, but popular dark mode color themes, and applications that default to dark mode, usually have a contrast ratio of around 10-12, not 16.
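
                  For reference, the ratio in question is easy to compute yourself - it’s the WCAG 2.1 relative luminance formula plus (L1 + 0.05) / (L2 + 0.05). A minimal sketch (the constants come from the spec; the helper names and example colors are just mine):

                  #include <math.h>
                  #include <stdio.h>

                  /* Linearize one sRGB channel (0-255), per the WCAG 2.1 definition. */
                  static double channel(double c)
                  {
                      c /= 255.0;
                      return c <= 0.03928 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
                  }

                  /* WCAG relative luminance of an sRGB color. */
                  static double luminance(int r, int g, int b)
                  {
                      return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
                  }

                  /* Contrast ratio between two luminances, always >= 1. */
                  static double contrast(double a, double b)
                  {
                      double hi = a > b ? a : b;
                      double lo = a > b ? b : a;
                      return (hi + 0.05) / (lo + 0.05);
                  }

                  int main(void)
                  {
                      double white = luminance(0xFF, 0xFF, 0xFF);
                      double offwhite = luminance(0xDD, 0xDD, 0xDD);
                      double black = luminance(0x00, 0x00, 0x00);
                      double gray = luminance(0x22, 0x22, 0x22);

                      printf("#FFF on #000: %.1f:1\n", contrast(white, black));    /* 21.0:1 */
                      printf("#DDD on #000: %.1f:1\n", contrast(offwhite, black)); /* ~15.5:1 */
                      printf("#FFF on #222: %.1f:1\n", contrast(white, gray));     /* ~15.9:1 */
                      return 0;
                  }

                  So pure white on pure black is the 21:1 maximum, and you land in the 15-16 range either by dimming the text a bit or by lifting the background towards #222.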

                  And using very saturated colors for text against a dark background will make the colors vibrate and make the text hard to read. This is kind of dark mode 101 and the reason for my initial snarky comment - which I apologize for :) An easy fix is to desaturate the color and make it lighter.

                  That being said, I do appreciate you taking the time to implement this!

                  1. 5

                    Thanks for that. I took the tactic of keeping the same contrast ratios for the different cases of black and white text as in light mode. WCAG AA had to do with the least visible text; for instance, the footer links and upvote counts were darkened in light mode to hit 4.5:1. Colored text did become paler for dark mode, just not by a lot.

                    By now many people have cited body text contrast being too high, and I’m inclined to agree that it doesn’t need to be as high in dark mode as in light mode. Whether to lighten the background has more to do with fitting into the OS environment alongside other dark mode apps and sites. Up to now that was a non-goal; I put the phone dark mode experience first. But I should not be surprised that so many Lobste.rs users prefer dark mode on their desktops in the daytime. :)

                  2. 3

                    Most people’s physiology perceives lower contrast between black and blue than between black and the other primary colors. I do appreciate the effort you and kevinc made, and I get that design by committee is super frustrating and generally not a recipe for success. But it does take tweaking to get a color palette to fit a perceptual model better than RGB.

                    Sources:

                  3. 3

                    My goal for this PR was not to invent a new color palette but to choose dark-background appropriate variants of the existing palette. Entirely new colors were just out of scope this time.

                    Lightening the background is not an unexpected request, but we will want reasons. For examples of what we’ve thought through, check the pre-merge discussion in the PR.

                    1. 8

                      “A dark theme uses dark grey, rather than black, as the primary surface color for components. Dark grey surfaces can express a wider range of color, elevation, and depth, because it’s easier to see shadows on grey (instead of black).

                      Dark grey surfaces also reduce eye strain, as light text on a dark grey surface has less contrast than light text on a black surface.”

                      Material guidelines

                      I find it hard to read white on black, as it looks like headlights on a pitch black night to me, and I can’t see the text clearly, but I know it’s also the case that others need more contrast. When I’m reading dark-on-light, I need more contrast.

                      With a ‘softer’ look than black + white, the user should theoretically be able to set higher contrast as an option in their OS, but I have no idea how widely this is supported. I’ve just tried it with duckduckgo in Safari on MacOS and it did seem to work - though I’m not sure the page did anything itself.

                      “Prefer the system background colors. Dark Mode is dynamic, which means that the background color automatically changes from base to elevated when an interface is in the foreground, such as a popover or modal sheet. The system also uses the elevated background color to provide visual separation between apps in a multitasking environment and between windows in a multiple-window context. Using a custom background color can make it harder for people to perceive these system-provided visual distinctions.”

                      Apple guidelines

                      I’m not sure you can find/use the system colours on the web.

                      Here’s a desktop screenshot with lobste.rs visible - notice that it’s the only black background on the screen.

                      1. 2

                        Thanks for these details. In particular that screenshot is helpful.

                        It’s true that on your phone at night, the site may be your only light source. A goal of mine was that if any site is suddenly too bright for you, it shouldn’t be this one. But on your desktop, the site shares a lit environment with your other windows. The most common background color is perhaps a bit driven by fashion, but that’s a fact of life, so let’s deal with it. It is probably worthwhile to get along with the other windows on our screens.

                        Given that we already theme mobile CSS with a media query, what do you think about the phone use case?

                        1. 5

                          I don’t think you should rely on detecting mobile to target OLED screens. OLED screens are gradually becoming available on tablets and larger screens, and not all mobile screens are OLED. I’ve been trying to figure out a way to target OLED for web design and I don’t know of a good way to do it.

                          It’s a shame that the prefers-color-scheme options are just light and dark, rather than e.g. light, dark, and black. It seems like some people want pure OLED black and some don’t, and I’m not sure you’ll ever get them to agree.

                          Personally I’ve decided to err on the side of assuming everyone has OLED, because I’d like OLED screens to get the energy savings when possible, and I just personally like the aesthetic. If it’s good enough for https://thebestmotherfucking.website/ it’s good enough for me.

                          1. 2

                            Very dark gray is also efficient on OLED. It’s not a binary like #000 saves power and #222 is suddenly max power. I think power is just proportional to brightness?

                            1. 2

                              Interesting; I hadn’t known that about OLEDs. Your statement is correct according to this 2009 Ars Technica article citing a 2008 presentation about OLEDs:

                              power draw varies pretty linearly with mean gray levels

                              The article also has a table showing the power used by an OLED screen to display five example images of varying brightness. For mostly-white screens, OLEDs are actually less power-efficient than LCDs.
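
                              Taking the quoted claim at face value, here’s a rough back-of-the-envelope sketch (a toy linear model, not measured panel data):

                              #include <stdio.h>

                              /* Toy model only: assume full-screen power draw scales roughly
                               * linearly with the mean gray level, as the quoted presentation
                               * suggests. Real panels are messier, but it shows that #000 vs
                               * #222 is nowhere near an all-or-nothing difference. */
                              static double relative_power(int gray)
                              {
                                  return gray / 255.0; /* 1.0 == full-screen white */
                              }

                              int main(void)
                              {
                                  printf("#000: %5.1f%% of full white\n", 100.0 * relative_power(0x00));
                                  printf("#222: %5.1f%% of full white\n", 100.0 * relative_power(0x22));
                                  printf("#888: %5.1f%% of full white\n", 100.0 * relative_power(0x88));
                                  printf("#FFF: %5.1f%% of full white\n", 100.0 * relative_power(0xFF));
                                  return 0;
                              }

                              By that model a #222 background sits around 13% of a full-white screen, so a very dark gray still keeps most of the savings.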

                          2. 2

                            I don’t usually have dark mode set on my phone but I’ve just tried it and… it’s surprising.

                            The black background doesn’t have the same problem here. I can read text without it looking like headlights at night in the rain!

                            Maybe it’s due to OLED, or maybe something to do with the phone seeming to dynamically adjust brightness? No idea but it’s fine.

                        2. 2

                          Thanks for the link. I’ve read through the discussion and I understand now that you have put some thought into this. That’s good enough for me :)

                  1. 3

                    Pulling a 24-hour work day. #sleepisfortheweak

                    1. 2

                      On the weekend? That sucks. How did that come about?

                      1. 1

                        I love pulling all-nighters or double all-nighters. That’s just what I do. Did 52 deploys the other day.

                    1. 1

                      You might be interested in Movitz; it’s a freestanding Common Lisp system for x86 PCs.

                      1. 2

                        I know! It was one of the inspirations.

                      1. 2

                        This was at 0 points, but without a hide or a flag? How is that possible on lobsters, as there is no downvoting?

                        1. 1

                          Stories receive an implicit upvote from the submitter; they may have clicked the arrow again to remove the upvote.

                          1. 2

                            Nope, that didn’t happen. Someone probably doesn’t like my submission. 🤷‍♂️

                        1. 2

                          The problem is an unfortunate incompatibility of licenses: perf is GPLv2, not GPLv2+, and libbfd is GPLv3+

                          That’s really stupid. Wait, why do people think that license compatibility matters for dynamic linking again? Isn’t it completely nuts to think that linking things dynamically creates a derivative work? Who does the “derivation” exactly? rtld in memory? By that line of thinking, running a GPLed program on Windows would imply that Windows needs to comply with the GPL.

                          (Also, regardless of that, why don’t the maintainers of these projects see this and think, “I’ll add the + or at least a special exception to fix this”?)

                          1. 4

                            The real question is why isn’t libbfd LGPL?

                            1. 2

                              libbfd isn’t intended for use by third-parties; it’s a holding place for code shared between binutils, gdb, and gcc.

                            2. 2

                              It’s only a problem for distros that only let privileged developers submit binary packages, because of redistribution problems. I’m currently on NixOS and my version of perf is linked with libbfd.

                              1. 1

                                Could you elaborate on this? How does a privileged group of developers relate to distribution problems?

                                1. 2

                                  If a distro only uses ports trees, then distributing binaries is an optimization rather than an essential part of software delivery. Ports trees generally work around composition issues because end users are allowed to run nearly any combination of software on their personal computers.

                                  1. 1

                                    I see, that definitely makes sense when you explain it that way.

                                    Yet it still feels weird to me, as I can easily download the deb src package to build and install manually. Even though it’s different from a ports tree in structure (deb src packages are tools for making the binary distribution, ports packages are optimizations of a source distribution), from an end user perspective it’s just different ergonomics.

                                    1. 2

                                      The key difference is probably how options are specified. As I understand it, a deb src package contains a recipe for building a .deb binary package. In contrast, a port typically contains a recipe for building a set of binary packages. Some of these may permit binary redistribution, others don’t. When the FreeBSD package build infrastructure runs, it will not build configurations that don’t allow binary distribution, but if you run poudriere locally then you can enable these options.

                                      Of course, perf could just link to LLVM’s Symbolizer library and get this functionality without any problems, but that’s probably a hard sell at Red Hat, which employs most of the GNU toolchain developers.

                            1. 4

                              This circumvents Microsoft’s anti-hijacking protections that the company built into Windows 10 to ensure malware couldn’t hijack default apps. Microsoft tells us this is not supported in Windows

                              Uhhh…

                              1. 21

                                Beware companies claiming they do something for the security of their users when it also affects their bottom line. Security, “anti-hijacking” and related terms are often used manipulatively (especially in EULAs!).

                                Restricting the choice of default browser is not an effective way of protecting user security or privacy:

                                1. Situation: viewing malware sites and suffering a drive-by-attack: I have no reason to believe Edge to be better (on average) than other major browsers.
                                2. Situation: malware addons: I have no reason to believe Edge to be better (on average) than other major browsers, all addon sites have reports of malware addons or addon authors turning bad (eg selling control of their successful addon).
                                3. Situation: malware already running on your computer, wants to change your default browser: by this point it’s too late, making ‘changing the default browser’ more obscure is not an effective defence of a user’s security or privacy.

                                Making it harder for users to change browser (and directly suggesting they do not do it with a little info box when they try, as Win10 does) is an effective method of enforcing market security. That’s not user security.

                                You start to get a sense of manipulation when you read Microsoft’s statements about Edge and privacy:

                                Like all modern browsers, Microsoft Edge lets you collect and store specific data on your device, like cookies, and lets you send information to us, like browsing history, to make the experience as rich, fast, and personal as possible.

                                That’s flat-out false. Not “all modern browsers” send information like “browsing history” to their makers. Notice how they have designed this sentence to make it feel normal and acceptable.

                                Whenever we collect data, we want to make sure it’s the right choice for you.

                                Uhuh. Is that the only reason you share data? Somehow you must be making money off this, otherwise you wouldn’t be doing it, right?

                                https://privacy.microsoft.com/en-ca/privacystatement

                                For example, we share your content with third parties when you tell us to do so, such as when you send an email to a friend, share photos and documents on OneDrive, or link accounts with another service.

                                Manipulative writing by businesses like this makes me ill. In a different context (e.g. flyers in your letterbox), this style of writing would be considered scam material.

                                1. 12

                                  Mozilla has been trying to convince Microsoft to improve its default browser settings in Windows since its open letter to Microsoft in 2015. Nothing has changed, and Windows 11 is now making it even harder to switch default browsers.

                                  Microsoft and anti-competitive practices go hand in hand; nothing to be surprised about.

                                  1. 4

                                    Was more concerned about the obvious security implications! If ff can do it, what is stopping malware from doing it?

                                    1. 15

                                      Likewise if Edge can bypass the mechanisms in the background, what’s stopping malware from doing it? Or apparently Firefox 😆😭

                                      1. 4

                                        Yep. I’m in a slightly weird position here: I think Microsoft is right to lock down that API; I just think they’re wrong for unlocking it for Edge. So I’d prefer neither Mozilla nor Edge could pull this stunt.

                                        1. 2

                                          Theoretically the mechanism could check that the software performing the bypass comes from microsoft (via cryptographic signature) and is therefore “safe”. It is possible for microsoft to allow Edge to bypass it and nothing else.

                                          I’m actually sort of surprised they didn’t, but I guess doing it properly would have taken more work.

                                          1. 2

                                            Or perhaps it was a silent protest by the engineers involved to allow firefox to do this.

                                        2. 6

                                          Nothing, of course, which isn’t too surprising, as this is pretty unlikely to have ever been about malware in the first place. If it had been, we’d have seen a real, secure API exposed to developers, whereas this is barely security by obscurity.

                                          1. 4

                                            Nothing is stopping malware engineers from adding associations; SetUserFTA has been available for years.

                                      1. 4

                                        This is actually not a bad rundown, though I feel like the discussion of UB lacks the correct nuance. When referring to integer overflow:

                                        The GNU C compiler (gcc) generates code for this function which can return a negative integer

                                        No, it doesn’t “return a negative integer”, it has already hit undefined-behaviour-land by that point. The program might appear to behave as if a negative integer was returned, but may not do so consistently, and that is different from having a negative integer actually returned, especially since the program might even exhibit odd behaviours that don’t correspond to the value being negative or the arithmetically correct value, or which don’t even appear to involve the value at all. (Of course, at the machine level, it might do a calculation which stores a negative result into a register or memory location; but, that’s the wrong level to look at it, because the presence of the addition operation has effects on compiler state that can affect code generation well beyond that one operation. Despite the claim being made often, C is not a “portable assembler”. I’m glad this particular article doesn’t make that mistake).

                                        1. 3

                                          What? The code in question:

                                          int f(int n)
                                          {
                                              if (n < 0)
                                                  return 0;
                                              n = n + 100;
                                              if (n < 0)
                                                  return 0;
                                              return n;
                                          }
                                          

                                          What the article is saying is that on modern C compilers, the check for n < 0 indicates to the compiler that the programmer is rejecting negative numbers, and because programmers never invoke undefined behavior (cough cough yeah, right) the second check when n < 0 can be removed because of course that can’t happen!

                                          So what can actually happen in that case? An aborted program? Reformatted hard drive? Or a negative number returned from f() (which is what I suspect would happen in most cases)? Show generated assembly code to prove or disprove me please … (yes, I’m tired of C language lawyers pedantically warning about possible UB behavior).
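
                                          For anyone who wants to check locally, here’s a small harness one could compile at different optimization levels (the behaviours noted in the comments are what current gcc/clang typically do, not guarantees):

                                          #include <limits.h>
                                          #include <stdio.h>

                                          /* The function from the article, unchanged. */
                                          int f(int n)
                                          {
                                              if (n < 0)
                                                  return 0;
                                              n = n + 100;
                                              if (n < 0)
                                                  return 0;
                                              return n;
                                          }

                                          int main(void)
                                          {
                                              /* At -O2 the second check is typically removed, so this tends to
                                               * print a negative-looking value. At -O0 or with -fwrapv you
                                               * usually get 0 instead, and -fsanitize=undefined reports the
                                               * signed overflow at runtime. */
                                              printf("f(INT_MAX) = %d\n", f(INT_MAX));
                                              return 0;
                                          }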

                                          1. 3

                                            because programmers never invoke undefined behavior

                                            They shouldn’t, but they often do. That’s why articles such as the one in the title should be super clear about the repercussions.

                                            So what can actually happen in that case?

                                            Anything - that’s the point. That’s what the “undefined” in “undefined behaviour” means.

                                            (yes, I’m tired of C language lawyers pedantically warning about possible UB behavior).

                                            The issue is that a lot of this “possible UB behaviour” is actual compiler behaviour, but it’s impossible to predict which exact behaviour you’ll get.

                                            You might be “tired of C language lawyers pedantically warning about possible UB behaviour”, but I’m personally tired of programmers invoking UB and thinking that it’s ok.

                                            1. 1

                                              They shouldn’t, but they often do.

                                              Yes they do, but only because there are a lot of undefined behaviors in C. The C standard lists them all (along with unspecified, implementation-defined, and locale-specific behaviors). You want to know why they often do? Because C89 defined about 100 undefined behaviors, C99 about 200 and C11 300. It’s a bit scary to think that C code that is fine today could cause undefined behavior in the future—I guess C is a bit like California; in California everything causes cancer, and in C, everything is undefined.

                                              A lot historically came about because of different ways CPUs handle certain conditions—the 80386 will trap any attempt to divide by 0 [1] but the MIPS chip doesn’t. Some have nothing to do with the CPU—it’s undefined behavior if a C file doesn’t end with a new line character. Some have to do with incorrect library usage (calling va_arg() without calling va_start()).

                                              I’m personally tired of programmers invoking UB and thinking that it’s ok.

                                              Undefined behavior is just that—undefined. Most of the undefined behavior in C is pretty straightforward (like calling va_arg() incorrectly); it’s really only signed-integer math and pointers where undefined behavior gets really bad. Signed-integer math is bad only in that it might generate invalid indices for arrays or for pointer arithmetic (I mean, incorrect answers are still bad, but I’m more thinking of security here). Outside of that, I don’t know of any system in general use today that will trap on signed overflow [2]. So I come back to my original “What?” question. The x86 and ARM architectures have well-defined signed integer semantics (they wrap! I’ve yet to come across a system where that doesn’t happen, again [2]), so is it any wonder that programmers will invoke UB and think it’s okay?

                                              And for pointers, I would hazard a guess that most programmers today don’t have experience with segmented architectures, which is where a lot of the weirder pointer rules probably stem from. Pointers by themselves aren’t the problem per se; it’s C’s semantics for pointers and arrays that lead to most, if not all, of the problems with undefined behavior around pointers (in my opinion). Saying “Oh! Undefined behavior has been invoked! Abandon all hope!” doesn’t actually help.

                                              [1] IEEE-754 floating point doesn’t trap on division by 0.

                                              [2] I would love to know of a system where signed overflow is trapped. Heck, I would like to know of a system where trap representations exist! Better yet, name the general purpose systems I can buy new, today, that use sign magnitude or 1s-complement for integer math.

                                              1. 2

                                                Because C89 defined about 100 undefined behaviors, C99 about 200 and C11 300

                                                It didn’t define them; it listed circumstances which have undefined behaviour. This may seem nit-picky, but the necessity of correctly understanding what “undefined behaviour” is forms the premise of my original post.

                                                A draft of C17 that I have lists 211 undefined behaviours. An article on UB - https://www.cs.utah.edu/~regehr/ub-2017-qualcomm.pdf - claims 199 for C11. I don’t think your figure of 300 is correct.

                                                A bunch of the C11 circumstances for UB are to do with the multi-threading support which didn’t exist in C99. In general I don’t think there’s any strong reason to believe that code with clearly well-specified behaviour now will have UB in the future.

                                                So I come back to my original “What?” question

                                                It’s not clear to me what your “what?” question is about. I elaborated in the first post on what I meant by “No, it doesn’t “return a negative integer””.

                                                Compilers will, for example, remove checks for conditions that are impossible (in the absence of UB), and do other things that may be even harder to predict; C programmers should be aware of that.

                                                Now, if you want to argue “compilers shouldn’t do that”, I wouldn’t necessarily disagree. The problem is: they do it, and the language specification makes it clear that they are allowed to do it.

                                                The x86 and ARM architectures have well defined signed integer semantics

                                                so is it any wonder that programmers will invoke UB and think it’s okay?

                                                This illustrates my point: if we allow the view of C as a “portable assembly language” to be propagated, and especially the view of “UB is just the semantics of the underlying architecture”, we’ll get code being produced which doesn’t work (and worse, is in some cases exploitable) when compiled by today’s compilers.

                                                1. 1

                                                  I don’t think your figure of 300 is correct.

                                                  You are right. I recounted, and there are around 215 or so for C11. But there’s still that doubling from C89 to C99.

                                                  No, it doesn’t “return a negative integer”, it has already hit undefined-behaviour-land by that point.

                                                  It’s not clear to me what your “what?” question is about.

                                                  Unless the machine in question traps on signed overflow, the code in question returns something when it runs. Just saying “it’s undefined behavior! Anything can happen!” doesn’t help. The CPU will either trap, or it won’t. There is no third thing that can happen. An argument can be made that CPUs should trap, but the reality is nearly every machine being programmed today is a byte-oriented, 2’s complement machine with defined signed overflow semantics.

                                                  1. 1

                                                    Just saying “it’s undefined behavior! Anything can happen!” doesn’t help

                                                    It makes it clear that you should have no expectations on behaviour in the circumstance - which you shouldn’t.

                                                    Unless the machine in question traps on signed overflow, the code in question returns something when it runs.

                                                    No, as already evidenced, the “result” can be something that doesn’t pass the ‘n < 0’ check yet displays as a negative when printed, for example. It’s not a real value.

                                                    The CPU will either trap, or it won’t

                                                    C’s addition doesn’t map directly to the underlying “add” instruction of the target architecture; it has different semantics. It doesn’t matter what the CPU will or won’t do when it executes an “add” instruction.

                                          2. 1

                                              Yes, the code generated does in fact return a negative integer. You shouldn’t rely on it; another compiler may do something different. But once compiled, undefined behaviour isn’t relevant anymore. The generated x86 does in fact contain a function that may return a negative integer.

                                            Again, it would be completely legal for the compiler to generate code that corrupted memory or ejected your CD drive. But this statement is talking about the code that happened to be generated by a particular run of a particular compiler. In this case it did in fact emit a function that may return a negative number.

                                            1. 1

                                              When we talk about undefined behaviour, we’re talking about the semantics at the level of the C language, not the generated code. (As you alluded, that wouldn’t make much sense.)

                                                At some point you have to map semantics between source and generated code. My point was, you can’t map the “generates a negative value” of the generated code back to the source semantics. We only say it’s a negative value on the basis that its representation (bit pattern) is that of a negative value, as typically represented in the architecture, and even then we’re assuming that some register (for example) that is typically used to return values does in fact hold the return value of the function …

                                              … which it doesn’t, if we’re talking about the source function. Because that function doesn’t return once undefined behaviour is invoked; it ceases to have any defined behaviour at all.

                                              I know this is highly conceptual and abstract, but that’s at the heart of the message - C semantics are at a higher level than the underlying machine; it’s not useful to think in terms of “undefined behaviour makes the function return a negative value” because then we’re imposing artificial constraints on undefined behaviour and what it is; from there, we’ll start to believe we can predict it, or worse, that the language semantics and machine semantics are in fact one-to-one.

                                              I’ll refer again to the same example as was in the original piece: the signed integer overflow occurs and is followed by a negative check, which fails (“is optimised away by the compiler”, but remember that optimisation preserves semantics). So, it’s not correct to say that the value is negative (otherwise it would have been picked up by the (n < 0) check); it’s not guaranteed to behave as a negative value. It’s not guaranteed to behave any way at all.

                                              Sure, the generated code does something and it has much stricter semantics than C. But saying that the generated function “returns a negative value” is lacking the correct nuance. Even if it’s true that in some similar case, the observable result - from some particular version of some particular compiler for some particular architecture - is that the number always appears to be negative, this is not something we should in any way suggest is the actual semantics of C.

                                            2. 0

                                              Of course, at the machine level, it might do a calculation which stores a negative result into a register or memory location; but, that’s the wrong level to look at it, because the presence of the addition operation has effects on compiler state that can affect code generation well beyond that one operation.

                                              Compilers specifically have ways of ensuring that there is no interference between operations, so no. This is incorrect. Unless you want to point to the part of the GCC and Clang source code that decides unexpectedly to stop doing that?

                                              1. 1

                                                In the original example, the presence of the addition causes the following negative check (n < 0) to be omitted from the generated code.

                                                Unless you want to point to the part of the GCC and Clang source code that decides unexpectedly to stop doing that?

                                                If that’s at all a practical suggestion, perhaps you can go find the part that ensures “that there is no interference between operations” and point that out?

                                                1. 1

                                                  In the original example, the presence of the addition causes the following negative check (n < 0) to be omitted from the generated code.

                                                  Right, because register allocation relies upon UB for performance optimization. It’s the same in both GCC and Clang (Clang is actually worse with regard to its relentless use of UB to optimize opcode generation; presumably this is also why they have more tooling around catching errors and sanitizing code). This is a design feature from the perspective of compiler designers. There is absolutely nothing in the literature to back up your point that register allocation suddenly faceplants on UB – I’d be more than happy to read it if you can find it, though.

                                                  If that’s at all a practical suggestion, perhaps you can go find the part that ensures “that there is no interference between operations” and point that out?

                                                  *points at the entire register allocation subsystem*

                                                  But no, the burden of proof is on you, as you made the claim that the register allocator and interference graph fails on UB. It is up to you to prove that claim. I personally cannot find anything that backs your claim up, and it is common knowledge (backed up by many, many messages about this on the mailing list) that the compiler relies on Undefined Behaviour.

                                                  Seriously, I want to believe you. I would be happy to see another reason why having the compiler rely on UB is a negative point. For this reason I would also accept a code example where you can use the above example of UB to cause the compiler to clobber registers and return an incorrect result. The presence of a negative number alone is not sufficient, as that does not demonstrate register overwriting.

                                                  1. 2

                                                    There is absolutely nothing in the literature to back up your point that register allocation suddenly faceplants on UB

                                                    What point? I think you’ve misinterpreted something.

                                                    you made the claim that the register allocator and interference graph fails on UB

                                                    No, I didn’t.

                                                  2. 1

                                                    It isn’t the addition by itself; the second check is omitted because n is already known to be non-negative, and since signed overflow is undefined, adding 100 means n must be at least 100. Here’s the example with value range annotations for n.

                                                    int f(int n)
                                                    {
                                                        // [INT_MIN, INT_MAX]
                                                        if (n < 0)
                                                        {
                                                            // [INT_MIN, -1]
                                                            return 0;
                                                        }
                                                        // [0, INT_MAX]
                                                        n = n + 100;
                                                        // [100, INT_MAX] - overflow is undefined so n must be >= 100 
                                                        if (n < 0)
                                                        {
                                                            return 0;
                                                        }
                                                        return n;
                                                    }
                                                    
                                                    1. 2

                                                      You’re correct that I oversimplified it. The tone of the person I responded to was combative and I couldn’t really be bothered going into detail again on something that I’ve now gone over several times in different posts right here in this discussion.

                                                      As you point out, it’s the combination of “already compared to 0” and “added a positive integer” that makes the final comparison to 0 redundant. The original point stands: the semantics of C, and in particular the possibility of UB, mean that a simple operation can affect later code generation.

                                                      Here’s an example that works without interval analysis (edit: or rather, that requires slightly more sophisticated analysis):

                                                      int f(int n)
                                                      {
                                                          int orig_n = n;
                                                          n = n + 100;
                                                          if (n < orig_n)
                                                          {
                                                              return 0;
                                                          }
                                                          return n;
                                                      }
                                                      
                                              1. 3

                                                I haven’t seen that name in a long time. I can still feel it :D

                                                1. 3

                                                  I never had those, but I bought a number of Maxtor 20-40GB drives at the same time and I don’t think any of them lasted more than two years. IBM handled it very badly, but it was a pretty awful time for spinning rust. When the first laptops with SSDs came out and people were talking about the write limit of SSDs, a back-of-the-envelope calculation suggested that they were much more reliable than any spinning-rust disks that I’d had over the prior 10 years.

                                                  Disks have become a lot more reliable since the early 2000s. I don’t think I had a single disk last more than 3 years between about 1995 and 2005. I hope that the DeathStar was a bit of a wake-up call for the industry.

                                                  1. 1

                                                    You know those sick guys who freeze themselves hoping that there may be a cure in the future? Well, I kind of do that with all my old disks! Keeping them, hoping that one day, maybe, they can be restored :D

                                                1. 3

                                                  Long time since an article has made me nod all the way down :)

                                                  1. 2

                                                    Here’s the abstract.

                                                    Multiplying matrices is among the most fundamental and compute-intensive operations in machine learning. Consequently, there has been significant work on efficiently approximating matrix multiplies. We introduce a learning-based algorithm for this task that greatly outperforms existing methods. Experiments using hundreds of matrices from diverse domains show that it often runs 100× faster than exact matrix products and 10× faster than current approximate methods. In the common case that one matrix is known ahead of time, our method also has the interesting property that it requires zero multiply-adds. These results suggest that a mixture of hashing, averaging, and byte shuffling−the core operations of our method−could be a more promising building block for machine learning than the sparsified, factorized, and/or scalar quantized matrix products that have recently been the focus of substantial research and hardware investment.

                                                    1. 1

                                                      Here’s the abstract.

                                                      Recent years have brought microarchitectural security into the spotlight, proving that modern CPUs are vulnerable to several classes of microarchitectural attacks. These attacks bypass the basic isolation primitives provided by the CPUs: process isolation, memory permissions, access checks, and so on. Nevertheless, most of the research was focused on Intel CPUs, with only a few exceptions. As a result, few vulnerabilities have been found in other CPUs, leading to speculations about their immunity to certain types of microarchitectural attacks. In this paper, we provide a black-box analysis of one of these underexplored areas. Namely, we investigate the flaw of AMD CPUs which may lead to a transient execution hijacking attack. Contrary to nominal immunity, we discover that AMD Zen family CPUs exhibit transient execution patterns similar to Meltdown/MDS. Our analysis of exploitation possibilities shows that AMD’s design decisions indeed limit the exploitability scope compared to Intel CPUs, yet it may be possible to use them to amplify other microarchitectural attacks.

                                                      1. 2

                                                        Here’s the abstract.

                                                        Recent work showed that compiling functional programs to use dense, serialized memory representations for recursive algebraic datatypes can yield significant constant-factor speedups for sequential programs. But serializing data in a maximally dense format consequently serializes the processing of that data, yielding a tension between density and parallelism. This paper shows that a disciplined, practical compromise is possible. We present Parallel Gibbon, a compiler that obtains the benefits of dense data formats and parallelism. We formalize the semantics of the parallel location calculus underpinning this novel implementation strategy, and show that it is type-safe. Parallel Gibbon exceeds the parallel performance of existing compilers for purely functional programs that use recursive algebraic datatypes, including, notably, abstract-syntax-tree traversals as in compilers.