Threads for gerikson

  1.  

    Previously submitted 6 years ago: https://lobste.rs/s/6ih126/freebsd_lesson_poor_defaults (6 comments)

    It’s the same URL, I am not sure why the dupe detector did not catch this.

    1.  

      That was a link to the .txt version, it’s now .html.

      This link gets shared around every now and then, and my response is always the same: there is some useful insight, but there’s also information so outdated it provides no value, outright misinformation, and self-contradiction. Some of the technical points are fair, and should be (and are being) addressed. But the commentary is often laughably wrong. The document seems more focused on advancing an agenda than on making a good-faith effort to improve security in FreeBSD.

    1. 14

      What an unfortunate logo.

      1. 18

        I think it’s nice.

      1. 4

        FWIW, downstreams don’t seem to be fans of this - at least Alpine isn’t.

        1. 3

          The Alpine reasoning seems flawed to me. Having a problem with the fact that a package has decided to hardcode a host when the user only specified the protocol and resource (i.e. you said https://foo.jpg so we’re just gonna pick https://rando.com/foo.jpg) is one thing. But the reasoning given for rejecting the package was that the hardcoded host may become malicious at some unspecified point in the future. The same could be said about literally any HTTPS endpoint anywhere.

          1. 5

            IPFS is a content distribution platform, so there’s going to be content that the “media mafia” doesn’t like (just like torrents). ffmpeg is a tool for consuming such content. If you have a (silent) hardcoded gateway/proxy for IPFS within ffmpeg, and IPFS takes off as the next torrent, and everyone happily uses the same proxy, it’s possible for copyright enforcement agents to silently suborn the proxy and keep it running to gather evidence.

        1. 2

          I have 2 more weeks of vacation starting. We’re down at my parents’ for a big ole shindig with cousins on my mother’s side.

          I have some plans to refactor HN&&LO. The DB queries are … suboptimal.

          Stuff needs to be done around the home.

          Oh, and Elden Ring.

          1. 7

            Use of tools that were still wet was part of the culture.

            But it hardened incredibly quickly. Why do Makefile recipes have to start with a tab? It was an easy hack, and by the time the author realized it was a bad idea, dozens of people were reliant on it!

            Tabs and Makefile (2015)

            1. 3

              I interpreted that claim about “use of tools that were still wet” as an admission of this very fact; saying that it still had a very bad idea in it (tabs) but people used it anyway.

              1. 1

                You don’t have to use tabs. Just override .RECIPEPREFIX
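
                For example (GNU make 3.82 or later; the target and command here are invented for illustration):

```make
# Tell GNU make to use '>' instead of a tab as the recipe prefix.
.RECIPEPREFIX := >

hello:
>echo "no tabs were harmed in this recipe"
```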

                1. 2

                  Now I’m itching to write a makefile using emoji instead of tabs. Or one of those weird invisible Unicode characters.

              1. 8

                There seems to be some lede-burying going on. Not having heard of this before, even after reading the first few pages of the spec I’m still wondering what the justification is for reinventing floating-point arithmetic. Is this format more accurate? Faster? Easier to implement?

                It’s been kind of nice having a single standard for floats. Aside from endianness, you don’t have to worry about which format some library uses, or having to tag the type of a value, or choosing which format to use in new code. Unlike, say, character encodings, which used to be a horrible mess before UTF-8 took over.

                1. 10

                  The main sales pitch for Posits is that for a given number of bits, typical numerical computations retain more accuracy. Or conversely, you can use fewer bits to achieve a given level of accuracy.

                  Posits have better semantics. As a language implementor, I have beefs with the semantics of IEEE floats, which do not map properly onto real numbers, and have shittier mathematical properties than is necessary. The worst problem is NaN, and the rule that NaN != NaN. My language supports equational reasoning, and has an equality operator that is an equivalence relation: a==a, a==b implies b==a, a==b and b==c implies a==c. The semantics of negative 0 is also a big problem. The infinities are easier to deal with. These problems are fixed by Posits.
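
                  A quick Python illustration of the reflexivity problem (this is plain IEEE 754 behaviour, nothing language-specific):

```python
import math

nan = float("nan")

# Reflexivity fails: a == a is False when a is NaN.
print(nan == nan)       # False
print(nan != nan)       # True

# So an equality built directly on float == is not an
# equivalence relation; NaN needs a special-cased test.
print(math.isnan(nan))  # True
```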

                  1. 6

                    Not all mathematical entities obey transitive equality, though, e.g. infinities. The behavior of NaNs is useful because the end result of a computation can reflect that something within it overflowed or produced an illegal result; you don’t have to test every individual operation.

                    If Posits support neither infinities nor NaNs, then operations on them need different error handling — division by zero has to return some kind of out-of-band error code or throw an exception, and then the code that calls it has to handle it. That would be an issue for languages like JavaScript, where division by zero or sqrt(-1) doesn’t throw an exception but rather returns an infinity or NaN.
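
                    The propagation point is easy to see in Python, using inf - inf as the illegal intermediate step:

```python
import math

x = math.inf - math.inf  # illegal operation: the result is NaN
y = (x + 1.0) * 2.0      # NaN propagates through later arithmetic

# A single check at the end catches the problem without
# testing every individual operation.
print(math.isnan(y))     # True
```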

                    1. 4

                      In IEEE floats, there are a bevy of non-real values: -0, +inf, -inf, and the NaNs. Posit has a single unique error value called NaR: Not a Real. This is returned for division by zero.

                      In IEEE float, positive underflow goes to 0 and negative underflow goes to -0. So 0 ends up representing both true zero and underflowed positive reals. -0 represents underflowed negative reals, in some sense, but it’s messier than that. This design is also not symmetric around zero. -0 is neither an integer nor a real, and in practice, every numeric operation needs to make arbitrary choices about how to deal with it, and there is no mathematical model to guide these choices, so different programming languages make different choices. What’s sign(-0)? It could be 0, -0 or -1 depending on what mathematical identities you want to preserve, or on an accident of how the library code is written.

                      In Posit, 0 denotes true zero, which is easy to understand mathematically. Positive numbers underflow to the smallest positive number. Negative numbers underflow to the smallest negative number. This design is simple, symmetric around zero, and doesn’t introduce a non-real number with unclear semantics.
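
                      A short Python sketch of the IEEE underflow behaviour described above (the magnitudes are arbitrary, just small enough to underflow):

```python
import math

# Positive underflow gives 0.0; negative underflow gives -0.0.
pos_underflow = 1e-200 * 1e-200
neg_underflow = -1e-200 * 1e-200

print(neg_underflow == 0.0)               # True: -0.0 compares equal to 0.0
print(math.copysign(1.0, pos_underflow))  # 1.0
print(math.copysign(1.0, neg_underflow))  # -1.0: but the sign survives
```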

                      1. 1

                        The difference between 0 and -0 is important for many numeric applications; +/-Infinity is important; distinguishing NaN from Infinity is important. 1/0 and 0/0 are mathematically distinct, those are real values, and -1/0 is again a mathematically distinct value. Saying they’re “non-real” is nonsense and does not match the most basic of mathematics.

                        What is important for a person’s normal day-to-day needs does not match what is important when you’re actually performing the kind of numerical analysis needed in scientific computation. So please don’t say that these are non-real, and please don’t claim that they aren’t necessary just because you personally don’t use them.

                        1. 5

                          You raise an important point. The precise semantics of IEEE floats are important to a lot of numeric code, because people code to the standard. Posits are not backward compatible with IEEE floats, and this is a serious issue that will hinder adoption. Posits break some of my code as well.

                          But there’s nothing sacred about IEEE floats. They aren’t the best possible design for floating point numbers. The Posit proposal comes out of a community that has discussed a variety of alternative designs: they write papers and hold conferences. These people work in high performance computing and are numerical analysts. There are papers on the new idioms that must be used to write numeric code using Posits, explaining the benefits.

                          Please do not claim that 1/0 and 0/0 are real numbers. This is not mathematically correct. These entities are not members of the set ℝ of real numbers. In mathematics, there are a variety of extensions to the reals that add additional values (such as infinities), but these additional values are not real numbers. For example,

                          1. 2

                            I think that some of the things you describe as “coding to the standard” are the post-factum view of the IEEE standard being specifically designed to handle cases that were difficult to handle in other schemes. (Please note that I don’t want to claim anything bad about Posits by this (I haven’t studied the standard in enough detail) – I just want to point out that some things that we are occasionally annoyed by in IEEE 754 really do have practical use).

                            IIRC signed zero, for example, didn’t arise because the representation is nasty, nor as an unpleasant compromise to make other, more important things possible at the expense of an ambiguous representation. It was in fact a deliberate choice, which ensured that, for complex analytic functions, expressions that represent the same function inside their domain will usually have the same value on the boundary as well. This is a pretty useful property, as lots of engineering problems derive from, or are defined in terms of, boundary conditions. Many systems that don’t have signed zero require that you e.g. be careful to use either sqrt(z^2 - 1) or sqrt(z+1)*sqrt(z-1) depending on boundary conditions, even though they both mean the same thing.
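
                            Python’s cmath shows this concretely: the sign of a zero imaginary part selects the side of the sqrt branch cut, which runs along the negative real axis:

```python
import cmath

# Approaching -1 from above vs. below the branch cut.
upper = cmath.sqrt(complex(-1.0, 0.0))
lower = cmath.sqrt(complex(-1.0, -0.0))

print(upper)  # 1j
print(lower)  # -1j
```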

                            The same thing goes for signed infinities. These aren’t error cases, they’re legit values that propagate through calculations. Pre-IEEE754 real number representation systems that didn’t allow infinities usually did so either at the expense of ambiguity in e.g. inverse trigonometric functions, or by quietly introducing special non-propagating extensions to handle these cases. (I don’t recall the specifics of any representation system that used projective extension. Somehow I doubt that would’ve floated too many engineers’ boats – I’d have a hard time coping with the existential dread induced by a discontinuous exponential function, for example. Even if it turns out to be numerically irrelevant in most cases, I’d have to either be careful about mixing numerical results with analytically-derived conclusions, or carefully rewrite all the math involved in analysing transient systems to account for discontinuities, and I’m really not looking forward to that).

                            Maybe Posits avoids these problems – like I said, I haven’t studied the standard in detail, and I’m not trying to bash on it. Just wanted to point out that lots of things which now look like standard warts were actually deliberate decisions made to handle real-life situations, not compromises introduced to allow for better handling of other things.

                            1. 1

                              IIRC signed zero, for example, didn’t arise because the representation is nasty, nor as an unpleasant compromise to make other, more important things possible at the expense of an ambiguous representation. It was in fact a deliberate choice, which ensured that, for complex analytic functions, expressions that represent the same function inside their domain will usually have the same value on the boundary as well.

                              Yes; see Kahan’s “Much Ado About Nothing’s Sign Bit”.

                            2. 1

                              I was using “real” in the sense that these are mathematical concepts that exist in the reality of math. Much like 0, they were not acknowledged for most of history, and as such ℝ does not include them. Claiming that they are not in ℝ does not mean that they magically cease to exist, any more than 0 does not exist, or irrational numbers do not exist.

                              1/0 is well defined and has a sound mathematical definition; such values may not be in ℝ, but that doesn’t make them cease to exist, and is simply an artifact of the age of ℝ. That there is a group of people doing arithmetic who don’t need a floating point format reflecting the possible non-finite values does not negate that those values exist, nor does it negate their value to other users.

                              Posits do not offer any particularly meaningful improvement in what can be represented; they demonstrably reduce what can be represented, and the circuitry to implement them uses more area and is slower.

                              1. 1

                                Posits are meant to represent approximations of members of ℝ, the Real numbers. Therefore, it doesn’t make sense to include representations for things that aren’t members of ℝ.

                                1. 2

                                  In that case posits aren’t a replacement for IEEE floating point, and should stop claiming that they are. The values being disregarded by posits because they aren’t in ℝ are useful; that is why they are there. In the early specification process every feature was under a lot of pressure for performance given the technology of the era. Even something we take for granted - gradual underflow - was on the chopping block until Intel shipped the x87 to show that other manufacturers were wrong to say that what was being specified was “impossible” to implement efficiently (it’s also why fp80 has a decidedly more wonky definition than fp32, fp64, etc).

                                  So the perf gains that posits get from eliding these features were known, and very heavily hashed out, in the 80s, when there was much more pressure against additional logic than there is today - and yet even in that environment they decided to keep those features.

                                  So it isn’t surprising that eliding those values makes posits “simpler”, but you could also make them simpler and faster by having a fixed exponent - it would greatly reduce usefulness, of course, but I give this absurd extreme to demonstrate that everything is trade-offs. Posits dropped values that are useful for real-world purposes because posit folk don’t use them, and that’s fine, but you don’t get to claim you have a replacement when you are fundamentally not solving the same problem.

                                  Also, as one final thing: posits drop those values to gain some performance back, yet despite that, hardware implementations are slower and use more area. So to me posits remain a lose/lose proposition.

                      2. 1

                        also, posits always use bankers rounding

                        1. 5

                          yup, but there are real reasons you’d want different rounding modes, which is why IEEE 754 specifies them.

                          1. 2

                            Okay, but the need to control rounding modes is pretty rare, and support is hit and miss. Hardware doesn’t provide a consistent way to control the rounding modes, if they are supported at all, and most programming languages don’t provide much, if any support. The current Posit standard focuses on just the core stuff that everybody needs, and that’s good. Features like rounding modes that not everybody is going to implement should be optional extensions, not mandatory requirements, and should be added later, if Posits take off.

                            I’ve personally not had a use for rounding modes, other than in converting floats to ints. The only rationale I’ve seen for rounding control on arithmetic operations is as a way for numerical analysts to debug precision problems in numeric algorithms by using interval arithmetic. The Posit community has a separate proposal for doing this kind of interval arithmetic using pairs of Posits (“valids”) that is claimed to have better numeric properties than using IEEE rounding modes, but I haven’t read more about that than the summary.

                            1. 1

                              Okay, but the need to control rounding modes is pretty rare, and support is hit and miss.

                              The need to care about numerical accuracy for floating point numbers in general is pretty rare. A lot of uses of floating point numbers are very happy with a hand-wave probably-fine approximation. For example, a lot of graphics applications have a threshold of human perception that is far coarser than any floating point rounding error (though they can have some exciting corner cases where you discover that your geometry abruptly becomes very coarsely quantised when you render an object far from the origin).

                              For applications that do care, support is generally very good. Fortran has had fine-grained control over rounding modes for decades and it is supported by all Fortran compilers that I’m aware of. Most of the code that cares about this kind of thing is written in Fortran.

                              C99 also introduced fine-grained control over rounding modes into C. As far as I know, clang is the only mainstream C compiler that doesn’t properly support them (or, didn’t, 10 years ago - I think the Flang work has added the required support to LLVM and the front-end parts are fairly small in comparison). GCC, Visual Studio, XLC, and ICC all support them.

                              1. 1

                                In that case there’s no difference in rounding modes: “bankers rounding” is what I would call “round to even”, though I think the formal name is “round to nearest, ties to even” or some such.
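
                                Python’s built-in round() happens to use this rule, so it makes a quick demo of round-half-to-even:

```python
# Halfway cases go to the even neighbour rather than always up.
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2
print(round(3.5))  # 4

# Non-halfway cases round to the nearest integer as usual.
print(round(2.6))  # 3
```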

                        2. 8

                          Posits is one of those “obviously better” things that appear from time to time in techie circles, a bit like tau instead of pi.

                          I found the following previous submissions:

                          Edit: unums seem to be a superset of posits, here’s a submission about them: Unums and the Quest for Reliable Arithmetic. And Unums: A Radical Approach to Computation with Real Numbers (Gustafson’s paper).

                          1. 7

                            Legit though tau is better

                            1. 4

                              I await your 1.5 hour YouTube video explaining it ;)

                              1. 7

                                No need for a youtube video! A circle is uniquely defined by its center point and radius, but π is the ratio of the circumference to the diameter. This makes π exactly half the “elegant” value, so a lot of equations add a factor-of-two “correction” that goes away if you use τ instead:

                                • A 1/4 turn of a circle is π/2 radians (instead of τ/4 radians)
                                • sin and cos are periodic around 2π (instead of τ)
                                • Most double integrations are of the form 1/2 Cx²: displacement is 1/2 at², spring energy is 1/2 kx², kinetic energy is 1/2 mv², etc. The one exception is area of a circle, which is 1 π r² (instead of 1/2 τr²).

                                It’s not the end of the world that we use π instead; it’s just inelegant and makes things harder for a beginner to learn.
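
                                The first two bullets are easy to sanity-check in Python, which has shipped math.tau since 3.6 (the sample angle below is arbitrary):

```python
import math

# tau is exactly one full turn, i.e. 2*pi.
assert math.tau == 2 * math.pi

# A quarter turn is tau/4 radians, where sin peaks at 1.
print(math.isclose(math.sin(math.tau / 4), 1.0))  # True

# sin is periodic over one full turn.
x = 1.234
print(math.isclose(math.sin(x), math.sin(x + math.tau)))  # True
```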

                                1. 7

                                  I was a member of the cult of τ back in high school and in my first years of engineering school, mostly because I was on really bad terms with my math teacher :-D. So at one point I τ-ized some of the courses I took.

                                  I can’t say I recommend it, at least not for EE. It’s not bad, but it’s not better, either. I was really in awe about it before, because it made the basic formulas “more elegant” and “mathematically beautiful”. But once I did enough math to run into practical issues, it just wore off, I found the effect negligible at best, and in some cases it just made some easy things easier at the expense of making hard things a little harder.

                                  First off, I found you wind up playing a lot of correction factor whack-a-mole. For example, working with τ instead of π makes it easier to work with sine signals (and Fourier series of periodical signals), because they’re periodic over τ. But it makes it harder to work with rectified sine signals because those are now periodic over τ/2.

                                  Most of the time, I found that working in terms of τ just moved the correction factors from pages 1-2 of my notes from each lecture to page 3 and onwards. (Note that I’m also using “rectified” rather loosely here – lots of quantities wind up effectively looking like rectified versions of other quantities, not just voltage fed to a rectifier).

                                  Then there were a bunch of cases where the change was basically inconsequential. For example, lots of the integrals that were brought up in various τ-related topics on the nerd forums I frequented were expressions written in terms of 2π, which seemed annoying to work with. Then I ran into the same integrals in various EE classes, except everyone was just writing (and using them) in terms of ω, as in 2πf. Whether you define it as 2πf or τf has pretty much no effect. You derive lots of stuff in terms of ω anyway, but ultimately, you really want to end up with expressions in terms of f, because that’s what you can actually measure IRL.

                                  In most of these cases, working in terms of τ just means you end up with an expression that starts with 1/τ instead of 1/2π (or τ instead of 2π), which hardly makes much of a difference. The expressions you end up with are all in the frequency domain, so their physical interpretation is in terms of “how fast is it spinning on the circle?”, not lengths or ratios of lengths, so τ and π work equally well.

                                  And then there were a whole lot of cases that you could simplify much more efficiently by applying some slightly cleverer math. For example, working in terms of τ does simplify a bunch of nasty integrals relevant to transient or oscillating regimes, as in, you don’t have to carry an easily-lost constant term in front of the integral. What really simplifies it though is working in s-domain via the Laplace transform, which you can do without caring if it’s τ or π because you’re working in terms of ω anyway, and which allows you to skip the whole nasty integral part entirely.

                                  Finally – I didn’t know it then, but I did think about it later – there are various things that work worse in terms of τ, like some of the discrete cosine transforms, which have nice expressions in terms of π, not 2π.

                                  Basically I wasted a couple of weeks of a summer vacation 15+ years ago to find out that, overall, it sucks just as much with both, it’s just that the parts that suck with π are different from the ones that suck with τ. I think that’s when I realised I should’ve really become a musician or, like, drop it all and go someplace nice and raise goats or whatever :(.

                                  (FWIW a lot of math I learned in uni was basically “how to avoid high school math”. I knew from my Physics textbook that calculus is really important for studying transient regimes, so by the time I finished high school I could do pretty hard integrals in my head. Fast forward to my second year of EE and ten minutes into the introductory lecture my Circuits Theory prof goes like okay, don’t worry, I know the math classes you guys took don’t cover the Laplace transform – I’m going to teach you about it because *gestures at a moderately difficult integral* I haven’t the faintest clue how to solve this, I haven’t done one of these since I was in high school and that was like forty years ago for me).

                                  1. 1

                                    Eh, I’m not convinced by the special pleading for the tau version of Euler’s identity:

                                    https://tauday.com/tau-manifesto#sec-euler_s_identity

                                    I prefer the original.

                                    (Apparently Euler was the one who popularized pi the symbol, and he vacillated between it meaning pi or tau.)

                                    I like pi because it hearkens back to the primeval discovery that if you have a round object, and measure its diameter with a piece of string (more easily done than its radius), then its circumference, the lengths don’t divide easily. Why is that?

                                    1. 4

                                      TBH I don’t understand why people find e^iπ + 1 = 0 so elegant. Why +1? You’re sneaking negative numbers in there to make the equation nice.

                                      I like pi because it hearkens back to the primeval discovery that if you have a round object, and measure its diameter with a piece of string (more easily done than its radius), then its circumference, the lengths don’t divide easily. Why is that?

                                      Even easier than measuring the diameter of a circle with a string is measuring the diagonal of a square, which gives you the even more primeval (and much easier to prove!) discovery that the diagonal doesn’t divide the sides of the square. It’s a lot easier to prove sqrt(2) is irrational than pi is irrational!

                                      1. 1

                                        Since the invention of the potter’s wheel, accessing an object that’s close to perfectly circular has been easier than one that’s perfectly square. According to El Wik, the potter’s wheel is from ~4000 BCE, so a curious kid in ancient Babylon could wonder about the ratio of the circumference to the diameter long before a more privileged one in ancient Greece learned how to construct a square using straight-edge and divider and measured the diagonal (and got murdered by the Pythagoreans for exposing the secret)[1]

                                        In day-to-day use, diameters are almost universally used: pipes, firearm calibers, screws… I have a tape measure with a scale that’s multiplied by pi so that you can get the diameter by wrapping the tape around an object.

                                        Sure, all of this can be handled by tau too, but outside the classroom, a radius is much more abstract than a diameter.

                                        [1] actual murder probably apocryphal

                                      2. 2

                                        https://tauday.com/tau-manifesto#sec-euler_s_identity

                                        I was already convinced that tau is better for constructing radian arguments to trig functions (tau is a full turn). But the Euler identity is so much more elegant using tau. The pi version never made intuitive sense, but the tau version does make intuitive sense to me. Thanks for pointing it out.

                                        1. 2

                                          I find the Euler identity most elegant in its full form.

                                           e^(ix) = cos(x) + i·sin(x)
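
                                          For anyone who wants to poke at it, the full identity is easy to check numerically in Python (sample points chosen arbitrarily):

```python
import cmath
import math

# e^(ix) = cos(x) + i*sin(x), checked at a few points.
for x in (0.0, 1.0, math.pi, 2.5, -0.75):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert cmath.isclose(lhs, rhs, abs_tol=1e-12)

print("identity holds at all sample points")
```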
                                  2. 4

                                    I prefer the Indiana definition :D

                                    1. 9

                                      I wrote about the history of that redefinition! It’s wild. https://buttondown.email/hillelwayne/archive/that-time-indiana-almost-made-p-32/

                                      1. 1

                                        I do love that they didn’t even get a correctly rounded value :D

                              1. 5

                                This post is disappointing clickbait. Reviewing other submissions from this domain shows much richer content. Puzzling!

                                https://lobste.rs/domain/lemire.me

                                1. 2

                                  Lemire is known for some really advanced content on optimisations, data-structure magic, hashing, etc. I am also puzzled by this post.

                                1. 3

                                  It’s all about sales volume. It costs around $1 billion to tape out a 7nm chip. Apple sells 30 million Macs a year, so assuming one new chip a year, that’s $33 per Mac. Microsoft sells about 1 million Surfaces a year, so if they were to do their own chip, it would cost over $900 per unit just for the tape out cost.
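
                                  In rough numbers (using the estimates above; the $1B cost and unit volumes are ballpark figures, not audited data):

```python
# Amortising a one-off tape-out cost over yearly unit volume.
tapeout_cost = 1_000_000_000  # claimed ~$1B for a 7nm tape-out

macs_per_year = 30_000_000
surfaces_per_year = 1_000_000

print(f"${tapeout_cost / macs_per_year:,.0f} per Mac")          # $33 per Mac
print(f"${tapeout_cost / surfaces_per_year:,.0f} per Surface")  # $1,000 per Surface
```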

                                  1. 3

                                    I seem to remember Apple’s computer chips sharing a lot of technology with their phone chips, which would spread out the cost even more.

                                    1. 3

                                      Yes, and the newer iPads use the M1 as well. They’ve done a phenomenal job achieving massive economies of scale, which is hard for any one competitor to beat.

                                    2. 1

                                      ISTM that while this is true, it is only part of the picture.

                                      Yes, Apple’s SoCs are in-house and proprietary.

                                      But that does not mean that everyone’s have to be. There is no particular reason that I can see why MS has to do the same thing and design its own SoC in-house, but this is an unstated assumption of your argument.

                                      MS is free to buy Arm SoCs from whoever it wants, and port to them. MS is a rich company, and it also has deep specialisation in OS design. One of the factors limiting portability of Arm OSes is that there’s no standard firmware.

                                      MS has Arm firmware. It has sold Arm-based devices running both Windows (the Surface RT) and Android.

                                      I wrote about the Surface RT nearly a decade ago: https://www.theregister.com/2013/11/14/microsoft_surface_rt_stockpile/

                                      Now there is also the Surface Duo: https://www.microsoft.com/en-us/surface/devices/surface-duo#overview

                                      MS has Arm chops. It has or had the skills, the expertise, the design nous. (If that’s gone it’s the company’s own fault.)

                                      If there are no other Arm SoCs that are truly competitive with Apple’s, then that is not MS’s fault. It is the wider Arm industry’s fault.

                                      But MS can write its own firmware for various Arm SoCs and port its Arm Windows to various hardware from various vendors and cherry-pick the best open-market Arm hardware.

                                      You imply it must do it in-house. It doesn’t.

                                      1. 1

                                        MS is free to buy Arm SoCs from whoever it wants, and port to them. MS is a rich company, and it also has deep specialisation in OS design. One of the factors limiting portability of Arm OSes is that there’s no standard firmware.

                                        The problem is that no one else is producing M1-competitive cores. The Arm SoCs that you can buy are either:

                                        • Mobile phone chips from companies like Qualcomm or Samsung. These aim for a very low power draw.
                                        • Server chips from companies like Ampere. These focus on performance and power efficiency in SoCs on the order of 100W.

                                        No one is producing laptop-grade chips. A laptop SoC has a lot in common with a mobile phone chip these days (which is why Apple can share costs between their phones, tablets, and laptops) but it’s a separate SoC and so incurs a load of NRE that you need to then amortise over a large number of units. If someone starts making a successful high-end Android tablet, then Windows on Arm laptops could benefit from that, but until then there’s the problem that few people buy Arm Windows laptops because they run phone chips and no one is building laptop chips for them because the market is too small.

                                        This might change if it becomes possible to install Windows on Apple Arm laptops and people do in large quantities: that would show a company like Qualcomm that there’s a big market for a high-performance Arm Windows laptop.

                                        1. 1

                                          Well, yes, I agree entirely.

                                          That is the problem, and the responsibility, of Arm licensees and vendors, though.

                                          AFAICT, nobody has taken Arm seriously for ~30 years now, considering it important only for embedded stuff and battery-powered devices without active cooling. (As a former Acorn Archimedes owner, this pains me.)

                                          So: nobody has invested in performance-optimised Arm SoCs.

                                          However, the message to which I replied blamed MS for this and seemed to expect MS to fix it by paying for the development of high-performance Arm SoCs for its Surface devices.

                                          That isn’t MS’s responsibility, and I think it would be actively harmful to the industry if it did so.

                                    1. 3

                                      This is a bit of a throwback to ’90s/’00s Linux advocacy (it fits right next to the ‘Windows is like a microwave dinner, Linux like a deliciously cooked meal’ line). While mildly funny, the world is probably better served by making the Linux desktop really good than by cheap shots at the competition.

                                      1. 2

                                        Linux-tinged hatred of Microsoft is a meme in the original sense - I’m surprised to see nerds a generation younger than me essentially repeating the same tropes people I knew did in the late 90s. “Halloween Documents”, EEE, you name it. Never mind that Microsoft (though it may still be evil) has reinvented itself almost entirely since that time, and to my inexperienced eyes it doesn’t look like Linux on the desktop has made any appreciable progress…[1]

                                        Lunduke is around 30 and seems to be making a good living trading in these tropes so there’s obviously a market for it among his audience.

                                        [1] I use Linux daily, but in an 80x24 terminal like God intended.

                                      1. 4

                                        People that run their own blog love talking about how they run their own blog 😉

                                        1. 4

                                          This has been the theme of blogging since blogging began.

                                          1. 2

                                            I wonder if anyone’s written a blog about the history of blogging – especially with the technical details

                                            1. 4

                                              This podcast series is called “An Oral History of the Blogosphere”, it might cover some stuff

                                              https://podcasts.apple.com/us/podcast/oral-history-of-the-blogosphere-episode-5-brad-delong/id1332780055?i=1000567420848

                                              Disclaimer: it’s hosted by Lawyers, Guns, and Money, a US political blog. So its views may not be universal.

                                        1. 14

                                          So, how do we teach Haskell to kids or help adults master its power faster and more efficiently?

                                          We need to start from Type Theory, make people understand and fall in love with Types, their structure, their daily lives, their relations with each other; a student has to become fluent in writing and understanding type-level functions before she even thinks about writing “real” functions. It is actually a pretty well-traveled road as well — but only if you have a reasonably strong math background.

                                          If your approach to teaching kids starts with “learn category theory”, it’s a bad approach.

                                          1. 9

                                            This is such a common pitfall of novice teachers.

                                            They think teaching should start from the foundations of the field, mirroring the way they, as experts, perceive the relationships between concepts.

                                              1. 4

                                                I, for one, am advocating for higher kinded types in Scratch.

                                              1. 11

                                                This was previously submitted 2 years ago: https://lobste.rs/s/crfy2d/timsort_fastest_sorting_algorithm_you_ve (17 comments).

                                                The URLs are different so the dupe detector didn’t fire.

                                                1. 9

                                                  Prompted by the current top story: The dangers of Microsoft Pluton - would this attack be mitigated by something like Pluton?

                                                  1. 10

                                                    Yes and no. The TPM measures the UEFI code and adds it to a PCR (basically a running hash of everything that’s been fed into it). This means that it would detect a modification of the UEFI code and, because the PCR value for the firmware doesn’t match, wouldn’t release the key for decrypting a BitLocker / LUKS-encrypted volume or any WebAuthn tokens or any other credentials stored in the TPM. There are two possible failure modes:

                                                    First, if the bootkit is installed before the first SecureBoot boot, then the keys will be released only if you boot with the compromised firmware and you’ll need to do the recovery thing to boot with the non-compromised version. If the malware is installed early on in the supply chain before you do the OS install, then Pluton / TPM is no help.

                                                    Second, the symptom that the user sees is likely to be incomprehensible. They will see an error saying BitLocker needs them to enter their recovery key because the TPM has detected a change to the UEFI firmware. For most users, this will read as ‘enter your recovery key because wurble mumble wibble fish banana’ and so they will either enter their recovery key (if they kept it somewhere safe) and grant the malware access to everything or reinstall their OS (if they lost their recovery key) and grant the malware access to everything.

                                                    So, it would be more accurate to say that something like Pluton can detect such malware and prevent it from compromising a user’s data, but it is easy for the user to circumvent that protection.
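Mechanically, the “running hash” works like the TPM extend operation: a PCR can only be extended, never set directly, so any change anywhere early in the measurement chain propagates to the final value. A minimal Python sketch (SHA-256 stands in for the PCR bank’s hash algorithm; the measured boot stages are purely illustrative):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new PCR value = H(old PCR || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# PCRs start zeroed at reset; each boot stage is measured in order.
clean = bytes(32)
for stage in [b"uefi firmware", b"bootloader", b"kernel"]:
    clean = pcr_extend(clean, stage)

# A bootkit that modifies the firmware changes the first measurement...
tampered = bytes(32)
for stage in [b"bootkit-modified firmware", b"bootloader", b"kernel"]:
    tampered = pcr_extend(tampered, stage)

# ...so the final PCR value differs, and sealed keys won't be released.
assert clean != tampered
```

Because the sealed BitLocker key is bound to the expected final PCR value, the TPM simply refuses to unseal it when the chain diverges - which is exactly why the user is then prompted for the recovery key.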

                                                    1. 4

                                                      but it is easy for the user to circumvent that protection

                                                      I would even go so far as to say the user is induced to circumvent that protection.

                                                    2. 14

                                                      Pluton is for securing company computers against employees, and streaming video against computer “owners”, not for securing your machine against nation-state and organised crime actors.

                                                      1. 8

                                                        I’m confused why this was downvoted; it’s correct and answers the question. I think someone may have thought this was unrelated political posturing? If so, please read it again. It is a direct answer to the question it’s responding to.

                                                        1. 4

                                                          Not the flagger, but I think a direct answer could refer to the technical differences in protections asserted by Pluton vs these UEFI attacks. Microsoft themselves refer to nation-state actors and cybercriminals in the copy around Pluton, and I remain unclear whether there’s an overlap here.

                                                          1. 9

                                                            That’s quite fair. On my own background knowledge, Pluton does not establish a complete chain of trust for the firmware in the way that, e.g., ChromeOS does, and therefore does not prevent bootkits. At best it provides a fallible approach to detecting bootkits, but a sophisticated attacker would be able to circumvent this detection in common circumstances.

                                                            Empty rhetoric about all the threats that are out there is quite common in the security world, and Microsoft’s rhetoric about Pluton is in that category. I could get into why this makes sense for them as marketing strategy, but that would perhaps verge on being too much politics.

                                                            1. 1

                                                              IIRC, currently Pluton firmware just implements a TPM, but they promised to add lots more things in the near future. It’s a bit more than just rhetoric since they have actually built the hardware side of things?

                                                              1. 1

                                                                Sorry, just now seeing this! That’s quite fair. I’m not familiar with Microsoft’s future plans, so I’m not able to speak to that.

                                                        2. 2

                                                          How does a UEFI bootkit circumvent the protections offered by Pluton/TPM?

                                                        3. 4

                                                          Yes, at least partially. The modification of BIOS code would be detected and access to secrets like BitLocker or LUKS keys could be denied, if the system was set up correctly. Of course, there’s then the question of what the user would do in that case: they might just enter the backup key and re-seal the secret, which wouldn’t accomplish anything. The more proper course would be to check with the BIOS vendor whether the measurement the TPM is getting matches any of their released versions, and if not, promptly re-flash the BIOS. This doesn’t need Pluton; any old TPM would do, though Pluton offers more protection in the case of physical access.

                                                          1. 1

                                                            Do BIOS flash utilities work in this scenario? It seems like the utility has to be booted with UEFI so it’s too late to trust it…? Though I guess it has to work when the device is bricked by a bad BIOS, so there’s some even lower-level way to boot the utility?

                                                            1. 1

                                                              You can of course try booting it from a USB stick and try re-flashing it, and see if that returns it to a good state. If it doesn’t, you could probably re-flash the SPI flash itself with an inexpensive programmer, but that requires some knowledge and definitely isn’t doable by an end user.

                                                              1. 1

                                                                What I’m wondering is, couldn’t the bad BIOS just hook the flash utility the same way it does the OS? What is the accepted secure way for an end user to completely factory-restore the machine? Because that seems like the rational and intended response to the Bitlocker TPM change message.

                                                                1. 2

                                                                  If you have reason to believe the device is compromised at that low a level, don’t keep using the device. Yes, nobody who’s not a big organization can afford to just throw laptops away, but it’s also quite impractical - especially on closed hardware - to be sure you re-flashed everything that needs to be re-flashed. You should be trying really hard to not be in this scenario in the first place.

                                                        1. 4

                                                          Meta: @puschx has already updated the title as per mod log so suggest feature is no longer applicable. Please append “(2020)” to the title.

                                                          1. 6

                                                            Compares their solution to “Unix wc

                                                            Links to GNU coreutils wc

                                                            🤔

                                                            1. 11

                                                              Seeing as the comments seem to have devolved into routine systemd-bashing, I’d just like to say that I enjoyed this submission a lot. I learned about Unsplash’s API, a bit about systemd timers, and was reminded that Linux/FOSS allows users to (relatively) easily create workarounds for lacking functionality and share them.

                                                              1. 6

                                                                This is a 500 error page, so instead of the image saying something like: “Drawing of a woman screwing in a lightbulb”, it makes more sense to use “500 internal server error” as the alt text.

                                                                It’s worth mentioning here that a browser used with a screenreader will indicate unambiguously that the user has landed on an error page, regardless of the page content. I usually don’t bother reading the content, because … what’s the point?

                                                                “Drawing of a woman screwing in a lightbulb” is excellent wordplay. The parse is ambiguous. The first thing I thought was, “How did they get a person inside a lightbulb?” Or maybe it’s just the drawing inside the bulb? “Drawing of a woman replacing a lightbulb” would be less loaded. For even less ambiguity: “This drawing depicts a woman replacing a lightbulb”.

                                                                1. 4

                                                                  I’d argue that exposing the error code at all is an accessibility problem. It requires the user to know what 403 (or what have you) means, and there’s no reason for a non-technical user to know that. For any diagnostic purpose, the user can examine the actual HTTP response via other tools.

                                                                  1. 10

                                                                    I’d argue that exposing the error code at all is an accessibility problem. It requires the user to know what 403 (or what have you) means,

                                                                    This mindset, taken to its extreme, leads to error messages like “something went wrong”. It’s at least partially responsible for things like the hijacking of NXDOMAIN by ISPs. NXDOMAIN hijacking serves two purposes: it prevents the “unwashed masses” from having to deal with a failure case, and it funnels them to ISP-controlled ad portals.

                                                                    I’m a strong believer in the old maxim that knowledge is power. Depriving a user of knowledge deprives them of agency.

                                                                  2. 2

                                                                    “Screwing in” is ambiguous too as some light bulb fixtures are bayonet-type.

                                                                    1. 3

                                                                      That doesn’t make it ambiguous.

                                                                    2. 2

                                                                      Yeah, that didn’t make sense to me either – even without that, the error is already described on the left, so why re-describe it in the alt text? If anything, the image is decorative so the “responsible” take should be to use alt="" to have screen readers ignore it? (https://www.w3.org/WAI/tutorials/images/decorative/)

                                                                      1. 1

                                                                        It’s worth mentioning here that a browser used with a screenreader will indicate unambiguously that the user has landed on an error page, regardless of the page content.

                                                                        Is this based on the assumption that the screen reader will read the title of every newly loaded page, or something else?

                                                                        1. 2

                                                                          Is this based on the assumption that the screen reader will read the title of every newly loaded page, or something else?

                                                                          Presumably I’ve just gotten lucky and always seen error pages with stock titles. So I have to walk back that assertion.

                                                                          1. 1

                                                                            I would imagine it would be based on the HTTP response code.

                                                                        1. 4

                                                                          This is the most vague and ‘vibe-based’ of them all, and is the least important. But I like it when software has a name that has ‘kawaii’ in the acronym or is named after a fictional character or has a name that’s just plain fun. Obsidian also falls in this category; even though I can’t think of how the name relates to note-taking, it’s just a nice name for software.

                                                                          If I made high quality public tools I would absolutely give them the most embarrassing names possible. Vicious Pumpkin. Daktaklakpak. “Tau is the correct circle constant”. ZZZ, but the first Z must be the American Z and the other two must be the British Zed. This pronunciation will be in the official standard.

                                                                          1. 2

                                                                            I really think that I like K-9 Mail in part because of its name and logo. 😊 Will be sad to see it switch to Thunderbird.

                                                                            1. 2

                                                                              I mean, when it comes to “embarrassing” you can’t really outdo Coq.

                                                                              1. 4

                                                                                Coq is proudly Gallic.

                                                                                Likewise, in France the e-mail program Pine is proudly phallic.

                                                                                There’s a reason the French refer to the byte as an ‘octet’, because ‘bite’ is basically ‘cock’…

                                                                                1. 1

                                                                                  That’s not embarrassing, that’s shameful.

                                                                              1. 2

                                                                                Why just have one monorepo when you could have several monorepi?

                                                                                1. 3

                                                                                  Is this something like the famous multiple single points of failure?

                                                                                1. 3

                                                                                  I feel like these APIs really should say “is ASCII numeric”. Hell, “is ASCII alphabetic” while we’re at it. These APIs tend to get used when dealing with certain kinds of machine readable files, and when they get used in normal text there’s always something semantically off.

                                                                                   I have been bitten by numeric detection considering Japanese characters to be numeric.

                                                                                  Meanwhile I have never seen Japanese software say “actually you can use Japanese numerals for this import file number”. It’s ASCII (for numbers of course).
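Python’s built-in predicates illustrate the mismatch: they follow Unicode character categories, so characters well outside ASCII still count as “digits” or “numeric”. A strict ASCII check is trivial by comparison (a sketch; the helper name is mine):

```python
def is_ascii_digit(ch: str) -> bool:
    # Strict check: only '0' (U+0030) through '9' (U+0039) qualify.
    return len(ch) == 1 and "0" <= ch <= "9"

print("٤".isdigit())        # True  - Arabic-Indic digit four is Unicode category Nd
print("四".isnumeric())     # True  - the kanji numeral four has a numeric value
print("四".isdigit())       # False - but it's not a decimal digit
print(is_ascii_digit("٤"))  # False - the strict ASCII check rejects it
print(is_ascii_digit("7"))  # True
```

So if a parser for a machine-readable format uses `str.isdigit()` as a gate before `int()`, it will happily accept input that the surrounding code never expected.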

                                                                                  1. 1

                                                                                    You can always rely on the well-known property of the Western Arabic numerals to never have a code point above U+0039.