1. 6

    I thought it was the perfect length. Rage against overwrought blog posts!

    1. 4

      Empirical Software Engineering: I followed a course at my university on this. It was an eye opener. I can publish some of my reviews and summaries sometime. Let me already give you some basic ideas:

      1. Use sensible data. Not just SLOC to measure “productivity,” but also budget estimates, man-hours per month, commit logs, bug reports, as much stuff as you can find.
      2. Use basic research tools. Check whether your data makes sense and filter outliers. Gather groups of similar projects and compare them.
      3. Use known benchmarks to your advantage. What are known failure rates, and how can they be influenced? When is a project even considered a success?
      4. Know about “time compression” and “operational cost tsunamis”: these are phenomena such as an increase in the total cost by “death march” projects, and how operational costs are incurred already during development.
      5. Know about the quality of estimates, and estimates of quality, and how both can improve over time. Estimates of the kind “this is what my boss wants to hear” are harmful. Being honest about total costs allows you to manage expectations: some ideas (let’s build a fault-tolerant, distributed, scalable, and adaptable X) are more expensive than others (use PHP and MySQL to build a simple prototype for X).
      6. Work together with others in the business. Why does another department need X? What is the reason for deadline X? What can we deliver, and how can we decrease cost, so we can develop opportunity X or research Y instead?
      7. Optimize on the portfolio level. Why does my organization have eight different cloud providers? Why does every department build its own tools to do X? What are strategic decisions and what are operational decisions? How can I convince others of doing X instead? What is organizational knowledge? What are the risks involved when merging projects?
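
      A minimal sketch of point 2 above (sanity-checking data and filtering outliers before comparing groups of projects), assuming invented man-hours-per-KLOC numbers; the interquartile-range rule is one common, simple choice:

```python
# Drop outliers with the 1.5 * IQR rule before comparing project groups.
# The data here is invented for illustration.
def iqr_filter(values):
    s = sorted(values)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]   # rough quartiles, fine for a sketch
    spread = q3 - q1
    lo, hi = q1 - 1.5 * spread, q3 + 1.5 * spread
    return [v for v in values if lo <= v <= hi]

hours_per_kloc = [120, 135, 110, 128, 9000]   # 9000 looks like a data-entry error
print(iqr_filter(hours_per_kloc))             # the 9000 entry is dropped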

      Finally, I became convinced that for most organizations software development is a huge liability. But, as a famous theoretical computer scientist said back in the day: we have to overcome this huge barrier, because without software some things are simply impossible to do. So keep making the trade-off: how much are you willing to lose, for the slight chance of high rewards?

      1. 2

        Any books you want to recommend?

        1. 1

          I’m also interested in that, but in online articles instead of books. The very nature of empiricism is to keep looking for more resources or angles, as new work might catch what others miss. Might as well apply it to itself, in terms of what methods/practices to use for empirical investigation. Let’s get meta with it. :)

      1. 2

        Maybe another way to think about this is “Can I not do FP in my language?”. Yes for JavaScript and Scala and Rust - you can write procedural code to your heart’s content in these languages, even if JavaScript gives you the tools to use functional abstractions and Scala and Rust actively encourage them. No for Haskell and Elm - there’s no way to write code that looks imperative in these languages.

        1. 9

          No for Haskell and Elm - there’s no way to write code that looks imperative in these languages.

          main = do
            putStrLn "What is your name?"
            name <- getLine
            putStrLn $ "Hello, " ++ name
          
          1. 5

            No for Haskell and Elm - there’s no way to write code that looks imperative in these languages.

            What do you mean by “looks imperative”? Doing everything inside the IO monad is not much different from writing a program in an imperative language.

            1. 2

              You mean StateT and IO. And then learning how to use both.

            2. 3

              Writing Haskell at my day job, I’ve seen my fair share of Fortran written in it. The language is expressive enough to host any design pathology you throw at it. No language will save you from yourself.

            1. 5

              Although I haven’t coded in it, I find this language interesting in a lot of ways. Drawing on the traits of the languages in the title already gives it a ton of potential if done right. It does have a REPL already. Its key components started at under a thousand lines of code each. It was written in Nim, leverages it for GC + concurrency, and can use either Nim or C for performance reasons. I had been strongly considering writing a LISP or Smalltalk variant in Nim for the past few months for similar reasons. The output would be very different, since I want to leverage various analyses for safety-critical tooling. The common thinking seems to have been Nim as a strong base language for macros, portability, and performance. As in Spry, one also needs something to drop down to when the new language isn’t cutting it for whatever reason.

              It’s going on the Bootstrapping page since it might be used for or inspire something along those lines. :)

              1. 2

                Nick, you like Nim? I didn’t know that!

                1. 5

                  Well, I just looked at its features and example code. What I saw was a language that had improvements in readability, power, and safety over C-style systems languages while also outputting C. That’s definitely nice. It has potential both on its own and as a point in the design space of what to do next.

                  My personal favorites were always PreScheme and Modula-3 for the right balance of power, compile speed, runtime speed, relative ease of machine analysis, and implementation simplicity. You will rarely if ever see that combo, since the simple implementations almost always trade off power, performance, or comparable safety. The complex ones are hard to compile or analyze.

                  Before Rust, Julia, and Nim, my recommendation was to embed a version of Modula-3 in PreScheme, with its macros and malleability to boost power. However, the syntax, default includes, and output would all be C, to just drop it into the ecosystem. Add in Ada- or Cyclone-style safe-by-default properties. Although I never tried to build it, I’ve gotten to see pieces of it form in other languages: Rust learned from Cyclone; Julia was a LISP internally that made including C effortless; Nim had Pythonic style and macros compiling to C.

                  Still, nothing has all the traits of PreScheme and Modula-3, but I keep looking for anything close that might be sub/super-setted into such a language. I had even considered embedding Rust or Nim into Racket, with its macros and IDE, but there may be too much mismatch. I still hope, though, that systems programming can get leaps above C/C++ with some increment better than Rust, D, Ada, or Nim in balancing the prior goals.

                  Hope all that makes sense. Also, such a language would make my Brute Force Assurance concept easier to realize: it combines the verification tooling of many languages for use from one. I think I already wrote about it here, but I’m not sure.

              1. 39

                Perhaps build systems should not rely on URLs pointing to the same thing to do a build? I don’t see Github as being at fault here, it was not designed to provide deterministic build dependencies.

                1. 13

                  Right, GitHub isn’t a dependency management system. Meanwhile, Git provides very few guarantees regarding preserving history in a repository. If you are going to build a dependency management system on top of GitHub, at the very least use commit hashes or tags explicitly to pin the artifacts you’re pulling. It won’t solve the problem of them being deleted, but at least you’ll know that something changed from under you. Also, you really should have a local mirror of artifacts that you control for any serious development.
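
                  One way to sketch that last point (hedged: the file names and hashes are illustrative, and this is a checksum pin rather than a full local mirror): record a digest when you vendor an artifact, and refuse to build if it ever changes.

```python
# Pin a vendored artifact by checksum: you still lose if the upstream
# disappears, but a silent swap gets detected instead of trusted.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_pin(path, pinned_hex):
    actual = sha256_of(path)
    if actual != pinned_hex:
        raise RuntimeError(f"{path} changed: expected {pinned_hex}, got {actual}")
```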

                  1. 6

                    I think the Go build system issue is a secondary concern.

                    This same problem would impact existing git checkouts just as much, no? If a user and a repository disappear, and someone had a working checkout of master:HEAD from said repository, they could “silently” recreate the account and reconstruct the repository with the master branch from their checkout… then do whatever they want with the code moving forward. A user doing a git pull to fetch the latest master may never notice anything changed.

                    This seems like a non-imaginary problem to me.

                    1. 11

                      I sign my git commits with my GPG key; if you trust my GPG key and verify it before using the code you pulled, that would save you from using code from a party you do not trust.

                      I think the trend of tools pulling code directly from GitHub at build time is the problem. Vendor your build dependencies, verify signatures, etc. This specific issue should not be blamed on GitHub alone.

                      1. 3

                        Doesn’t that assume that the GitHub repository owner is also the (only) committer? It’s unlikely that I will be in a position to trust (except blindly) the GPG key of every committer to a reasonably large project.

                        If I successfully path-squat a well-known GitHub URL, I can put the original Git repo there, complete with GPG-signed commits by the original authors, but it only takes a single additional commit (which I could also GPG-sign, of course) by the attacker (me) to introduce a backdoor. Does anyone really check that there are no new committers every time they pull changes?

                        1. 3

                          Tags can be GPG-signed. This proves that all commits before the tag are what the person signed. That way you only need to check the people assigned to signing the tagged releases.

                    2. [Comment removed by author]

                      1. 2

                        Seriously, if only GitHub would get their act together and switch to https, this whole issue wouldn’t have happened!

                        1. 4

                          I must have written this post drunk.

                    1. 1

                      What a wonderful and provocative article.

                      1. 3

                        Praise modular implicits!

                        1. 2

                          Correct me if I’m wrong, but worst-case backtracking is only a problem in regular expression parsing if you don’t first construct a deterministic finite state machine, right?

                          1. 1

                            Presumably they are talking about a specific regex engine that only does backtracking. I believe .NET’s default regex engine is one such example.
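
                            For the curious, the classic shape of the problem looks like this in Python, whose re module is also a backtracking engine (hedged: the pattern is a textbook pathological example, not taken from the article):

```python
import re

# Nested quantifiers force a backtracking engine to try exponentially
# many ways of splitting the a's when the overall match fails.
pathological = re.compile(r"(a+)+$")

print(bool(pathological.match("a" * 20)))   # matching input: fast
# pathological.match("a" * 40 + "b")        # non-matching: ~2^40 attempts, don't run
# A DFA-based engine (RE2, Go's regexp, Rust's regex) handles both in linear time.
```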

                          1. 1

                            See also, although it didn’t get much discussion then.

                            1. 1

                              Oops. I didn’t search hard enough.

                            1. 7

                              Laziness is neat, but just not worth it. It makes debugging harder and makes reasoning about code harder. It was the one change in Python 2->3 that I truly hate. I wish there were an eager-evaluating Haskell. At least in Haskell, due to monadic IO, laziness is at least tolerable and doesn’t leave you with tricky bugs (like trying to consume an iterator twice in Python).
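
                              For reference, the Python iterator gotcha mentioned above looks like this (a toy example, not from the comment):

```python
# In Python 3, map/filter/zip return lazy iterators that are silently
# exhausted after one pass -- no error, just an empty second pass.
nums = map(int, ["1", "2", "3"])
first = list(nums)    # consumes the iterator
second = list(nums)   # the iterator is already spent
print(first, second)  # [1, 2, 3] []
```

Materializing with list(...) up front avoids the bug when more than one pass is needed.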

                              1. 6

                                I had a much longer reply written out but my browser crashed towards the end (get your shit together, Apple) so here’s the abridged version:

                                • Lazy debugging is only harder if your debugging approach is “printfs everywhere”. Haskell does actually allow this, but strongly discourages it to great societal benefit.

                                • Laziness by default forced Haskellers to never have used the strict-sequencing-as-IO hack that strict functional languages mostly fell victim to, again to great societal benefit. The result is code that’s almost always more referentially transparent, leading to vastly easier testing, easier composition, and fewer bugs in the first place.

                                • It’s impossible to appreciate laziness if your primary exposure to it is the piecemeal, inconsistent, and opaque laziness sprinkled in a few places in python3.

                                • You almost never need IO to deal with laziness and its effects. The fact that you are conflating the two suggests that you may have a bit of a wrong idea about how laziness works in practice.

                                • Haskell has the Strict language extension, which turns on strictness by default. It’s very rarely used, because most people experienced enough with Haskell to know about it prefer laziness by default. This is experimental evidence that laziness by default may actually be a good idea, once you’ve been forced to grok how it’s used in practice.

                                1. 1

                                  Haskell has the Strict language extension, which turns on strictness by default. It’s very rarely used, because most people experienced enough with Haskell to know about it prefer laziness by default. This is experimental evidence that laziness by default may actually be a good idea, once you’ve been forced to grok how it’s used in practice.

                                  I am not quite sure whether this is really evidence. I actually never tried to switch it on. I wonder whether that option plays nicely with existing libraries; I guess not many are tested for not depending on lazy evaluation for efficiency. If you use Haskell and Hackage, I guess you are bound to roll with the default.

                                  1. 2

                                    It works on a per-module basis. All your modules will be compiled with strict semantics, and any libraries will be compiled with the semantics they chose.

                                2. 3

                                  Idris has strict evaluation. It also has dependent types, which are amazing, but strict evaluation is a pretty good perk too.

                                  1. 2

                                    I thought there were annotations for strictness in Haskell.

                                    1. 3

                                      yes, but I consider it to be the wrong default. I’d prefer having an annotation for lazy evaluation. I just remember too many cases where I have been bitten by lazy evaluation behaviour. It makes code so much more complicated to reason about.

                                      1. 1

                                        Do you happen to remember more detail? I enjoy writing Haskell, but I don’t have a strong opinion on laziness. I’ve seen some benefits and rarely been bitten, so I’d like to know more.

                                        1. 1

                                          I only have vague memories, to be honest. Pretty sure some were errors due to non-total functions, which I then started to avoid by using a prelude that only has total ones. But when these occurred, it was hard to find exactly the code path that provoked it. Or rather: harder than it should be.

                                          Then, from the tooling side, I started using Intero (or vim-intero); see https://github.com/commercialhaskell/intero/issues/84#issuecomment-353744900. I’m fairly certain that this is hard to debug because of laziness. In this thread there are a few experienced Haskell devs reporting this problem, so I’d consider it evidence that laziness is not only an issue for beginners who haven’t yet understood Haskell.

                                          PS: Side remark: although I enjoy Haskell, it is kind of tiring that the Haskell community seems to conveniently shift between “Anyone can understand monads and write Haskell” and “If it doesn’t work for you, you aren’t experienced enough”.

                                    2. 2

                                      Eager-evaluating Haskell? At a high level, OCaml is (more or less) an example of that.

                                      It hits a sweet spot between high abstraction and high mechanical sympathy. That’s a big reason why OCaml has quite good performance despite a relatively simple optimizing compiler. As a side effect of that simple optimizing compiler (read: few transformations), it’s also easy to predict performance and do low-level debugging.

                                      Haskell has paid a high price for default laziness.

                                      1. 2

                                        As a side effect of that simple optimizing compiler (read: few transformations), it’s also easy to predict performance and do low-level debugging.

                                        That was used to good effect by Esterel when they did source-to-object code verification of their code generator for aerospace. I can’t find that paper right now for some reason. I did find this one on the overall project.

                                        1. 1

                                          Yes, however I would like to have typeclasses and monads, I guess; that’s not OCaml’s playing field.

                                          1. 1

                                            OCaml should Someday™ get modular implicits, which should provide some of the same niceties as typeclasses.

                                            1. 1

                                              OCaml has monads, so I’m really not sure what you mean by this. Typeclasses are a big convenience, but as F# has shown, they are by no means required for statically typed functional programming. You can get close by abusing a language feature or two, but you’re better off just using existing language features to accomplish the same end that typeclasses provide. I do think F# is working on adding typeclasses, and I think the struggle is of course interoperability with .NET; here’s an abundantly long GitHub issue on the topic: https://github.com/fsharp/fslang-suggestions/issues/243

                                            2. 1

                                              F#, an open source (MIT) sister language, is currently beating or matching OCaml in the Benchmarks Game :). Admittedly that’s almost entirely due to the ease of parallelism in F#.
                                              https://benchmarksgame.alioth.debian.org/u64q/fsharp.html

                                            3. 1

                                              Doesn’t lazy io make your program even more inscrutable?

                                              1. 1

                                                Well, Haskell’s type system makes you aware of many side effects, so it is a better situation than in, for example, Python.

                                                Again, I still prefer eager evaluation as a default, and lazy evaluation as an opt-in.

                                              2. 1

                                                Purescript is very close to what you want then - it’s basically “Haskell with less warts, and also strict” - strict mainly so that they can output clean JavaScript without a runtime.

                                              1. 4

                                                Transcribed for those who like copy-pasting:

                                                When we look at the past of computerdom, it’s through a lens which is very peculiar because things have changed so much, so fast. To me, those fifty years that I’ve been in the computer field have gone so quickly that the past seems ever present. In the 60s and 70s, a lot of young people started communes, and it was a combination of free love - which is a term you don’t hear anymore because we take it for granted - and pot, and LSD, and idealism, and hopes for a new kind of economy, and that spirit of that age leaked into the computer world. There was a sense of possibility in the beginning that is different because we thought that computing would be artisanal. We did not imagine great monopolies… we thought the citizen programmer would be the leader. When I say we, I mean I, but of course I had a sense I was sharing this with a lot of people. The visions of democratization, of citizen participation, created vistas of possibility for artistic expression and artistic expression in software. And software is an art form, though not generally recognized as such. And because of Moore’s Law - which had been stated to me not as Moore’s Law, but just as a general principle, things were going to get faster and cheaper - we will be able to afford it. Right now, a computer with a screen is $35,000; tomorrow, who knows, it’ll be $100, something. So now is the time to start thinking about what would be the documents of the future. As I would abstract it now, the two concepts were: we can have parallel connections between visible documents. So you can have two pages with a connection saying, “this sentence is connected to that paragraph,” and so there’s a visible strap or bridge, and you can’t do that yet. So that was one of my hypertext concepts. And the other hypertext concept was to be able to click on something, and jump to it.
                                                So as the hypertext concept developed - and deteriorated - over the years, only the jump link became popular in the hypertext systems in the 60s and 70s. And then Tim Berners-Lee created the World Wide Web… which was the sixth or seventh hypertext system on the internet. People think it sprang from (unintelligible, please help) but it was just a clean job that had the clout of CERN behind it. How to see the possibilities when there are so many things around you that are in a certain way… I don’t know. The future is an unknown place, and there are a lot of scary things about it. And what aspects you’re going to approach, whether you’re going to go on thinking about leisure, or about the terrible problems that confront the world, all I can say is: close your eyes. And think what might be. My first software designs were largely done with my eyes closed, thinking, “Now if I hit that key, what should happen; if I hit this key, what should happen.” I was able to imagine - they say this can’t be done - but when my interfaces were built, they always felt the way I knew they would. And the people at Xerox PARC said, “That’s never possible, you never know how it’s going to feel.” But I did.

                                                1. 5

                                                  Whatever blogging tools or CMSs we use to publish on the web, we ought to start finding ways to have them generate well-designed books as well as web pages. The services and technology we need are already sitting there, ready to print those books for us.

                                                  Indeed they are. These are the man-pages for the UNIX operating system. They were written in troff, a semantic markup language not unlike the markup language one probably uses to write their blog posts in. A typesetter read the troff files and produced these beautifully typeset document pages.

                                                  History isn’t always obsolete.

                                                  1. 0

                                                    This problem is not easy to fix, but it’s not impossible either. I’ve mostly fixed it for myself.

                                                    A little arrogant I think!

                                                    1. 8

                                                      Very interesting. Having seen some similar visual languages come and go, I do hope it works out well for them. I think that pd and its cousin Max/MSP have shown that visual dataflow languages can be successful, at least in niche domains.

                                                      Regarding the universality, composability, and dual-representation claims they make, there’s some important prior art which they do not cite on their marketing page but I sure hope they’re aware of: Milner’s Bigraphical model.

                                                      1. 2

                                                        That guy was really prolific. I love seeing his name pop up everywhere.

                                                      1. 8

                                                        I have a theory. It’s not that users don’t want push notifications.

                                                        Sorry, your theory is wrong.


                                                        Ok. Hyperbole aside. Here is my optimal user experience: I go to a website and find out I like it. Maybe it’s 538 because I’m a politics junkie and I love their blog posts. I go “jeez, I love these posts so much. I’d really like a notification for whenever there’s a new one.” So I see the “subscribe to this blog” button, which is exactly what I wanted. Hot dog. I press it, and I get the request from the browser to allow notifications. “Uh, duh,” I say to myself, “I definitely want this. Hurry up!” I press the button, and bingo, I’m subscribed on my own terms.

                                                        Fast forward a week. I figure out real fast that only Nate Silver’s posts are interesting. I’m ignoring the notifications now for all other blog posts whenever I see them. So I click on his name because I’m interested in reading his past blog posts when I notice that there’s a subscribe button for Nate Silver’s posts specifically. “Oh,” I say to myself, “I’d like this instead.” I subscribe to Nate Silver’s posts, and then I unsubscribe from the whole site. The point is, I’m interacting with the content I want. The site itself is servile. I am the master.

                                                        Fast forward five months. Everyone went a little crazy with the push notifications. I did, too, and now I’m accidentally subscribed to 248 websites. I get push notifications every 120 milliseconds, on average. This is insane. Suddenly, trumpets blare in the distance. Hark, Vivaldi browser (who?) has added a new browser feature: a notification queue. It just stores all your notifications in a list so you can see them later. Awesome… sort of. I’m getting a million of these. Well, have no fear. You know how Google Inbox rolled out that sweet semantic categorization of emails so that I don’t have to see advertising ever again? Vivaldi does that, now. So I don’t even have to worry about the accidental advertising; it’s just lost forever in my queue and I never check it. Nice! (Not nice for Newegg.)

                                                        Ok, but how do we handle the volume? Easy: add a merge option to push notifications. Here’s an example. I’m subscribed to the Dharma and Greg Art Forum (???). I get a lot of notifications because I’m revered for my hyper-realistic Dharma and Greg fan art. From a user perspective I don’t want to see two thousand notifications from DGAF. I just want to see the most recent notification with a lot of info in it. Like a synopsis: “40 messages, last message from DharmaStalker85”. So set the merge option to Merge-Most-Recent. I’m sure they could be a lot more clever with this if they needed to. You could also add priority options like Important or Breaking, which need no elaboration; the utility is self-evident.
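
                                                        A toy sketch of that Merge-Most-Recent idea (hedged: the function and its summary format are entirely made up here):

```python
from collections import OrderedDict

def merge_most_recent(notifications):
    """Collapse each source's queued notifications into one summary line.

    notifications: list of (source, sender, text) tuples, oldest first.
    """
    latest = OrderedDict()   # remembers first-seen order of sources
    counts = {}
    for source, sender, text in notifications:
        counts[source] = counts.get(source, 0) + 1
        latest[source] = sender          # keep only the newest sender per source
    return [f"{src}: {counts[src]} message(s), last from {sender}"
            for src, sender in latest.items()]
```

So a queue of forty DGAF notifications collapses to a single “DGAF: 40 message(s), last from DharmaStalker85” entry.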

                                                        Look at the web we have now. I don’t have to visit any of the sites I like to check if there’s something new. I start by checking out my notifications for sites I care about. I prioritize, filter, and search based on my whims. Google Chrome revamps their Now product to manage push notifications. It recommends sites I haven’t looked at in a while (or surreptitiously prioritizes sites that pay it a small fee…). It manages my notifications in evolving and more interesting ways. I already get interesting app notifications on my phone (new Netflix movies, new podcasts, etc.), so it’s time for the browser to catch up, too.

                                                        1. 4

                                                          I read this comment expecting the punchline at the end to be “and it was really just an RSS feed all along”.

                                                          1. 1

                                                            I wrote something like that and deleted it…

                                                        1. 16

                                                          99.9999999% of push notifications are pure shit, exactly like advertisements. Why should one be better than the other?

                                                          1. 5

                                                            Hmm, every single push notification I get on my phone is useful (new podcast published, new chat message from people I care about, period ticket for public transport has expired). Every web push notification is similar (someone pinged me in IRC, someone sent me a message on Telegram). If 99.999% of the push notifications you receive are shit, you have failed as an administrator of your personal devices.

                                                            1. 17

                                                              you have failed as an administrator of your personal devices

                                                              it’s quite the industry fail that this is even a role that exists.

                                                              1. 2

                                                                I administrate pretty aggressively as well, but I found it only manageable, at least on Android, if you start by blocking all notifications, and then opt in very selectively. Specifically, I have every app’s notifications blacklisted in the main Android settings, and then I have selectively enabled notifications from only two (Signal and Messages).

                                                                Before I discovered you could do that, I found curating notifications to be too much of a game of whack-a-mole that repeatedly wasted my time and energy. Even when I’d get into a state where I was happy with the notifications I was getting, it was always temporary, because many apps will add new types of notifications when the app auto-updates, and opt you in to them by default. Then you have to try to dig through each app’s settings menu (each one different and seemingly deliberately complicated) to figure out where this new notification is coming from. Examples from the past few months of apps that have done this: Twitter, Google Maps, Maps.me. After this happened repeatedly, I got tired of it and just blacklisted them all. If they weren’t so aggressively trying to spam me, I wouldn’t mind notifications from some (e.g. I found some of the Google Maps transit notifications useful), but not at the cost of every other app update adding a new kind of notification to advertise McDonald’s locations to me.

                                                                1. 1

                                                                  I have somehow never installed any notification-spammy Android apps. Well, almost — SoundHound occasionally shows some junk, but it’s so rare I haven’t even bothered to disable it.

                                                                  The notification I see the most is “tap to update Firefox Nightly” :D

                                                                2. 2

                                                                  Note that you added an important qualifier, that you receive. This is a subset of all notifications.

                                                                  1. 2

                                                                    I mean 100% of the push notifications they ask me permission for, and that I refuse. I should never have to refuse something I didn’t want in the first place.

                                                                  2. 3

                                                                    90% of everything is crap. 90% of HTML’s img tag usage is crap.

                                                                    Those statements are not very useful by themselves. The user has to limit the number of sites they use, or a browser, with an optional extension, has to filter all the sites.

                                                                    If I ever implement my idea of a search engine that doesn’t index ad-serving (and maybe also JavaScript-serving) sites, then I’ll see whether that kind of web is useful.

                                                                    1. 1

                                                                      Starting to hear Neil Postman’s ghost wail “I told you so”.

                                                                    1. 3

                                                                      Nope. It’s 2017, it’s time to stop parsing strings with regular expressions. Use structured logging.

                                                                      No thanks! I’ll stick to strings.

                                                                      1. 2

                                                                        Could you please explain why? Your comment, as it is, is not bringing any value.

                                                                        1. 1

                                                                          Not the OP, but here’s why I don’t like structured logging:

                                                                          • logs will ultimately be read by humans and extra syntax gets in the way.
                                                                          • structured logging tends to bulk the log with too much useless information.
                                                                          • most of the use cases of structured logging could be better handled via instrumentation/metrics.
                                                                          • string based logs can be emitted by any language without dependencies so every system you manage could have compatible logging.

                                                                          Arguably a space separated line is a fixed-schema structured log with the least extraneous syntax possible.
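                                                                          For what it’s worth, the contrast can be sketched like this (field names and values are illustrative, not from any particular system):

```python
import json

# String-based: human-readable, but consumers must know the field order
# and write an ad-hoc parser for it.
line = "2017-11-02T10:15:00Z WARN auth login_failed user=alice attempts=3"

# Structured (JSON): machine-readable first; any consumer can pick out
# fields by name without knowing the layout in advance.
record = {
    "ts": "2017-11-02T10:15:00Z",
    "level": "WARN",
    "component": "auth",
    "event": "login_failed",
    "user": "alice",
    "attempts": 3,
}
structured_line = json.dumps(record, sort_keys=True)

# Parsing the structured line back needs no knowledge of field positions.
parsed = json.loads(structured_line)
print(parsed["user"])      # alice
print(parsed["attempts"])  # 3
```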

                                                                          1. 6

                                                                            To me (in the same order):

                                                                            • logs are ultimately read by humans once correctly parsed/sorted, which means they should be machine-readable first so they can be processed easily into a readable message.
                                                                            • Too much information is rarely a problem with logging; not enough context often is.
                                                                            • Probably, but structured logging still offers simpler ways of doing this.
                                                                            • You just push the formatting problem from the sender (which can use a simple format) to the receiver (which has to parse different formats according to what devs fancy).

                                                                            To me the best recap on why I like structured logging is: https://kartar.net/2015/12/structured-logging/

                                                                            1. 2

                                                                              most of the use cases of structured logging could be better handled via instrumentation/metrics.

                                                                              Speaking as a developer of Prometheus, you need both. Metrics are great for an overall view of the system and all its subsystems, but can’t tell you about every individual user request. Logs tell you about every request, but are limited in terms of understanding the broader system.

                                                                              I wrote a longer article that touches on this at https://thenewstack.io/classes-container-monitoring/
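                                                                              A toy illustration of that difference (plain Python, not a real metrics library; the request data is made up): a metric aggregates away the individuals, while the log keeps every request.

```python
from collections import Counter

requests = [
    {"user": "alice", "path": "/login", "status": 500},
    {"user": "bob",   "path": "/home",  "status": 200},
    {"user": "carol", "path": "/login", "status": 500},
]

# Metric: a cheap aggregate view -- how many errors overall.
errors_total = Counter()
# Log: one entry per request -- who hit what, individually.
log = []

for r in requests:
    if r["status"] >= 500:
        errors_total["http_errors_total"] += 1
    log.append(f'user={r["user"]} path={r["path"]} status={r["status"]}')

print(errors_total["http_errors_total"])  # 2 -- but not *which* users failed
print(log[0])  # the log still knows it was alice on /login
```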

                                                                        1. 6

                                                                          Pretty easy to cheat if you recognize those integers and can write a for loop. I was hoping for better art, though…

                                                                          1. 2

                                                                            Lol, no way, it was worth it. All the effort and mystery around something so kitsch. I’m glad I got to experience this.

                                                                            1. 2

                                                                              The artist told me that was intentional. I think it was interesting, whether intentional or not.

                                                                            1. 2

                                                                              The URL is a timestamp, measured in seconds.
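                                                                              For anyone who wants to decode such a URL themselves, a Unix timestamp in seconds converts like this (the value below is just an example, not the actual URL from this thread):

```python
from datetime import datetime, timezone

ts = 1500000000  # example value only
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2017-07-14T02:40:00+00:00
```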

                                                                              1. 1

                                                                                Ok. So far, you’d think this whole thing boils down to “once I update my Linux distro with the latest fixes, I just want to make sure I’m not running on ancient hardware”. And since virtually all x86 hardware made this decade has PCID support, everything is fine. Right? That was my first thought too. Then I went and checked a bunch of systems. Most of the Linux instances I looked at had no pcid feature, yet all of them were running on modern hardware. Oh shit.
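                                                                                One quick way to run that check yourself is to look for the flag in the flags line of /proc/cpuinfo; a minimal sketch (the sample string below is illustrative, not from a real machine):

```python
def has_cpu_flag(cpuinfo_text, flag):
    """Return True if any 'flags' line in /proc/cpuinfo-style text lists the flag."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            _, _, flags = line.partition(":")
            if flag in flags.split():
                return True
    return False

# Illustrative excerpt; on a real box, use open("/proc/cpuinfo").read().
sample = "flags\t\t: fpu vme de pse tsc msr pae pcid sse sse2"
print(has_cpu_flag(sample, "pcid"))  # True
print(has_cpu_flag(sample, "pge"))   # False
```

Note that on VMs the hypervisor may mask CPUID features, which is one reason modern hardware can still show no pcid flag to the guest.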