1. 2

    Figuring out how to move my desktop’s file system from LVM on one disk to something that can take advantage of all of the available disks.

    1. 2

      Non-Fiction: Improving Interrupt Response Time in a Verifiable Protected Microkernel and Why Dependent Types Matter.

      Fiction: The Trees.

      It’s going to be a good week, I can feel it.

      1. 2

        Reading up on LLVM-IR with the goal of being able to link multiple files and eventually add support for inline ASM to a programming language.
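
        As a concrete starting point, here’s a minimal sketch (the file name, the function, and the x86-64 target are assumptions of mine, not from the post): a tiny C function using GNU extended inline asm. Running clang -S -emit-llvm add_one.c -o add_one.ll over it shows the asm template carried into the IR as an inline call ... asm expression, roughly the construct a front end adding inline-ASM support would have to emit, and separate .ll files can then be merged with llvm-link a.ll b.ll -S -o linked.ll.

        ```c
        /* add_one.c: hypothetical example; x86-64 and AT&T syntax assumed. */
        #include <stdio.h>

        static int add_one(int x)
        {
            int result;
            /* GNU extended asm: the "0" constraint ties the input to the
               same register as the output operand. */
            __asm__("addl $1, %0"
                    : "=r"(result)
                    : "0"(x));
            return result;
        }

        int main(void)
        {
            printf("%d\n", add_one(41));   /* prints 42 */
            return 0;
        }
        ```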

        1. 14

          Microsoft lets you download a Windows 10 ISO for free now; I downloaded one yesterday to set up a test environment for something I’m working on. With WSL and articles like this, I thought maybe I could actually consider Windows as an alternative work environment (I’ve been 100% some sort of *nix for decades).

          Nope. Dear lord, the amount of crapware and shovelware. Why the hell does a fresh install of an operating system have Skype, Candy Crush, OneDrive, ads in the launcher, and an annoying voice assistant that just starts talking out of nowhere?

          1. 5

            I’ll give you ads in the launcher – that sucks a big one – but Skype and OneDrive don’t seem like crapware. Mac OS comes with Messages, FaceTime and iCloud; it just so happens that Apple’s implementations of messaging and syncing are better than Microsoft’s. Bundling a messaging program and a file syncing program seems helpful to me, and Skype is (on paper) better than what Apple bundles because you can download it for any platform. It’s a shame that Skype in particular is such an unpleasant application to use.

            1. 3

              It’s not even that they’re useful, it’s that they’re not optional. I’m bothered by the preinstalled stuff on Macs too, and the fact that you have to link your online accounts deeply into the OS.

              I’m basically a “window manager and something to intelligently open files by type” kinda guy. Anything more than that I’m not gonna use, and thus it bothers me. I’m a minimalist.

              1. 2

                I am too, and I uninstall all that stuff immediately; Windows makes it very easy to remove it. “Add or Remove Programs” lets you remove Skype and OneDrive with one click each.

            2. 2

              Free?? I guess you can download an ISO, but a license for Windows 10 Home edition is $99, and the better editions are even more. WSL doesn’t work on Home either; I think you need Professional or a higher edition.

              1. 2

                It works on Home.

                1. 1

                  Yup. Works great on Home, according to this, minus Docker, which needs Hyper-V support.

                  https://www.reddit.com/r/bashonubuntuonwindows/comments/7ehjyj/is_wsl_supported_on_windows_10_home/

              2. 1

                I always forget about this until I have to rebuild Windows and then I have to go find my scripts to uncrap Windows 10. Now I don’t do anything that could break Windows because I know my scripts are out of date.

                It’s better since I’ve removed all the garbage, but holy cats that experience is awful.

              1. 26

                Something clearly got this author’s goat; this rebuttal feels less like a reasoned response and more like someone yelling “NO U” into WordPress.

                Out of order execution is used to hide the latency from talking to other components of the system that aren’t the current CPU, not “to make C run faster”.

                Also, attacking academics as people living in ivory towers is an obvious ad hominem. It doesn’t serve any purpose in this article and, if anything, weakens it. Tremendous amounts of practical CS come from academia and professional researchers, and none of that should be thrown out just because of where it came from.

                1. 10

                  So, in context, the bit you quote is:

                  The author criticizes things like “out-of-order” execution which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster.

                  The author was completely correct here, and substituting in JS/C++/D/Rust/Fortran/Ada would’ve still resulted in a correct statement.

                  The academic software preference (assuming that such a thing exists) is clearly for parallelism, for “dumb” chips (because computer science and PLT are cooler than computer/electrical engineering, one supposes), for “smart” compilers and PL tricks, and against “dumb” languages like C. That appears to be the assertion the author here would make, and I don’t think it’s particularly wrong.

                  Here’s the thing, though: none of that has been borne out in mainstream usage. In fact, the big failure the author mentioned here (the Sparc Tx line) was not alone! The other big offender you may have heard of is the Itanic, from the folks at Intel. A similar example of the philosophy not really getting traction is the (very neat and clever) Parallax Propeller line. Or the relative failure of the Adapteva Parallella boards and their Epiphany processors.

                  For completeness’ sake, the only chips with massive core counts and simple execution models are GPUs, and those are only really showing their talent in number crunching and hashing–and even then, for the last decade, somehow limping along with C variants!

                  1. 2

                    One problem with the original article was that it located the requirement for ILP in the imagined defects of the C language. That’s just false.

                    Weird how nobody seems to remember the Terra.

                    1. 3

                      In order to remember you would have to have learned about it first. My experience is that no one who isn’t studying computer architecture or compilers in graduate school will be exposed to more exotic architectures. For most technology professionals, working on anything other than x86 is way out of the mainstream. We can thank the iPhone for at least making “normal” software people aware of ARM.

                      1. 4

                        I am so old that I remember reading about the Tomasulo algorithm in Computer Architecture class and wondering why anyone would need that on a modern computer with a fast cache - like a VAX.

                      2. 1

                        For those of us who don’t, what’s Terra?

                        1. 2

                          Of course, I spelled it wrong.

                          https://en.wikipedia.org/wiki/Cray_MTA

                    2. 9

                      The purpose of out-of-order execution is to increase instruction-level parallelism (ILP). While covering the latency of off-chip access is one way out-of-order execution helps, the other (more common) benefit is that non-dependent instructions using independent ALUs can issue immediately and complete in whatever order, instead of stalling the whole pipeline to maintain instruction ordering. When you mix this with good branch prediction and complex fetch and issue logic, you get, in effect, unrolled, parallelized loops with vanilla C code.
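
                      To make the dependency-chain point concrete, here’s a rough sketch in plain C (the function names and the example itself are mine, not from the comment): the first loop is one long serial chain of floating-point adds, while the second keeps four independent accumulators that an out-of-order, superscalar core can keep in flight at once. Built with something like gcc -O2 (no -ffast-math, so the compiler can’t reassociate the serial loop itself), the second version typically runs noticeably faster on the same data.

                      ```c
                      #include <stddef.h>

                      /* One serial dependency chain: every add waits on the previous one. */
                      double sum_serial(const double *a, size_t n)
                      {
                          double s = 0.0;
                          for (size_t i = 0; i < n; i++)
                              s += a[i];
                          return s;
                      }

                      /* Four independent chains: an out-of-order core can issue these
                         adds in parallel and overlap their latencies, even though the
                         source is vanilla C. */
                      double sum_ilp(const double *a, size_t n)
                      {
                          double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
                          size_t i = 0;
                          for (; i + 4 <= n; i += 4) {
                              s0 += a[i];
                              s1 += a[i + 1];
                              s2 += a[i + 2];
                              s3 += a[i + 3];
                          }
                          for (; i < n; i++)   /* leftover elements */
                              s0 += a[i];
                          return (s0 + s1) + (s2 + s3);
                      }
                      ```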

                      Whether it’s fair to say the reasoning was “to make C run faster” is certainly debatable, but the first mainstream out of order processor was the Pentium Pro (1996). Back then, the vast majority of software was written in C, and Intel was hellbent on making each generation of Pentium run single-threaded code faster until they hit the inevitable power wall at the end of the NetBurst life. We only saw the proliferation of highly parallel programming languages and libraries in the mainstream consciousness after that, when multicores became the norm to keep the marketing materials full of speed gainz despite the roadblock on clockspeed and, relatedly, single-threaded performance.

                      1. 1

                        the first mainstream out of order processor was the Pentium Pro (1996).

                        Nope.

                    1. 3

                      I think this would be a more effective article if it were titled “PostgreSQL Keys in Depth”. There’s a lot of advice in here that relies implicitly on how Postgres is implemented, and if that advice is ported to a database that behaves differently (say, SQL Server), it will lead to serious performance problems.