1. 2

    Yes and no. I certainly can derive pleasure and satisfaction from solving tricky problems elegantly; but the workaday production of banal software to deal with banal problems is just that – banal. It’s better than digging ditches, and I’m very glad that my early interest in computers has allowed me to make obscene amounts of money doing something that is within shouting distance of “fun”, but I’m not particularly motivated to code for code’s sake.

    This is largely why I decided that I wanted to get into technical management; I find the problems of people, systems, and culture much more challenging and engaging than those of programming.

    1. 6

      I worked on a really fun project where I ended up writing a compiler that, for lack of a better term, added materialized views to SQLite by taking an input language and spitting out a collection of tables and triggers. Inserts into primary tables triggered inserts into interior tables, which had triggers that inserted rows into additional interior tables, and so on, until the final materialized view was updated. The interior tables stored/cached partial join results to avoid recalculating them each time.

      It was used for building efficient and dependable embedded event correlation engines. It used a variation of the Rete algorithm implemented as a SQLite trigger program generated from the input language. It had neat things like justification (“why was this fact asserted? Because X, Y, Z. Why was Z asserted? Because A, B, C…”), automatic consistency (removing any support for an event/fact would cause it to lose support and be withdrawn), and some fun stuff with recursive rules and action-selection mechanisms (“this rule says to withdraw fact A if I see B, but this rule says to assert A if I see C… what do I do if I see B and C?”).
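      The compiler itself isn’t shown here, but the core trick – triggers that propagate inserts from primary tables into cached “interior” tables – can be sketched directly in SQLite. This is a minimal, hypothetical example (the schema and names are invented, and a real Rete network would chain many such tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Primary table: raw events are inserted here.
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL);

-- "Interior" table: caches a partial result so it never has to be
-- recomputed from scratch (a hand-rolled materialized view).
CREATE TABLE order_totals (customer TEXT PRIMARY KEY, total REAL);

-- The trigger keeps the cached view in sync on every insert.
CREATE TRIGGER orders_ai AFTER INSERT ON orders BEGIN
  INSERT OR IGNORE INTO order_totals (customer, total) VALUES (NEW.customer, 0);
  UPDATE order_totals SET total = total + NEW.amount
   WHERE customer = NEW.customer;
END;
""")

con.executemany("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                [("alice", 10.0), ("bob", 5.0), ("alice", 2.5)])

print(con.execute("SELECT customer, total FROM order_totals "
                  "ORDER BY customer").fetchall())
# → [('alice', 12.5), ('bob', 5.0)]
```

      In the system described above, many such trigger/table pairs would be chained, each interior table’s trigger feeding the next, until the final materialized view is updated.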

      It was part of a DARPA project.

      1. 2

        This is so cool. Where was this in 2006, when I was struggling with the explosion in complexity of the iTunes video pipeline?

      1. 2

        I spent some time a few years ago speeding up my own shell’s startup (I think it was up to around 5 seconds), and replacing Oh-My-Zsh with hand-built files was one of the biggest wins.

        Out of interest I just timed it, and it’s still under half a second so I’m happy. (Although Zim looks interesting, thanks for pointing that out @aminb!)

        $ echo -n "$SHELL "; for i in {1..10}; do /usr/bin/time $SHELL -ic exit; done |& awk '{ i += $1 } END { print i/NR }'
        /usr/local/opt/zsh/bin/zsh 0.34
        

        How fast does your shell start?
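        The same measurement can be sketched in Python if you’d rather not depend on `time`’s output format – a rough equivalent of the one-liner above, not a replacement for it; `/bin/sh` is just an assumed default, so pass your own shell’s path:

```python
import subprocess
import time

def startup_time(shell="/bin/sh", runs=10):
    """Average wall-clock seconds to start `shell` interactively and exit."""
    total = 0.0
    for _ in range(runs):
        t0 = time.perf_counter()
        # -i forces interactive startup so rc files are actually loaded.
        subprocess.run([shell, "-ic", "exit"],
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        total += time.perf_counter() - t0
    return total / runs

print(f"{startup_time():.3f}s")
```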

        1. 3

          Nice :) And no problems.

          How fast does your shell start?

          Here’s mine:

          ➜ echo -n "$SHELL "; for i in {1..10}; do /usr/bin/time $SHELL -ic exit; done |& awk '{ i += $1 } END { print i/NR }'
          /bin/zsh 0.0206667
          
          1. 1

            $HOME/.brew/bin/fish : 0.0241765

          2. 1

            I really don’t understand why we need to optimize a shell.

            ; echo -n $SHELL' ';{for(i in `{seq 1 10}){/usr/bin/time $SHELL -ic exit}}>[2=1]|awk '{i+=$1}END{print i/NR}'
            /p9p/bin/rc 0
            
            1. 3

              Mostly to avoid flow-state interruptions. I want tools that stay out of the way. Prompting for updates when I just want to do work is dumb.

          1. 1

            They counter device fingerprinting by trackers now too; sites will only get a subset of information, because all Safari sessions are made to look the same. These improvements will be on both Mac and iOS.

            Yes! This is wonderful.

            1. 23

              Kinda late to the UNIX-bashing bandwagon :)

              Also, Windows owes more of its legacy to VMS.

              1. 10

                It does, but users don’t use any of the VMSy goodness in Windows: To them it’s just another shitty UNIX clone, with everything being a file or a program (which is also a file). I think that’s the point.

                Programmers rarely use the VMSy goodness either, especially if they also want their stuff to work on Mac. They treat Windows as a kind of backwards UNIX cousin (which is a shame, because the API is better; IOCP et al.).

                Sysadmins often struggle with Windows because of all the things underneath that aren’t files.

                Message/Object operating systems are interesting, but for the most part (OS/2, BeOS, QNX) they degraded into this “everything is a file” nonsense…

                Until they got rid of the shared filesystem: iOS finally required messaging for applications to communicate on their own, and while it’s been rocky, it’s starting to paint a picture to the next generation who will finally make an operating system without files.

                1. 10

                  If we talk user experiences, it’s more a CP/M clone than anything. Generations later, Windows still smells of COMMAND.COM.

                  1. 6

                    yes, the bowels are VMS, the visible stuff going out is CP/M

                    1. 3

                      Bowels is a good metaphor. There’s good stuff in Windows, but you’ve got to put on a shoulder length glove and grab a vat of crisco before you can find any of it.

                  2. 10

                    I think you’re being a little bit harsh. End-users definitely don’t grok the VMSy goodness; I agree. And maybe the majority of developers don’t, either (though I doubt the majority of Linux devs grok journald v. syslogs, really understand how to use /proc, grok Linux namespaces, etc.). But I’ve worked with enough Windows shops to promise you that a reasonable number of Windows developers do get the difference.

                    That said, I have a half-finished book from a couple years ago, tentatively called Windows Is Not Linux, which dove into a lot of the, “okay, I know you want to do $x because that’s how you did it on Linux, and doing $x on Windows stinks, so you think Windows stinks, but let me walk you through $y and explain to you why it’s at least as good as the Linux way even though it’s different,” specifically because I got fed up with devs saying Windows was awful when they didn’t get how to use it. Things in that bucket included not remoting in to do syswork (use WMI/WinRM), not doing raw text munging unless you actually have to (COM from VBScript/PowerShell are your friends), adapting to the UAC model v. the sudo model, etc. The Windows way can actually be very nice, but untraining habits is indeed hard.

                    1. 6

                      I don’t disagree with any of that (except maybe that I’m being harsh), but if you parse what I’m saying as “Windows is awful”, it’s because you’ve read my indelicate tone instead of my words.

                      The point of the article is that those differences are superficial, and mean so very little to the mental model of use and implementation as to make no difference: IOCP is just threads and epoll, and epoll is just IOCP and fifos. Yes, IOCP is better, but I desperately want to see something new in how I use an operating system.

                      I’ve been doing things roughly the same way for nearly four decades, despite the fact that I’ve done Microsoft/IBM for a decade, Linux since Slackware 1.1 (Unix since tapes of SCO), Common Lisp (of all things) for a decade, and OSX for nearly that long. They’re all the same, and that point is painfully clear to anyone who has actually used these things at a high level: I edit files, I copy files, I run programs. Huzzah.

                      But: it’s also obvious to me, as someone who has gone into the bowels of these systems: I wrote winback, which was for a long time the only tool for doing online Windows backups of standalone Exchange servers and domain controllers; I’m the author of (perhaps) the fastest Linux webserver; I wrote ml, a Linux emulator for OSX; I worked on ECL, principally adding CL exceptions to streams and the Slime implementation. And so on.

                      So: I understand what you mean when you say Windows is not Linux, but I also understand what the author means when he says they’re the same.

                      1. 2

                        That actually makes a ton of sense. Can I ask what would qualify as meaningfully different for you? Oberon, maybe? Or a version of Windows where WinRT was front-and-center from the kernel level upwards?

                        1. 2

                          I didn’t use the term “meaningfully different”, so I might be interpreting your question too broadly.

                          When I used VMS, I never “made a backup” before I changed a file. That’s really quite powerful.

                          The Canon Cat had “pages” you would scroll through. Like other forth environments, if you named any of your blocks/documents it was so you could search [leap] for them, not because you had hierarchy.

                          I also think containers are very interesting. The encapsulation of the application seems to massively change the way we use them. Like the iOS example, they don’t seem to need “files” since the files live inside the container/app. This poses some risk for data portability. There are other problems.

                          I never used Oberon or WinRT enough to feel as comfortable commenting about them as I do about some of these other examples.

                      2. 2

                        If it’s any motivation I would love to read this book.

                        Do you know of any books or posts I could read in the meantime? I’m very open to the idea that Windows is nice if you know which tools and mental models to use, but kind of by definition I’m not sure what to Google to find them :)

                        1. 4

                          I’ve just been hesitant because I worked in management for two years after I started the book (meaning my information atrophied), and now I don’t work with Windows very much. So, unfortunately, I don’t immediately have a great suggestion for you. Yeah, you could read Windows Internals 6, which is what I did when I was working on the book, but that’s 2000+ pages, and most of it honestly isn’t relevant for a normal developer.

                          That said, if you’ve got specific questions, I’d love to hear them. Maybe there’s a tl;dr blog post hiding in them, where I could salvage some of my work without completing the entire book.

                      3. 7

                        but users don’t use any of the VMSy goodness in Windows: To them it’s just another shitty UNIX clone, with everything being a file or a program (which is also a file). I think that’s the point.

                        Most users don’t know anything about UNIX and can’t use it. On the UI side, pre-NT Windows was a Mac knockoff mixed with MSDOS, which was based on a DOS they got from a third party. Microsoft even developed software for Apple in that time. Microsoft’s own users had previously learned MSDOS menus and some commands. Then, they had a nifty UI like Apple’s running on MSDOS. Then, Microsoft worked with IBM to make a new OS/2 with its philosophy. Then, Microsoft acquired the OpenVMS team, made a new kernel, and built a new GUI with wizard-based configuration of services, vs the command line, text, and pipes as in UNIX.

                        So, historically, internally, layperson-facing, and administration-wise, Windows is a totally different thing than UNIX. Hence the difficulty moving Windows users to UNIX when it’s a terminal OS with X Windows, vs some Windows-style stuff like Gnome or KDE.

                        You’re also overstating the “everything is a file” point by conflating OSes that merely store programs in files with those, like UNIX or Plan 9, that use the file metaphor for about everything. It’s a false equivalence: from what I remember, you don’t get your running processes in Windows by reading the filesystem, since it doesn’t use that metaphor or API. It’s object-based, with API calls specific to different categories. Different philosophy.

                        1. 3

                          Bitsavers has some internal emails from DEC at the time of David Cutler’s departure.

                          I have linked to a few of them.

                          David Cutler’s team at DECwest was working on Mica (an operating system) for PRISM (a RISC CPU architecture). PRISM was canceled in June of 1988. Cutler resigned in August of 1988, and 8 other DECwest alumni followed him to Microsoft.

                      4. 5

                        I have my paper copy of The Unix Hater’s Handbook always close at hand (although I’m missing the barf bag, sad to say).

                        1. 5

                          I always wanted to ask the author of The Unix Hater’s Handbook if he’s using Mac OS X

                          8~)

                          1. 5

                            It was edited by Simson Garfinkel, who co-wrote Building Cocoa Applications: a step-by-step guide. Which was sort of a “port” of Nextstep Programming Step One: object-oriented applications

                            Or, in other words, “yes” :)

                            1. 2

                              Add me to the list curious about what they ended up using. The hoaxers behind UNIX admitted they’ve been coding in Pascal on Macs. Maybe it’s what the rest were using if not Common LISP on Macs.

                          2. 7

                            Beat me to it. The author is full of it when saying Windows is built on UNIX. Microsoft stealing, cloning, and improving OpenVMS into Windows NT is described here. This makes the Linux zealots’ parodies about a VMS desktop funnier, given that one destroyed Linux in the desktop market. So, we have VMS and UNIX family trees going in parallel, with the UNIX tree having more branches.

                            1. 4

                              The author doesn’t say Windows is built on Unix.

                              1. 5

                                “we are forced to choose from: Windows, Apple, Other (which I shall refer to as “Linux” despite it technically being more specific). All of these are built around the same foundational concepts, those of Unix.”

                                Says it’s built on the foundational concepts of UNIX. It’s built on a combo of DOS, OS/2, OpenVMS, and Microsoft concepts they called the NT kernel. The only thing UNIX-like was the networking stack they got from Spider Systems. They’ve since rewritten their networking stack from what I heard.

                                1. 4

                                  Says it’s built on the foundational concepts of UNIX.

                                  I don’t see any reason to disagree with that.

                                  The only thing UNIX-like …

                                  I don’t think that’s a helpful definition of “unix-like”.

                                  It’s got files. Everything is a file. Windows might even be a better UNIX than Linux (since UNC)

                                  Cutler might not have liked UNIX very much, but Windows NT ended up UNIX anyway because none of that VMS-goodness (Versions, types, streams, clusters) ended up in the hands of Users.

                                  1. 10

                                    It’s got files. Everything is a file.

                                    Windows is object-based. It does have files, which are just another kind of object. The files come from MULTICS, which UNIX also copied in some ways. Even the name was a play on it: UNICS. I think Titan invented the access permissions. The internal model, with its subsystems, was more like a microkernel design running OS emulators as processes. They did their own thing for most of the rest with the Win32 API and registry. Again, not quite how a UNIX programming guide teaches you to do things. They got clustering later, too, with them and Oracle using the distributed-lock approach from OpenVMS.

                                    Windows and UNIX are very different in their approaches to architecture. They’re different in how a developer is expected to build individual apps and compose them. Windows wasn’t even developed on UNIX: they used OS/2 workstations for that. There’s no reason to say Windows is grounded in the UNIX philosophy. It’s a lie.

                                    “Windows NT ended up UNIX anyway because none of that VMS-goodness (Versions, types, streams, clusters) ended up in the hands of Users.”

                                    I don’t know what you’re saying here. Neither VMS nor Windows teams intended to do anything for UNIX users. They took their own path except for networking for obvious reasons. UNIX users actively resisted Microsoft tech, too. Especially BSD and Linux users that often hated them. They’d reflexively do the opposite of Microsoft except when making knockoffs of their key products like Office to get desktop users.

                                    1. 3

                                      Windows is object-based.

                                      Consider what methods of that “object” a program like Microsoft Word must be calling besides “ReadFile” and “WriteFile”.

                                      That the kernel supports more methods is completely pointless. Users don’t interact with it. Programmers avoid it. Sysadmins don’t understand it and get it wrong.

                                      I don’t know what you’re saying here.

                                      That is clear, and yet you’re insisting I’m wrong.

                                      1. 3

                                        Except, that’s completely wrong.

                                        I just started Word and dumped a summary of its open handles by object type:

                                        C:\WINDOWS\system32>handle -s -p WinWord.exe
                                        
                                        Nthandle v4.11 - Handle viewer
                                        Copyright (C) 1997-2017 Mark Russinovich
                                        Sysinternals - www.sysinternals.com
                                        
                                        Handle type summary:
                                          ALPC Port       : 33
                                          Desktop         : 1
                                          Directory       : 3
                                          DxgkSharedResource: 2
                                          DxgkSharedSyncObject: 1
                                          EtwRegistration : 324
                                          Event           : 431
                                          File            : 75
                                          IoCompletion    : 66
                                          IoCompletionReserve: 1
                                          IRTimer         : 8
                                          Key             : 171
                                          KeyedEvent      : 24
                                          Mutant          : 32
                                          Process         : 2
                                          Section         : 67
                                          Semaphore       : 108
                                          Thread          : 138
                                          Timer           : 7
                                          Token           : 3
                                          TpWorkerFactory : 4
                                          WaitCompletionPacket: 36
                                          WindowStation   : 2
                                        Total handles: 1539
                                        

                                        Each of these types is a distinct kernel object with its own characteristics and semantics. And yes, you do create and interact with them from user-space. Some of those will be abstracted by lower-level APIs, but many are directly created and managed by the application. You’ll note the number of open “files” is a very small minority of the total number of open handles.

                                        Simple examples of non-file object types commonly manipulated from user-land include Mutants (CreateMutex) and Semaphores (CreateSemaphore). Perhaps the most prominent example is manipulating the Windows Registry; this entails opening “Key” objects, which per above are entirely distinct from regular files. See the MSDN Registry Functions reference.

                                        1. 0

                                          None of these objects can exist on a disk; they cannot persist beyond shutdown, and do not have any representation beyond their instantaneous in-memory instance. When someone wants an “EtwRegistration” they’re creating it again and again.

                                          Did you even read the article? Or are you trolling?

                                          1. 3

                                            None of these objects can exist on a disk; they cannot persist beyond shutdown, and do not have any representation beyond their instantaneous in-memory instance. When someone wants an “EtwRegistration” they’re creating it again and again.

                                            Key objects do typically exist on disk. Admittedly, the underlying datastore for the Registry is a series of files, but you never directly manipulate those files. In the same sense that you may ask for C:\whatever.txt, you may ask for HKLM:\whatever. We need to somehow isolate the different persisted data streams, and that isolation mechanism is a file. That doesn’t mean you have to directly manipulate those files if the operating system provides higher-level abstractions. What exactly are you after?

                                            From the article:

                                            But in Unix land, this is a taboo. Binary files are opaque, say the Unix ideologues. They are hard to read and write. Instead, we use Text Files, for it is surely the path of true righteousness we have taken.

                                            The Windows Registry, which is a core part of the operating system, is completely counter to this. It’s a bunch of large binary files, precisely because Microsoft recognised storing all that configuration data in plain text files would be completely impractical. So you don’t open a text file and write to it, you open a Registry key, and store data in it using one of many predefined data types (REG_DWORD, etc…).

                                            Did you even read the article? Or are you trolling?

                                            It sounds like you’re not interested in a constructive and respectful dialogue. If you are, you should work on your approach.

                                            1. -3

                                              What exactly are you after?

                                              Just go read the article.

                                              It’s about whether basing our entire interactions with a computer on a specific reduction of verbs (read and write) is really exploring what the operating system can do for us.

                                              That is a very interesting subject to me.

                                              Some idiot subscribed to the idea that Windows is basically “built on Unix”, then back-pedalled it to being about the same “foundational” concepts, then chose to narrowly and uniquely interpret “foundational” in a very different way than the article does.

                                              Yes, windows has domains and registries and lots of directory services, but they all have the exact same “file” semantics.

                                              But now you’re responding to this strange interpretation of “foundational” because you didn’t read the article either. Or you’re a troll. I’m not sure which yet.

                                              Read the article. It’s not well written but it’s a very interesting idea.

                                              Each of these types is a distinct kernel object with its own characteristics and semantics

                                              Why do you bring this up in response to whether Windows is basically the same as Unix? Unix has lots of different kernel “types” all backed by “handles”. Some operations and semantics are shared by handles of different types, but some are distinct.

                                              I don’t understand why you think this is important at all.

                                              It sounds like you’re not interested in a constructive and respectful dialogue. If you are, you should work on your approach.

                                              Do you often jump into the middle of a conversation with “Except, that’s completely wrong?”

                                              Or are you only an asshole on the Internet?

                                              1. 4

                                                Or are you only an asshole on the Internet?

                                                I’m not in the habit of calling people “asshole” anywhere, Internet or otherwise. You’d honestly be more persuasive if you just made your points without the nasty attacks. I’ll leave it at that.

                                      2. 2

                                        networking for obvious reasons

                                        Them being what? Is the BSD socket API really the ultimate networking abstraction?

                                        1. 7

                                          The TCP/IP protocols were part of a UNIX. AT&T gave UNIX away for free. They spread together with early applications being built on UNIX. Anyone reusing the protocols or code will inherit some of what UNIX folks were doing. They were also the most mature networking stacks for that reason. It’s why re-using BSD stacks was popular among proprietary vendors. On top of the licensing.

                                          Edit: Tried to Google you a source talking about this. I found one that mentions it.

                            1. 2

                              … and I guess this is why software is so terrible.

                              1. 7

                                Funny thing is, though – software that is not subject to the commercial incentives is still terrible.

                                1. 1

                                  How about all the free / open source infrastructure (from kernels thru daemons thru libs thru languages & compilers) the commercial sector builds their projects upon? Sure, these too are supported by corps with commercial incentives. But they’re not the “fast and cheap” apps that make it or don’t.

                                  I think we get pretty good software from people who do it for the love of doing it and are paid to keep working on it. Maybe it’s not perfect, but there’s a fair amount of software that doesn’t make me hate it all.

                                  1. 5

                                    I’m with jfb. Most OSS software has poor UI, poor documentation, poor security, and so on. It’s crap. Even if better than proprietary average, that wouldn’t be saying much since so much of it is crap. Software being crap is the default. Another phrasing is Big Ball of Mud.

                                    1. 5

                                      Part of the problem is that software quality is an aesthetic judgement, a multi-dimensional one, and so people’s views of what makes software “good” are necessarily going to vary.

                                      1. 2

                                        Well, maybe, maybe not. It’s definitely subjective in terms of what calls people will make on it. There are objective attributes we can go by, though. Traditionally, those included things like size of modules, coupling, amount of interaction, which features were tested, how often they fail, how severe the failures are, ease of making a change, ease of training new users if there’s a UX, and so on. I think the specific values considered acceptable will vary considerably project to project for good reasons. We often assess the same stuff in each one, though. That suggests some things are more objective than others.

                                        1. 3

                                          I largely agree with this, yes.

                                    2. 1

                                      I think they’re almost uniformly terrible, too.

                                      1. 1

                                        Do you have any examples of software you like? I see your POV a bit and was wondering if you had anything you liked.

                                        1. 3

                                          I liked the original Interface Builder a lot. I was also a fan of the classic Mac OS for a while. I enjoy Squeak. djb’s software is uniformly good. I use Emacs and I love it but that love is tempered by a strong dislike for emacs lisp itself. But as an environment, I couldn’t possibly surrender it.

                                          I admire a lot more software than I like – OpenBSD, for instance.

                                          ETA: Postgres, of course, is a very good piece of software.

                                          ETA’: I really, really like http://reederapp.com, an iOS/OS X RSS reader.

                                1. 4

                                  Wait, are you telling me fractional scaling actually works in Gnome on Fedora?!

                                  (It doesn’t on Ubuntu, and it’s been keeping me in a state of stunned amazement that they’ve been shipping a desktop unusable on mainstream hardware for two consecutive releases now, and none of the reviewers have given it as much as a sideline mention. I guess I’m the only person in the world trying to run Ubuntu on an exceedingly rare Thinkpad X1 Carbon.)

                                  1. 2

                                    It doesn’t, really. It just renders everything one size larger than you need, and then uses in-GPU scaling to shrink it.

                                    The same approach that iOS and macOS took, and the complete opposite of the Windows, Qt, Android, and HTML 5 approach.

                                    1. 2

                                      It’s horses for courses; both approaches have their merits.

                                      1. 1

                                        Well, as long as it works, I’m fine :-)

                                        I don’t know how Unity does it (which is what I’m using now), but I suspect it’s essentially the same, and it does look crisp at any scale factor.

                                    1. 14

                                      Making the best of my paternity leave and starting a 2 months bike trip with the whole family.

                                      See you all in August :-)

                                      1. 5

                                        Paternity leave is great.

                                      1. 2

                                        One of these days I’ll have to get my Octane up and running again. I finally tracked down an SCA drive, but it looks like I’ll either have to 3D-print or whittle an artisanal drive sled out of deadfall.

                                        1. 2

                                          Even whittling probably produces tighter tolerances than what SGI was shipping BITD. I lost more sleep than I care to remember over badly machined drive sleds, to say nothing of the software, which was hellaciously fast, particularly at dumping core and rebooting; to be fair, most of those big boxen I used to herd would go tits up twice a day, so IRIX was under serious selective pressure to “reboot quickly”.

                                          Such, such were the joys.

                                        1. 2

                                          Great googly-moogly.

                                          1. 3

                                            Disclaimer: I am not a Twitter user.

                                              Is anybody surprised? Twitter is an advertising play, and allowing third-party access is, at this point, strictly a cost to be borne. I know that early adopters think of technologies as somehow “theirs”, but really, Twitter is the advertisers’ first, the shareholders’ second, and that’s it.

                                            1. 3

                                              There isn’t even a product here. This is just somebody talking about one day maybe building a thing. It is bad advertising for a space already full of hype and bunk.

                                              1. 1

                                                HTC is so desperate that they offered this clown some BIGNUM dollarbux to spin bafflegab. Not really news.

                                              1. 3

                                                  The author’s views on pragmatism (do one thing well, but maybe do these other related things too) are a good reflection of the “real” state of Unix. The Unix “philosophy” is not so much the guiding thought process that defined the creation of Unix as a collection of rationalisations to explain the things that were created, and every description of the philosophy has to silently ignore, or apologise for, the outliers. Yes, it’s impure that tar has compression support, but it also works differently from ar even though both are archive tools. Why does find not look like a Unix tool, why does X not look like a Unix tool (it was a port), why does ls have so many options for tabulating and sorting output? You hand-wave past them or ignore them. Here the answer is the one that explains the rest of Unix’s success: worse is better.

                                                1. 0

                                                  Yes. The whole “Unix philosophy” is post-hoc rationalization.

                                                1. 8

                                                  Per the MySQL docs: the CHECK clause is parsed but ignored by all storage engines.

                                                  Link is to 5.7 docs, but that’s still the case in 8.0. Ridiculous. At least MariaDB does support them now.

                                                    MySQL’s boolean is actually an alias for TINYINT(1). This is why query results show 0 or 1 instead of true or false. It’s also why you can set the value of an ostensibly boolean field to 2. Try it!

                                                  LOL.

                                                  With MySQL, you’re stuck with calling LAST_INSERT_ID() after you add a new record

                                                  I wonder if that thing respects transaction semantics…

                                                  I think there could still be reasons to pick MySQL – but I’m not sure they could be technical.

                                                  Yeah. Since forever, the biggest reason has been “my shared hosting offers MySQL and my PHP CMS depends on MySQL” :)

                                                  1. 2

                                                      MySQL’s boolean is actually an alias for TINYINT(1). This is why query results show 0 or 1 instead of true or false. It’s also why you can set the value of an ostensibly boolean field to 2. Try it!

                                                    What is it with databases I hate and BOOLEAN types? The contortions that Uncle Tom goes through to justify its lack in Oracle are hilarious, if you don’t have to deal with Oracle.

                                                    1. 0

                                                        “my shared hosting offers MySQL and my PHP CMS depends on MySQL” :)

                                                      “I started using mysql, got converted to mariadb with debian and am now used to the problems of my DBMS” ;) I find mariadb quite a good improvement. Sometimes I’m amazed how well it can keep up with my horrible sql queries.

                                                    1. 4

                                                        I mean, this is really about using free software on one’s phone – certainly a laudable enough goal, but I achieved de-Googling by not buying a Google phone in the first place?

                                                      1. 1

                                                        you would expect YouTube videos to be a slideshow (no JIT, little or no SIMD), and yet they play at a surprisingly good framerate

                                                        Video playback is all native, JS JIT doesn’t have anything to do with it o_0

                                                        1. 2

                                                          Yeah, and decoding H.264 or VP9 at YouTube data rates just isn’t a lot of work for a modern CPU, regardless of how poorly optimized the software stack is.

                                                          1. 4

                                                              Idk about HD, but my G4 Mac on an old version of OS X handled YouTube videos. Framerate wasn’t as good. Fan blowing loudly. You could tell it was struggling, but it did it with performance I’d tolerate.

                                                            POWER9 better be able to handle heavier loads than an $80 laptop from 2003. Haha.

                                                            1. 1

                                                              One would hope, yes.

                                                        1. 6

                                                            I had a JVM stack trace today that was 2.2MB in size. I mean, thank goodness the JVM doesn’t do tail-call elimination! Think about how confusing that would be if stack frames were elided!

                                                          ETA: It was 28,092 lines, but:

                                                              % tr ' ' '\n' < java-barf | sort -u | wc -l
                                                          

                                                          yields only 405 unique ‘words’. That’s … impressive? I guess?

                                                          1. 10

                                                            I enjoyed this, but it did make me wonder – what would a true low-level language designed today actually look like? I’ll hang up and take your answers over the air.

                                                            1. 5

                                                              If I’m reading the article’s premise properly, the author doesn’t even consider assembly language to be ‘low level’ on modern processors, because the implicit parallel execution performed by speculative execution is not fully exposed as controllable by the programmer. It’s an interesting take, but I don’t think anybody other than the author would use “low level” to mean what he does.

                                                              That said, if we were to make a language that met the author’s standards (let’s say “hardware-parallelism-transparent” rather than “low-level”), we’d probably be seeing something that vaguely resembled Erlang or Miranda in terms of how branching worked – i.e., a lot of guards around blocks of code rather than conditional jumps (or, rather, in this case, conditional jumps with their conditions inverted before small blocks of serial code).

                                                              People later in the thread are talking about threading & how there’s no reason threading couldn’t be put into the C standard, but threading doesn’t appear to be the kind of parallelism the author is concerned about exposing. (To be honest, I wonder if the author has a similarly spicy take on microcode, or on firmware, or on the programmability of interrupt controllers!)

                                                              He seems to be saying that, because we made it easy to ignore certain things that were going on in hardware (like real instructions being executed and then un-done), we were taken off-guard by the consequences when a hole was poked in the facade in the form of operations that couldn’t be hidden in that way. I don’t think that’s a controversial statement – indeed, I’m pretty sure that everybody who makes compatibility-based abstractions is aware that such abstractions become a problem when they fail.

                                                                He suggests that the fault lies in not graduating to an abstraction closer to the actual operation of the machine, which is probably fair, although chip architectures in general and x86 in particular are often seen as vast accumulations of ill-conceived kludges, and it is this very bug-compatibility that’s often credited with x86’s continued dominance, even in the face of other architectures that don’t pretend it’s 1977 when you lower the reset pin and don’t require trampolining off three chunks of arcane code to go from a 20-bit address bus and 16-bit words to 64-bit words.

                                                              People don’t usually go as far as to suggest that speculative execution should be exposed to system programmers as something to be directly manipulated, and mechanisms to prevent this are literally part of the hardware, but it’s an interesting idea to consider, in the same way that (despite their limitations) it’s interesting to consider what could be done with one of those PPC chips with an FPGA on the die.

                                                              The quick and easy answer to what people would do with such facilities is the same as with most forms of added flexibility: most people will shoot themselves in the foot, a few people would make amazing works of art, and then somebody would come along and impose standards that limit how big a hole in your foot you can shoot and it’d kill off the artworks.

                                                              1. 4

                                                                  Probably a parallel/concurrent-by-default language like ParaSail or Chapel, with a C-like design as a base to plug into an ecosystem designed for it. Macros for DSL’s, too, since they’re popular for mapping stuff to specific hardware accelerators. I already had a Scheme + C project in mind for sequential code. When brainstorming on the parallel part, the idea was mapping stuff from the above languages onto C. Probably start with something simpler like Cilk to get my feet wet, though. That was the concept.

                                                                1. 1

                                                                  Or maybe it would look like Rust.

                                                                  1. 8

                                                                      The article’s point is that things are parallel by default at multiple levels: there are different memories with different performance based on locality, orderings with consistency models, and so on. The parallel languages assume most of that, given they were originally designed for NUMA’s and clusters. They force you to address it, with sequential stuff being the exception. They also use compilers and runtimes to schedule that, much like the compiler + CPU models.

                                                                      Looking at Rust, it seems to believe in the same imaginary model C does: sequential, ordered, and so on. It certainly has some libraries or features that help with parallelism and concurrency. Yet, it looks sequential at the core to me. Makes sense as a C replacement.

                                                                    1. 3

                                                                      But, Rust is the only new low-level language I’m aware of, so empirically: new low-level languages look like Rust.

                                                                      Looking at Rust, it seems like it believes in the imaginary model C does that’s sequential, ordered, and so on.

                                                                        To be fair, the processor puts a lot of effort into letting you imagine it. Maybe the reason we don’t have languages that look more like the underlying chip is that it’s very difficult to reason about.

                                                                        Talking out of my domain here, but the out-of-order stuff and all that the processor gives you is pretty granular, not at the whole-task level, so maybe we are doing the right thing by imagining sequential execution, because that’s what we do at the level we think at. Or maybe we should just use Haskell, where order of execution doesn’t matter.

                                                                      1. 3

                                                                        How does rust qualify as “low level”?

                                                                        1. 1

                                                                          From my understanding, being low-level is one of the goals of the project? Whatever “low-level” means. It’s certainly trying to compete where one would use C and C++.

                                                                          1. 3

                                                                            But does rust meet the criteria for low level that C does not (per the link)?

                                                                            1. -1

                                                                              The Rust Wikipedia page claims that Rust is a concurrent language, which seems relevant to the blog’s point. I don’t know if Rust is a concurrent language, though.

                                                                              1. 3

                                                                                I think you’re probably putting too much faith in Wikipedia. With that said, I must confess, I have no insight into the decision procedure that chooses the terms to describe Rust in that infobox.

                                                                                One possible explanation is that Rust used to bake lightweight threads into its runtime, not unlike Go. Go is also described as being concurrent on Wikipedia. To that end, the terms are at least consistent, given where Rust was somewhere around 4 years ago. Is it possible that the infobox simply hasn’t been updated? Or perhaps there is a turf war? Or perhaps there are more meanings to what “concurrent” actually signifies? Does having channels in the standard library mean Rust is “concurrent”? I dunno.

                                                                                Rust has stuff in the type system to eliminate data races in safe code. Separate from that, there are some conveniences that help avoid deadlock (e.g., you typically never explicitly unlock a mutex). But concurrency is definitely not built into the language like it is for Go.
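
                                                                                  A minimal sketch of both guarantees (a toy example of my own, not anything from Rust’s docs): the counter only crosses threads wrapped in an `Arc<Mutex<…>>`, and each lock is released when its guard goes out of scope rather than by an explicit unlock call:

                                                                                  ```rust
                                                                                  use std::sync::{Arc, Mutex};
                                                                                  use std::thread;

                                                                                  fn main() {
                                                                                      let counter = Arc::new(Mutex::new(0u32));

                                                                                      let handles: Vec<_> = (0..8)
                                                                                          .map(|_| {
                                                                                              let counter = Arc::clone(&counter);
                                                                                              thread::spawn(move || {
                                                                                                  let mut n = counter.lock().unwrap();
                                                                                                  *n += 1;
                                                                                              }) // MutexGuard drops here: the mutex is never explicitly unlocked
                                                                                          })
                                                                                          .collect();

                                                                                      for h in handles {
                                                                                          h.join().unwrap();
                                                                                      }

                                                                                      // Handing the spawned closures a bare `&mut u32` instead would be a
                                                                                      // compile error, which is the "no data races in safe code" guarantee.
                                                                                      assert_eq!(*counter.lock().unwrap(), 8);
                                                                                      println!("final count: {}", *counter.lock().unwrap());
                                                                                  }
                                                                                  ```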

                                                                                (I make no comment on Rust’s relevance to the other comments in this thread, mostly because I don’t give a poop. This navel gazing about categorization is a huge unproductive waste of time from my perspective. ’round and ’round we go.)

                                                                                1. 1

                                                                                  Pretty sure having a type system designed to prevent data races makes Rust count as “concurrent” for many (including me).

                                                                                  1. 3

                                                                                    The interesting bit is that the type system itself wasn’t designed for it. The elimination of data races fell out of the ownership/aliasing model when coupled with the Send and Sync traits.

                                                                                    The nomicon has some words on the topic, but that section gets into the weeds pretty quickly.
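
                                                                                      A tiny illustration of how it falls out (the example is mine; the behavior is the standard library’s): `thread::spawn` bounds its closure by `Send`, and `Send`/`Sync` are auto traits derived from a type’s contents, so `Arc` crosses threads while `Rc` is rejected at compile time:

                                                                                      ```rust
                                                                                      use std::sync::Arc;
                                                                                      use std::thread;

                                                                                      fn main() {
                                                                                          // Arc<Vec<i32>> is Send (atomic refcount), so the closure satisfies
                                                                                          // thread::spawn's `F: Send` bound and this compiles and runs.
                                                                                          let shared = Arc::new(vec![1, 2, 3]);
                                                                                          let handle = thread::spawn(move || shared.iter().sum::<i32>());
                                                                                          let sum = handle.join().unwrap();
                                                                                          assert_eq!(sum, 6);
                                                                                          println!("sum across threads: {}", sum);

                                                                                          // The Rc version is rejected before it could ever race -- no special
                                                                                          // concurrency rule fires, just the auto-trait machinery:
                                                                                          //
                                                                                          //     let shared = std::rc::Rc::new(vec![1, 2, 3]);
                                                                                          //     thread::spawn(move || shared.iter().sum::<i32>());
                                                                                          //     // error[E0277]: `Rc<Vec<i32>>` cannot be sent between threads safely
                                                                                      }
                                                                                      ```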

                                                                                    1. 1

                                                                                      I see where you are going with that. The traditional use of it was expressing concepts in a concurrent way. It had to make that easier. The type system eliminates some problems. It’s a building block one can use for safe concurrency with mutable state. It doesn’t by itself let you express things in a concurrent fashion easily. So, they built concurrency frameworks on top of it. A version of Rust where the language worked that way by default would be a concurrent language.

                                                                                        Right now, it looks to be a sequential, multi-paradigm language with a type system that makes building concurrency easier. Then, the concurrency frameworks themselves built on top of it may be thought of as similar to DSL’s that are concurrent. With that mental model, you’re still using two languages: a concurrent one along with a non-concurrent, base language. This is actually common in high assurance, where they simultaneously wrote formal specs in something sequential like Z and CSP for concurrent stuff. The concurrent-by-default languages are the rare thing, with sequential and concurrent usually treated separately in most tools.

                                                                          2. 2

                                                                            If exploring such models, check out HLL-to-microcode compilers and No Instruction Set Computing (NISC).

                                                                          3. 1

                                                                            Interestingly, the Rust Wikipedia page makes a big deal about it trying to be a “concurrent” language. Apparently it’s not delivering, if that is the major counter you gave.

                                                                            1. 2

                                                                              Occam is an example of a concurrency-oriented language. The core of it is a concurrency model. The Rust language has a design meant to make building safe concurrency easier. Those frameworks or whatever might be concurrency-oriented. That’s why they’re advertised as such. Underneath, they’re probably still leveraging a more sequential model in base language.

                                                                              Whereas, in concurrency- or parallelism-first languages, it’s usually the other way around or sequential is a bit more work. Likewise, the HDL’s the CPU’s are designed with appear to be concurrency-first with them beating the designs into orderly, sequential CPU’s.

                                                                              So, I’m not saying Rust isn’t good for concurrency or can’t emulate that well. Just that it might not be that at core, by default, and the easiest style to use. Some languages are. Does that make more sense?

                                                                              1. 0

                                                                                Yes, I know all that; my point was that the Wikipedia page explicitly states Rust is a concurrent language, which, if true, means it fits into the idea of this post.

                                                                          4. 3

                                                                            Does Rust do much to address the non-sequential nature of modern high-performance CPU architectures, though? I think of it as a modern C – certainly cleaned up, certainly informed by the last 50 years of industry and academic work on PLT, but not so much trying to provide an abstract machine better matched to the capabilities of today’s hardware. Am I wrong?

                                                                            1. 3

                                                                              By the definitions in the article, Rust is not a low level language, because it does not explicitly force the programmer to schedule instructions and rename registers.

                                                                              (By the definitions in that article, assembly is also not a low level language.)

                                                                              1. 1

                                                                                Ownership semantics make Rust higher-level than C.

                                                                                1. 3

                                                                                  I disagree:

                                                                                  1. Parallelism would make whatever language higher-level than C too but the point seems to be that a low-level language should have it.
                                                                                  2. Even if true, ownership is purely a compile-time construct that completely disappears at run-time; there is no cost, so it does not get in the way of being a low-level language.
                                                                                  1. 2

                                                                                    Parallelism would make whatever language higher-level than C too but the point seems to be that a low-level language should have it.

                                                                                    This premise is false: Parallelism which mirrors the parallelism in the hardware would make a language lower-level, as it would better mirror the underlying system.

                                                                                    Even if true, ownership is purely a compile-time construct that completely disappears at run-time, so there is no cost, so it does not get in the way of being a low-level language.

                                                                                    You misunderstand what makes a language low-level. “Zero-cost” abstractions move a language to a higher level, as they take the programmer away from the hardware.

                                                                            2. 2

                                                                              I came across the X Sharp high-level assembler recently; I don’t know if it’s low-level enough for you, but it piqued my interest.

                                                                              1. 2

                                                                                There’s no point in a true low-level language, because we can’t access the hardware at that level. The problem (in this case) isn’t C per se, but the complexity within modern chips that’s required to make them pretend to be a gussied-up in-order CPU circa 1993.

                                                                              1. 1
                                                                                1. 17

                                                                                  Gimp 2.10 is the release that stops dumping stuff into $HOME, so even if you don’t really care about the new features, it’s a worthwhile upgrade for this reason alone!

                                                                                  1. 4

                                                                                    Why is that so important to you?

                                                                                    1. 18

                                                                                      It’s very impolite for programs to spam things into the $HOME directory without explicit permission.

                                                                                      1. 3

                                                                                        Yes, it’s extremely rude.

                                                                                        1. 0

                                                                                          Can’t decide whether this is sarcasm, but 😆

                                                                                        2. 2

                                                                                          I want to be able to do backups or blow away the cache without inspecting each and every .folder individually.

                                                                                        3. 4

                                                                                          Do you have a link to further information? “Ctrl+F HOME” in the release notes didn’t turn up anything relevant.

                                                                                          1. 2

                                                                                            It is mentioned here and here. Hope this helps!