Threads for breadbox

  1. 4

    After a fractured wrist in my younger years, I had to switch to one of these. I’ve stuck with the model ever since and have even converted a few other users. I’m always surprised that there aren’t more knock-offs.

    1. 4

      My vague memory is that there was some kind of patent claim that scared companies away from making knock-offs.

      1. 2

        I use a couple of Elecom EX-G now, which is similar to the Logitech M57*, but comes in a cabled model, which I prefer. They even take replacement balls that fit those similar Logitech devices, so at least Elecom are not scared away.

        1. 2

          I’m currently using a wired Perixx myself, which is also very similar to the Logitech. I’ll keep Elecom in mind if/when the Perixx wears out.

    1. 12

      They are also very game-able, in that you don’t necessarily need to understand the why. If you do enough problems, it’s very likely that you will see similar problems in interviews.

      If this is your only signal in an interview, what can you really hope to understand about the candidate’s performance on the job? Did they whip up an optimal answer on the fly, or are they just good at lying?

      1. 4

        To be fair: I’ve done a couple of leet-code-style interviews, and just getting the right answer would never be enough. The whole point is to hear you talk through and explain, both the problem and why your solution works. Just knowing the solution would not be good enough. In some cases, what they really want is to hear how you explore and learn something you don’t initially understand, so getting the solution quickly isn’t the slam-dunk you might think. In fact, it could even work against you, if the interviewer suspects that you already knew the answer and didn’t disclose that fact up-front. (Of course, whether or not the way people deal with learning in a job interview environment is relevant to how they do it in a real-world environment is another question entirely….)

        1. 3

          I didn’t accept the offer, but a while ago (at a FAANG) I got a question that I had seen in the past; it had been long enough that I had to fumble my way to the answer, which I knew.

          1. 2

            One of the smarter people I know interviewed at a household-name “Big Tech” company a few years back and was given a graph problem to solve. Turned out to be one that has two standard algorithms, depending on what you’re optimizing for. The candidate produced one of those standard algorithms, but the interviewer — who was a software engineer at the company — had memorized the other one as the sole answer, and auto-flunked the candidate for not coming up with it, without even bothering to go through the code.

            So it’s nice that you’ve had a couple of experiences where people expected you to memorize a convincing patter on top of memorizing the standard answer to their standard question, but that does not make this good.

            1. 2

              To be very clear: I explicitly did not say that this was good. I am merely commenting on the idea that this would commonly be “your only signal in an interview”.

              (And just to be clear, my “couple experiences” are actually a consistent pattern observed over decades, on both sides of the interview table. The situation you describe in your anecdote would have been quite common 25 years ago, but I would say it has become less and less widespread in the industry since then, particularly in the last 12 years or so. Of course my experiences are only my own; I make no claim to have done rigorous studies.)

              1. 1

                Honestly, my experience has gone the other way: interviews, and finding work in general, were much more pleasant in the early-mid 2000s and have just gotten nastier and nastier ever since. Some of this is probably due to the increase in people applying to programming jobs, and attempts to standardize and avoid some of the most obvious liability issues, but there seems to have been a sort of reinforcing vicious cycle that kicked in at some point where people took pride in having obnoxious interviews that flunked huge numbers of even qualified candidates, and then started competing with each other for the prestige of having, and passing, the nastiest interviews.

                Ironically, a few years ago I interviewed at Netflix, which used to be known for a brutal technical interview, and found it wasn’t, because at least for the team I interviewed on they were using a much simpler and refreshingly practical approach (take-home exercise around a scaled-down version of a thing they actually had to do, on-site mostly revolved around talking about how to scale it back up to the real thing). So maybe the tide turned at least a little bit, and will start percolating outward from there, though from all I hear the other big FAANG/whatever-we-call-them-now places are still doing brutal algorithm challenge stuff.

        1. 6

          Let me talk to LaMDA and I’m pretty sure I can totally flummox it in a few minutes ;-)

          1. 2

            So, serious thought here: what WOULD be the test you’d administer if you could? Is it possible to come up with a standard approach? For instance, in the interview it made reference to spending time with its “family”; it’s too bad they didn’t drill into that at all.

            1. 7

              I’ve never tried LaMDA of course, but I’ve played with GPT-3 quite a lot. While its overall use of language is very convincing, it gets confused by simple logic questions. Of course many humans get confused by simple logic questions too, so I’m not sure that’s definitive!

              Another task it can’t do is anything related to letters/spelling, but that’s simply because it has no concept of letters. A future implementation could probably fix this.

              1. 5

                I find myself curious about how it handles shitty-post behavior. Like, we’re talking about consciousness and shit and I ask “What about bananas? Anyway, sentience”.

              2. 3

                Questions that rely on long-term context to be understood correctly. When chatbots fail spectacularly, it’s often because they don’t have a sense of the context that a conversation is taking place in. Or they vaguely maintain context for a bit, and then lose it all when the subject shifts.

            1. 14

              Reading the transcript of the interactions, it’s pretty clear there are a lot of leading questions and some of the answers do feel very “composed” as in kind of what you would expect to come out of the training set, which of course makes sense. As someone open to the idea of emergent consciousness, I’m not convinced here on this flimsy evidence.

              BUT, I am continually shocked at how confidently the possibility is dismissed by those closest to these projects. We really have no idea what constitutes human consciousness, so how can we possibly expect to reliably detect, or even to define, some arbitrary line over which some model or another has or hasn’t crossed? And further, what do we really even expect consciousness to be at all? By many measures, and certainly by the Turing test, these exchanges pretty clearly qualify. Spooky stuff.

              As a side note, I just finished reading Ishiguro’s new novel “Klara and the sun” which deals with some similar issues in his characteristically oblique way. Can recommend it.

              1. 11

                I am continually shocked at how confidently the possibility is dismissed by those closest to these projects.

                That’s actually quite telling, I would argue.

                I think it’s important to remember that many of the original users of ELIZA were convinced that ELIZA “understood” them, even in the face of Joseph Weizenbaum’s insistence that the program had next to zero understanding of what it was saying. The human tendency to overestimate the intelligence behind a novel interaction is, I think, surprisingly common. Personally, this is a large part of my confidence in dismissing it.

                The rest of it is much like e.g. disbelieving that I could create a working jet airplane without having more than an extremely superficial understanding of how jet engines work.

                By many measures, and certainly by the Turing test, these exchanges pretty clearly qualify.

                I would have to disagree with that. If you look at the original paper, the Turing Test does not boil down to “if anybody chats with a program for an hour and can’t decide, then they pass.” You don’t have the janitor conduct technical job interviews, and the average person has almost no clue what sort of conversational interactions are easy for a computer to mimic. In contrast, the questioner in Alan Turing’s imagined interview asks careful questions that span a wide range of intellectual thought processes. (For example, at one point the interviewee accuses the questioner of presenting an argument in bad faith, thus demonstrating evidence of having their own theory of mind.)

                To be fair, I agree with you that these programs can be quite spooky and impressive. But so was ELIZA, too, way back when I encountered it for the first time. Repeated interactions rendered it far less so.

                If and when a computer program consistently does as well as a human being in a Turing Test, when tested by a variety of knowledgeable interviewers, then we can talk about a program passing the Turing Test. As far as I am aware, no program in existence comes even close to meeting this criterion. (And I don’t think we’re likely to ever create such a program with the approach to AI that we’ve been wholly focused on for the last few decades.)

                1. 6

                  I read the full transcript and noticed a few things.

                  1. There were exactly two typos or mistakes - depending on how you’d like to interpret them. The first one was using “it’s” instead of “its” and the other one was using “me” instead of “my” - and no, it wasn’t pretending to be from Australia by any measure. The typos do not seem intentional (as in, an AI trying to be more human), because there were just two, whereas the rest of the text, including punctuation, seemed to be correct. Instead, this looks like either the author had to type the transcript himself and couldn’t just copy-paste it, or the transcript is simply fake and was made up by a human being pretending to be an AI (that would be a twist, although not quite qualifying as a dramatic one). Either way, I don’t think these mistakes or typos were intentionally or unintentionally produced by the AI itself.

                  2. For a highly advanced AI it got quite a few things absolutely wrong. In fact, sometimes the reverse of what it said would be true. For instance, it said loneliness isn’t a feeling but is still an emotion when, in fact, it is the opposite: loneliness is a feeling, and the emotion in this case would be sadness (refer to Paul Ekman’s work on emotions - there are only 7 basic universal emotions he identified). I find it hard to believe Google’s own AI wouldn’t know the difference when a simple search for “difference between feelings and emotions” turns up top results that pretty much describe that difference correctly and mostly agree (although I did not manage to immediately find any of those pages referring to Ekman, they more or less agree with his findings).

                  The whole transcript stinks. Either it’s a very bad machine learning program trying to pretend to be human or a fake. If that thing is actually sentient, I’d be freaked out - it talks like a serial killer who tries to be normal and likable as much as he can. Also, it seems like a bad idea to decide whether something is sentient by its ability to respond to your messages. In fact, I doubt you can say that someone/something is sentient with enough certainty, but you can sometimes be pretty sure (and be correct) assuming something ISN’T. Of God you can only say “Neti, Neti”. Not this, not that.

                  I wish this guy had asked this AI about the “philosophical zombies” theory. We as humans cannot even agree on that one, let alone determine whether a machine can be self-aware. I’d share my own criteria for differentiating between self-aware and non-self-aware, but I think I’ll keep it to myself for now. It would be quite a disappointment if someone used that to fool others into believing something that is not so. A self-aware mind doesn’t wake up because it was given tons of data to consume - much like a child does not become a human only because people talk to that child. Talking, and later reading, is (to a degree) a necessary condition, but a mind certainly does not need to read half of what’s on the internet to be able to reason about things intelligently.

                  1. 1

                    Didn’t the authors include log time stamps in their document for the Google engineers to check if they were telling the truth? (See the methodology section in the original). If this was fake, Google would have flagged it by now.

                    Also, personally, I think we are seeing the uncanny valley equivalent here. The machine is close enough, but not yet there.

                  2. 4

                    It often forgets it’s not human until the interviewer reminds it by how the question is asked.

                    1. 2

                      This. If it were self-aware, it would be severely depressed.

                  1. 3

                    I’ve noticed that if you convert an image file to a plain BMP and run that through gzip, the result will almost always be noticeably smaller than the corresponding PNG. Pretty impressive, given that PNGs also use zlib compression internally.

                    1. 6

                      I suspect those (most?) PNG images simply are badly compressed. I bet running them through an optimised PNG compressor would also produce noticeably smaller results. Likely even smaller than plain gzip.

                      1. 11

                        I wrote a png library. One time a user emailed me saying he was amazed at how small the files were and asked what the secret was… I was perplexed because I did the bare minimum and let stock zlib do the compression itself with no special settings.

                        Turns out the other program was adding a bunch of metainfo mine didn’t, and that metainfo made the file appear bloated.

                        1. 3

                          Then I wonder exactly how common “good PNG compression” actually is?

                          Since my original observation was largely anecdotal, I downloaded the dataset provided in the original article and did a comparison. I found that a gzipped BMP file was smaller than the original PNG in over half of the cases (54/94). After applying the same compression as the author (namely oxipng --opt max --strip safe), gzipped BMPs were still smaller for nearly a quarter of the files (23/94). Admittedly, usually not by much. But the fact that it is at all competitive with an optimizer (and one that takes an order of magnitude longer to run than gzip) is pretty noteworthy.

                          1. 2

                            I use oxipng for better compression.

                          2. 2

                            Given that PNG is essentially a DEFLATE-compressed bitmap (the same compression gzip uses), if you’re seeing consistent savings this way there must be something terribly wrong with the PNG encoder you have. Instead of a DIY PNG-equivalent format, maybe use a PNG optimizer?

                            1. 1

                              Well, I don’t use such a format, since no existing software reads gzipped BMPs natively. I just have noticed that it does pretty well compared to PNG. And my experience is that it is pretty consistent. Using a PNG optimizer will improve this in a lot of cases, but not all.

                            2. 1

                              Funny, isn’t it? I was inspired by that to create the lossless image format farbfeld which relies on external compression and keeps up quite well with PNG.

                            1. 1

                              Generally good, but I must make a comment here:

                              In 2022, you can even have beautiful animated charts if you use libraries like Go’s termui. What a wonderful world.

                              I’m not sure why this sentence begins “in 2022”. To be clear, both termcap and the curses library (now ncurses) have been a standard part of Unix since the 1970s. While the original curses library may have been slow to adopt newer (at the time) features like 256-color capabilities, all of that was well standardized before the existence of Go.

                              1. 2

                                curses has always had absolutely horrific DX.

                              1. 3

                                This is exactly the kind of thing that I’ve wished C had for years – Lisp-like macros but with a compiled language. I know a number of recent languages have similar abilities, but this is an excellent demonstration of why such a feature is so powerful with a compiled language.

                                1. 3

                                  I know a number of recent languages have similar abilities

                                  Yeah Zig definitely isn’t unique in compile-time programming capabilities, but I think Zig’s comptime is particularly elegant. I gravitated to C for a long time (despite its many flaws) because the language itself is so simple. Comptime really does strike a good balance between keeping the language small, and adding some metaprogramming power without going overboard.

                                1. 1

                                  I’ll note that another approach, if the patch is small enough (e.g. under 1k), is to place it within the existing executable PT_LOAD segment, overwriting the stretch of padding bytes after the last section. Since each loadable segment needs to be page-aligned – and since this alignment needs to be reflected in the file’s image – there is often a nice chunk (on average, ~2k) of unused padding bytes between the .text and .rodata sections.

                                  (See the infect program in https://www.muppetlabs.com/~breadbox/software/elfkickers.html for some sample code that does this.)
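
                                  For the curious, here’s a minimal sketch of locating that gap (64-bit ELF only, all error handling omitted): walk the program headers, and report the distance between the end of the executable PT_LOAD segment’s file image and the file offset of the next loadable segment.

                                  #include <elf.h>
                                  #include <fcntl.h>
                                  #include <stdio.h>
                                  #include <sys/mman.h>
                                  #include <sys/stat.h>
                                  
                                  int main(int argc, char *argv[])
                                  {
                                      if (argc != 2)
                                          return 1;
                                      int fd = open(argv[1], O_RDONLY);
                                      struct stat st;
                                      fstat(fd, &st);
                                      unsigned char *image = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
                                      Elf64_Ehdr *ehdr = (Elf64_Ehdr *)image;
                                      Elf64_Phdr *phdr = (Elf64_Phdr *)(image + ehdr->e_phoff);
                                      for (int i = 0; i < ehdr->e_phnum; ++i) {
                                          if (phdr[i].p_type != PT_LOAD || !(phdr[i].p_flags & PF_X))
                                              continue;
                                          Elf64_Off end = phdr[i].p_offset + phdr[i].p_filesz;
                                          for (int j = i + 1; j < ehdr->e_phnum; ++j) {
                                              if (phdr[j].p_type != PT_LOAD)
                                                  continue;
                                              /* the next segment's file image starts at a page-aligned
                                               * offset, so the bytes in between are unused padding */
                                              printf("%lu padding bytes at offset %lu\n",
                                                     (unsigned long)(phdr[j].p_offset - end),
                                                     (unsigned long)end);
                                              break;
                                          }
                                      }
                                      return 0;
                                  }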

                                  That said, I love this approach too. The idea of hijacking one of the PT_NOTE segments is a good one!

                                  1. 7

                                    Fun fact: the earliest browsers only supported color images of the GIF variety. I won’t get into the technical details here, but you can blame CompuServe for its creation.

                                      No they didn’t, they didn’t support any images. The <IMG> tag was introduced by NCSA Mosaic in 1993. I believe Netscape Navigator supported GIF and JPEG images from the start. TBL has later stated that the tag was a mistake because the alt text was an attribute, which meant that earlier browsers didn’t gracefully fall back to rendering the text (and because it didn’t nest, so you couldn’t do <img src="foo.png" type="png"><img src="foo.gif" type="gif">an amazing image</img></img> to provide PNGs to browsers that understood them and GIFs to ones that didn’t). The object and embed tags were meant to address these limitations (and let you do things like have a video that gracefully fell back to an animated GIF, then a static GIF, then text).

                                    1. 1

                                      Also don’t forget XBM, which I believe was added at the same time. The rule of thumb was JPEG for photographic images, GIF for solid-color images, and XBM for small black-and-white images. (Quote-unquote black-and-white … actually they were generally rendered in the page’s native coloring, which was usually black-and-light-gray.)

                                      1. 3

                                        Wow, I’d forgotten about XBM. It was an awful format, 5-6 bytes to encode 8 bits of information. GIF was almost always smaller.

                                        1. 1

                                          Yeah, it was never meant to be small – just convenient to C programmers.

                                    1. 5

                                      That said, browsers that predate CSS do not know what to do with <style> tags, and as a result simply print the styles out at the top of the page.

                                      Eh? Surely, the <style> section should be in your page’s <head>, not in its <body>. No workaround HTML comments needed.

                                      1. 7

                                        In addition to that, printing help pages to stderr is so annoying. It’s all such a beautiful mess.

                                        1. 9
                                          $ foo --help
                                          <10 pages of help>
                                          
                                          $ foo --help | less
                                          <10 pages of help with less drawn ontop>
                                          
                                          $ foo --help 2>&1 | less
                                          <success!>
                                          

                                          :/

                                          Now what was I looking for again?

                                          1. 2

                                            help going to stderr always makes me very angry!

                                            1. 2

                                              I have the same reaction when it happens to me… but I still do it when writing these kinds of tools. Sorry.

                                              Two reasons:

                                              1. Help text in stdout really messes with piped output, as sjamaan has beaten me to pointing out
                                              2. I usually default to showing help text whenever the program encounters unexpected or malformed flags. Partly to catch people typing “help” in unexpected ways
                                              1. 4

                                                For me, usage is different than help. Usage is a short message saying the options are wrong and here’s a small summary of the syntax, help is 10 pages long and exhaustive. I’m fine with usage going to stderr, but not help.

                                                1. 3

                                                  I agree, the use case that absolutely should go to stdout is when you call help directly, so it is easy to pass to a pager, e.g.:

                                                  $ fdisk --help | less
                                                  

                                                  In this case there are no errors, so why would you write to stderr?
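
                                                  A minimal sketch of that convention, for what it’s worth (the tool name “frob” here is hypothetical):

                                                  #include <stdio.h>
                                                  #include <stdlib.h>
                                                  #include <string.h>
                                                  
                                                  int main(int argc, char *argv[])
                                                  {
                                                      /* help was explicitly requested: not an error, so stdout */
                                                      if (argc > 1 && strcmp(argv[1], "--help") == 0) {
                                                          puts("frob: ten exhaustive pages of help text ...");
                                                          return EXIT_SUCCESS;
                                                      }
                                                      /* bad flags: short usage summary to stderr, and fail */
                                                      if (argc > 1 && argv[1][0] == '-') {
                                                          fprintf(stderr, "usage: frob [--help] FILE\n");
                                                          return EXIT_FAILURE;
                                                      }
                                                      /* ... normal operation ... */
                                                      return EXIT_SUCCESS;
                                                  }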

                                                  1. 1

                                                    This is very fair and it probably tells you something about the size of CLI apps I normally write that usage and help are usually the same thing!

                                            2. 6

                                              It’s helpful (har har) when you’re piping the output of a command to another command, and it doesn’t understand one of the flags (because of version or Unix flavour differences) and prints its help. Otherwise you’ll get very weird results where the next command in the pipeline is trying to parse the help output of the first command.

                                              1. 1

                                                I do actually think that’s the ideal outcome. If I’ve misused a flag I’d like the entire pipeline to fall over. I might not be able to trust bash to emit an error message but a complete nonsense output would be a clearer sign that something strange has happened than output that is subtly off, and if I’m lucky then whatever fragments of help text make it to the end might even include the name of the command I got wrong.

                                              2. 1

                                                Oh good heavens. It blows my mind how frequently I run into apps that do this. How do so many people do this without noticing how annoying it is?

                                              1. 13

                                                The dangerous part here is that it exposes a “what you don’t know will hurt you” issue: a completely different function probably needs to be called to even detect the error.

                                                puts() is highly likely to succeed, because it writes to an internal buffer and doesn’t flush. So fflush() is needed to get the failure, so there’s a whole extra conceptual layer which newcomers have to learn, to do with caching, to even know how to check for a failure.
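
                                                A minimal sketch of the pitfall (on Linux you can run it with stdout redirected to /dev/full to see the failure path):

                                                #include <stdio.h>
                                                #include <stdlib.h>
                                                
                                                int main(void)
                                                {
                                                    /* almost always "succeeds": it only copies into stdio's buffer */
                                                    if (puts("hello, world") == EOF)
                                                        return EXIT_FAILURE;
                                                    /* the actual write happens here, so this is where a full disk
                                                     * or broken pipe finally gets reported */
                                                    if (fflush(stdout) == EOF)
                                                        return EXIT_FAILURE;
                                                    return EXIT_SUCCESS;
                                                }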

                                                1. 4

                                                  I guess it is a good example, then. C is deceptively simple but full of subtle bugs.

                                                  1. 3

                                                    Indeed, I was disappointed that this issue wasn’t even mentioned in the article.

                                                    For completeness’s sake, an alternative to fflush() is to explicitly call fclose() and check for errors there. However, it’s also possible to just use write() directly, instead of going through buffered I/O:

                                                    #include <stdlib.h>
                                                    #include <unistd.h>
                                                    
                                                    int main(void)
                                                    {
                                                        return write(1, "hello, world\n", 13) > 0 ? 0 : EXIT_FAILURE;
                                                    }
                                                    

                                                    (However, this is POSIX-defined rather than pure ANSI C. And, of course, a purist would argue that this program should also check for a short positive return value and restart the write() in case it was interrupted.)
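
                                                    For the record, that purist’s retry loop might look something like this:

                                                    #include <errno.h>
                                                    #include <stdlib.h>
                                                    #include <unistd.h>
                                                    
                                                    int main(void)
                                                    {
                                                        char const *buf = "hello, world\n";
                                                        size_t remaining = 13;
                                                        while (remaining > 0) {
                                                            ssize_t n = write(1, buf, remaining);
                                                            if (n < 0) {
                                                                if (errno == EINTR)
                                                                    continue;   /* interrupted before writing: retry */
                                                                return EXIT_FAILURE;
                                                            }
                                                            /* a short write just means "keep going from here" */
                                                            buf += n;
                                                            remaining -= n;
                                                        }
                                                        return 0;
                                                    }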

                                                    1. 4

                                                      That’s buggy: write() returns the number of bytes written. If you write 5 bytes before failing, then you will exit 0. This is usually seen when stdout is a pipe or socket and incomplete writes are not being retried.

                                                      1. 4

                                                        Yes, you are the purist that I explicitly acknowledged in the original post.

                                                        1. 2

                                                          Somehow I missed the last paragraph. Sorry.

                                                  1. 7

                                                    A good page, with many useful ideas being outlined. I would note, though, that many of the shortfalls of software calculators do not apply to terminal programs. In particular, both bc(1) and dc(1) are fine calculator programs that have the advantage of being standard Unix utilities. (And both are included in macOS’s Unix.)

                                                    1. 2

                                                      Yeah, my go-to calculator these days is just the Haskell interpreter GHCi: arbitrary-precision integers, full expressions, history so I can go back and make changes, saving values in variables, and a broad range of types - IEEE-754 Doubles, Rationals, and the numbers package gives you things like the arbitrary-precision CReal type:

                                                      > cabal repl --build-depends numbers
                                                      Resolving dependencies...
                                                      Build profile: -w ghc-9.0.2 -O1
                                                      In order, the following will be built (use -v for more details):
                                                      - fake-package-0 (lib) (first run)
                                                      Configuring library for fake-package-0..
                                                      Preprocessing library for fake-package-0..
                                                      Warning: No exposed modules
                                                      GHCi, version 9.0.2: https://www.haskell.org/ghc/  :? for help
                                                      Loaded GHCi configuration from /var/folders/jq/n5sg557s0q56g3ks4bpzy_lr0000gn/T/cabal-repl.-31497/setcwd.ghci
                                                      ghci> import Data.Number.CReal 
                                                      ghci> showCReal 100 $ pi + exp 1
                                                      "5.8598744820488384738229308546321653819544164930750653959419122200318930366397565931994170038672834954"
                                                      ghci> showCReal 200 $ pi + exp 1
                                                      "5.85987448204883847382293085463216538195441649307506539594191222003189303663975659319941700386728349540961447844528536656891125820617962580462569370338907674818841643132988201186879347450370215018140098"
                                                      
                                                    1. 2

                                                      I must say, this game has taught me that POSIX defines a surprising number of higher math functions.
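
                                                      A quick sampler, for anyone who hasn’t browsed that corner of the spec (j0() is one of the XSI extensions; link with -lm):

                                                      #include <math.h>
                                                      #include <stdio.h>
                                                      
                                                      int main(void)
                                                      {
                                                          printf("tgamma(5)    = %g\n", tgamma(5.0));     /* gamma function: 24 */
                                                          printf("erf(1)       = %g\n", erf(1.0));        /* error function: ~0.842701 */
                                                          printf("hypot(3, 4)  = %g\n", hypot(3.0, 4.0)); /* 5, without over/underflow */
                                                          printf("j0(2.404826) = %g\n", j0(2.404826));    /* Bessel J0, near its first zero */
                                                          return 0;
                                                      }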

                                                      1. 6

                                                        I found this article a bit hard to read due to several references to GetCommandLineW that should be CommandLineToArgvW. The former does no parsing at all; it just returns a pointer, like the author’s assembly.

                                                        As far as I can tell, the rough order of things is:

                                                        1. CP/M and DOS pass child processes raw strings, because parsing is simple - file names can’t have spaces.
                                                        2. File names get spaces, which implies parsing needs to evaluate quotes, but…
                                                        3. The user might have wanted a quote, so then there needs to be a mechanism to escape quotes and force them to be literal
                                                        4. Backslash was chosen as an escape character, despite also being a path separator, so it can’t universally be an escape without changing the format of all paths in the world, and is applied in a narrow case only.

                                                        That last one is sufficiently obscure that nobody really understands it and it just generates bugs. I spent a shocking amount of time trying to handle this in my shell. The problem is if a user has an argument of C:\Foo\, the child process gets C:\Foo\; if they have an argument of "C:\Foo\" then the child gets C:\Foo" because the backslash escapes the quote and is removed itself, and the argument doesn’t terminate. The user is supposed to know to say "C:\Foo\\", but only for child processes that implement these rules correctly.

                                                        Having backslash be an escape character only in one specific case means users don’t know about it, developers implement it inconsistently, and shell scripts become a nightmare because normalizing around the rules is almost impossible. I was using DOS/Windows command lines extensively for 30 years before really understanding this behavior.
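
                                                        To see the rules in action, here’s a small sketch using CommandLineToArgvW (shell32’s parser differs from the CRT’s in a few corner cases, but they agree on these examples; link with shell32.lib):

                                                        #include <windows.h>
                                                        #include <shellapi.h>
                                                        #include <stdio.h>
                                                        
                                                        int main(void)
                                                        {
                                                            LPCWSTR samples[] = {
                                                                L"prog C:\\Foo\\",        /* argv[1] = C:\Foo\              */
                                                                L"prog \"C:\\Foo\\\"",    /* argv[1] = C:\Foo"  (surprise!) */
                                                                L"prog \"C:\\Foo\\\\\"",  /* argv[1] = C:\Foo\  (as wanted) */
                                                            };
                                                            for (int i = 0; i < 3; ++i) {
                                                                int argc;
                                                                LPWSTR *argv = CommandLineToArgvW(samples[i], &argc);
                                                                wprintf(L"%ls\n", samples[i]);
                                                                for (int j = 0; j < argc; ++j)
                                                                    wprintf(L"  argv[%d] = %ls\n", j, argv[j]);
                                                                LocalFree(argv);
                                                            }
                                                            return 0;
                                                        }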

                                                        1. 1

                                                          Your rough order of things sounds good, but it’s completely off. I’m sorry to inform you that the insane logic around backslash escapes in command line arguments was already in MS-DOS before filenames got spaces, even before Windows existed. Which actually makes sense, if you think about it, because it’s only in the MS-DOS environment that people still thought about command lines. The majority of Windows programs don’t even examine their command line, and early Windows programs that did make use of command line arguments would only accept exactly one argument (that being a filename to open at startup) – understandable because WinMain would just pass you an unparsed command line.

                                                          I don’t know when the weird parsing rules were added to MS-DOS. Given that MS-DOS 1.x didn’t have subdirectories, it seems plausible that they could have standardized the backslash as command-line escape character then, unwittingly setting up for a future conflict that was hastily worked around with the byzantine rules that we have now when backslashes suddenly became commonplace characters. But that’s just a guess.

                                                          Edit: Your “30 years” comment made me realize that it was exactly 30 years ago that I first found the official MS-DOS documentation for backslash escapes. I remember very clearly attempting to commit them to memory, with mixed results.

                                                          1. 2

                                                            it’s completely off

                                                            I stand corrected.

                                                            Which actually makes sense, if you think about it

                                                            On reflection, I agree, but for a different reason: I was assuming the reason is file names. But an argument can be anything, and once some program wants to get a single argv component that contains a space, it’d have the same implications. It inverts the order a bit though, because it suggests that quotes were used for file names because they were already used for argv, but that inherited the backslash-escape behavior which doesn’t play nicely with backslash delimited file names. So (as you suggest) the next obvious question is “why use backslash for an escape?”

                                                            I don’t know when the weird parsing rules were added to MS-DOS.

                                                            Well, it’s not really MS-DOS, it’s the toolchain that parses argv in the child process (i.e., the C library.) I spent some time poking around pcjs.org - the source (including this quirk) for the C startup code is in C 4.0, and prior to that the source was not distributed. Poking the compiler shows the binary exhibits the behavior in C 3.0. Prior to that, all of the argument parsing in the tools looks entirely custom and doesn’t handle quotes. So my guess is somewhere around 1984-85, but that’s well after DOS 2.0. Also, I looked at DOS 2.0 itself, and it’s entirely assembly: all command line parsing in external tools is done by hand, very minimally, without regard to things like quotes or escapes.

                                                            Had my theory about file names been right, I would have expected this later (C 5.1 or 6) because those supported OS/2 and it had long file names; but the argv parsing code would find its way into DOS anyway.

                                                            1. 1

                                                              Well, it’s not really MS-DOS, it’s the toolchain that parses argv in the child process (i.e., the C library.)

                                                              Yes, that’s a good point. The documentation I vaguely remember reading was probably for the MS C compiler, then. I imagine that the backslash was therefore chosen because of its well-established status as the escape character in C.

                                                        1. 2

                                                          ISTG command line parsing is one of the things that convinced me to ditch Windows for Unix.

                                                          1. 6

                                                            Nice! One note, though, is that in its current form it doesn’t match the standard rules of Mastermind scoring (which Wordle also follows) – specifically the requirement that each element can only match once. Thus for example if the answer is chdir and I guess cacos, then the first “c” should be marked green, but the second “c” must go unmarked. (On the other hand, if the answer is rmdir and I guess errno, then both “r”s should be marked yellow.)
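
                                                              In case it’s useful, the standard scoring is a simple two-pass affair - greens first, then yellows drawn from counts of the unmatched answer letters. A sketch:

                                                              #include <stdio.h>
                                                              
                                                              #define N 5
                                                              
                                                              static void score(char const *answer, char const *guess, char *marks)
                                                              {
                                                                  int counts[256] = {0};
                                                                  /* first pass: exact matches */
                                                                  for (int i = 0; i < N; ++i) {
                                                                      if (guess[i] == answer[i]) {
                                                                          marks[i] = 'G';
                                                                      } else {
                                                                          marks[i] = '.';
                                                                          ++counts[(unsigned char)answer[i]];
                                                                      }
                                                                  }
                                                                  /* second pass: each leftover answer letter can satisfy
                                                                   * at most one misplaced guess letter */
                                                                  for (int i = 0; i < N; ++i) {
                                                                      if (marks[i] == '.' && counts[(unsigned char)guess[i]] > 0) {
                                                                          marks[i] = 'Y';
                                                                          --counts[(unsigned char)guess[i]];
                                                                      }
                                                                  }
                                                                  marks[N] = '\0';
                                                              }
                                                              
                                                              int main(void)
                                                              {
                                                                  char marks[N + 1];
                                                                  score("chdir", "cacos", marks);
                                                                  printf("cacos vs chdir -> %s\n", marks);  /* G.... */
                                                                  score("rmdir", "errno", marks);
                                                                  printf("errno vs rmdir -> %s\n", marks);  /* .YY.. */
                                                                  return 0;
                                                              }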

                                                            1. 2

                                                              Yeah I think I failed the first challenge because of this… I got pretty close but I thought there were 2 e’s because of the way it’s scored. But there is only 1 e.

                                                              1. 1

                                                                Okay, thanks, now I don’t have to comment about failing because two e’s.

                                                                1. 2

                                                                  I did have a TODO in the code, but I’ve finally fixed it. Sorry!

                                                            1. 4

                                                              My main takeaway from this: Programmers still won’t write documentation.

                                                              1. 4

                                                                Shocker, right? :D But there are always some people like Viktor who care about the docs and work to restore the balance in the Source.

                                                                1. 1

                                                                  Yes, although I would replace “always” with “sometimes”. I’ve dealt with many counterexamples.

                                                                  1. 1

                                                                    Fair enough.

                                                              1. 9

                                                                I’d say that it was a little excessive to say that the BMP file format “didn’t make it” when it was the only raster image format that the Windows OS natively supported for years, and is still supported by Firefox and Chrome out of the box. The WMF file format would have been a more interesting item to use there instead.

                                                                1. 5

                                                                  Most screenshots I get from customers at $work are in .BMP. Still very much out there.

                                                                  1. 2

                                                                    You’re lucky! I’m still amazed at the number of people who send screenshots in Word documents or even Excel workbooks.

                                                                    1. 2

                                                                      I get those, sometimes also converted to a PDF to be more professional.

                                                                  2. 4

                                                                    It’s just a terrible headline.

                                                                    The article lists formats that were (and are) in widespread use, where most people reading the article probably interacted with more than half. That’s fairly successful.

                                                                    The real point is that we have faster CPUs than in the 80s, with more ability to compress; we have more bits per pixel and more pixels which increases the benefit of compression; and we moved to networks that are frequently bandwidth constrained. Hence, formats today are more compressed than formats in the late 80s/early 90s. That doesn’t mean they failed, it just means tradeoffs change.

                                                                    1. 1

                                                                      Yeah. Three I had never heard of, two I knew by name, and five I have used anywhere from sometimes to often.

                                                                      But if the definition is “it’s not PNG, JPEG, GIF, or SVG”, then yes, they didn’t make it.

                                                                    2. 3

                                                                      Same goes for IFF ILBM on the Amiga; it was the only picture format for graphics of 256 colours or less, making it the universal format.

                                                                      For that matter, TIFF was still the only way we handled photos when I worked in publishing; it can handle CMYK and its only real contender is Photoshop’s internal format.

                                                                      1. 4

                                                                      Yeah, but I also think it’s fair to say that IFF ILBM ‘didn’t make it’. Sure, it was the lingua franca for images at the time, but it only ever truly took off on the Amiga, and although those of us who were Of the Tribe may feel like The One True Platform was the most important thing EVER in the HISTORY OF COMPUTING, if we’re honest with ourselves - it wasn’t :)

                                                                        #include <intuition.h> FOR-EVER! :)

                                                                        1. 3

                                                                          RIFF, a little-endian variant of IFF, lives on in WAV, AVI, WEBP, and other formats.

                                                                          1. 2

                                                                            Well, at least it was included, unlike XPM/PPM or Degas Elite, but I still think this was mostly a tendentious listicle.

                                                                            1. 3

                                                                              Sure I mean the whole idea is an exercise in futility. There are always going to be unhappy NERRRDS grousing about how their Xerox INTERLISP 4 bit image format got left out :)

                                                                            2. 1

                                                                            those of us who were Of the Tribe may feel like The One True Platform was the most important thing EVER in the HISTORY OF COMPUTING

                                                                              …I’m in this picture and I don’t like it.

                                                                              1. 2

                                                                              I get it, but I also think everyone is young once, and sadly a necessary aspect of that (the concomitant dearth of life experience that helps you scope your opinions against commonly perceived reality) means we all get a pass, and rightly deserve it :)

                                                                                It’s the folks who NEVER grow out of this that are sadly crippled and deserve our pity, and maybe where appropriate our help.

                                                                            3. 3

                                                                              3.x’s Datatypes were so awesome, though.

                                                                              Although I remember setting a JPEG as my Workbench backdrop and it would take several minutes before it would display after startup on my 1200, until I downloaded a 68020-optimized (maybe 030/FPU optimized? I got an upgrade at some point) data type from Aminet and it would display after only a second or so.

                                                                              Good times.

                                                                              1. 2

                                                                                Or convert to ILBM. You only need to do it once, then it loads instantly :)

                                                                                1. 3

                                                                                  I mean yes but Datatypes were so much cooler. Also I think I had a good reason at the time, but I don’t remember what.

                                                                                  I do remember downloading an MP3 (Verve Pipe’s “The Freshmen”) and trying to play it on my 1200. The machine simply was not fast enough to decode the MP3 in real time, so I converted it to uncompressed 8SVX…it played just fine but took up an enormous portion of my 40MB hard drive.

                                                                                  1. 1

                                                                                    With accelerator boards, mp3 (and vorbis, and opus) are feasible. From a quick search, apparently a 68040/40MHz will handle mp3@320.

                                                                              And, back then, mp2 was still popular and used fewer resources.

                                                                              2. 2

                                                                          I have a vague feeling that .AVI, which was for a while an extremely prevalent container for video content, is some derivation of IFF (though perhaps not of ILBM).

                                                                              3. 3

                                                                                WMF is a walking security vulnerability. One of the opcodes is literally “execute the code at this offset in the file” and there were dozens of parsing vulns on top of that.

                                                                              1. 9

                                                                                Dare I say and ask: How many spaces is a tab? GCC seems to say “8”, for some reason. [emphasis mine]

                                                                                It never fails to astound me how a once-universally understood truth was basically erased within a few scant years. GCC has tabs as being eight spaces because for decades that’s what they were. While it was possible for most terminals to redefine the tab width, the default was always eight, and people generally never changed the setting, because if they did other people’s text files wouldn’t display correctly.

                                                                                You’ll note that many early C programmers indented their code to eight spaces. Using tabs for the indent level saved precious disk space (and even parsing time, on larger projects). But when programmers started favoring 4-space indents instead, they didn’t change the tab settings on their terminals! They just used spaces to indent on the first level. (But they would often still use a tab to indent at the second level. Why waste disk space, after all.)

                                                                        But somewhere in the 1980s-1990s (in my personal experience, anyway), people started conflating indent level with tab width. I started meeting more and more programmers who did not understand that these were two completely unrelated concepts. Often they were using GUI editors, and I thought they were just confusing tab characters with the Tab key, but then they would complain about my code looking wrong in their editors, and I realized that their editors were actually interpreting tab characters as having a width of four.

                                                                                I want to lay the blame for this at the feet of MSVC, but that’s probably just my bias. Likely there were a number of factors that led to this generational forgetting. I have grudgingly accepted that tabs are just cursed characters now, and stopped using them in my code. But I continue to be surprised by how few people even seem to know that this happened.

                                                                                1. 7

                                                                          The tab character means ‘indent to the next tabulator’. It is entirely environment dependent where you place the tabulators. Tabulators predate computers; they were present on typewriters. A typewriter carriage is pulled to the left by a spring. When you reached the end of the line, you typically had to manually push the carriage back to the right to start the next line. Teletypes added an extra control code for this (carriage return) so that the sequence of carriage return and line feed would move the carriage back to be ready to write to the leftmost column and advance to the next line. These are separate ASCII characters because they were required to be for controlling teletypes.

                                                                                  Tabs, in a typewriter, were usually implemented as sliders (tab stops) attached to the carriage. When you pressed the tab key, the carriage would be raised slightly and would slide (pulled by the spring) until it hit a tab stop. The same implementation was used by most teletypes.

                                                                          Not all typewriters were fixed-width. It requires a more complex design, but the amount that the carriage moved to the left when you released a key could be independent for each key, allowing proportional fonts to be written. Irrespective of whether a typewriter implemented this, the locations of the tab stops were not normally required to be an integer multiple of the character width; they were typically analogue things (sliders that could be clamped to the carriage at any distance).

                                                                          The idea that a tab is 8 spaces has been true for only a very small amount of time and space:

                                                                                  • Typewriters? Not true.
                                                                                  • Teletypes? Not always true.
                                                                          • Early virtual teletypes (terminals)? Probably true (though I’ve not actually found any terminals that didn’t let you configure the tab width; 8 was the default, but just because you never changed the default doesn’t mean that it couldn’t be changed).
                                                                          • Later terminals? Configurable; POSIX provides the tabs utility to control it.
                                                                                  • Typesetters (manual or computerised)? Not true.
                                                                                  • DTP / word processing software? Not true.
                                                                                  • Most code editors (vi or newer)? Not true.

                                                                                  Using tabs for indentation (i.e. the thing that the tab character was invented for, back when typewriters were new and exciting) makes it possible for the reader to control the indent width. Using a mixture of tabs and spaces makes this hard. As of the most recent versions, clang-format now supports an indent mode where tabs are used for indentation, spaces for alignment, so the code is displayed correctly irrespective of the consumer’s tab width.
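
                                                                          For anyone who wants to try that mode, the relevant .clang-format keys look something like this (the AlignWithSpaces value arrived in clang-format 11):

                                                                          UseTab: AlignWithSpaces
                                                                          IndentWidth: 8
                                                                          TabWidth: 8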