1.  

    Node.js compatibility sounds like a terrible idea. Why develop for Deno at all in that case?

    1.  

      Consider Python 2 and 3: compatibility tools like “six” are useful in the interim, as the community transitions between incompatible languages/runtimes, even if they aren’t particularly desirable in the final state.

    1. 5

      Meta: man do I wish we could filter host URLs as well as tags. JWZ hasn’t written anything I’ve wanted to read in 15 years.

      1. 1

        You might be able to write an adblocker rule to hide them.

      1. 2

        These are all the same arguments. Some of us want to be able to swap out the battery in our phone, some of us would rather it be waterproof.

        Right-to-repair legislation, should it exist, needs to be about parts availability and about DRM that exists to break compatibility rather than to protect media. It should not be about what features you can or cannot put in a phone you’re making.

        1. 5

          This reads like an obit. I wonder what the real story is, and hope Igor’s OK.

          1. 5

            From the article:

            So it is with sadness, but also gratitude, that we announce today Igor has chosen to step back from NGINX and F5 in order to spend more time with his friends and family and to pursue personal projects

            1. 6

              With such a wistful tone, I just never think this is the real story.

              1. 3

                Am I the only one who thinks nobody has ever quit to “spend more time with family”? It’s such a cliche press release these days.

                I have become cynical in my old age.

                1. 7

                  It does happen for real in some cases. I had a coworker who quit because his son was diagnosed with a terminal disease and he wanted to be there for as much of the boy’s remaining time as he could; this wasn’t widely known outside his immediate circle of colleagues and the official line was the “spend more time with family” one, which in his case was completely true.

                  That said, I too am cynical and I suspect that most of the time when it’s true at all, it’s less like, “I want to teach my kid to play baseball” and more like, “My spouse got fed up with me always having work on my mind and told me I could keep my job or my marriage but not both.”

                  1. 2

                    I guess I deserve that for the way I phrased it. I mainly meant nobody whose resignation gets a press release is actually quitting for that reason. They’re usually not really even quitting.

                  2. 2

                    My coworker left Google to join my company because he noticed he was not spending enough time with his son. He has worked at my company for I think 8 years now and says he does not regret his decision. Better WLB here.

                    I remember seeing a post on HN by a father where he said that his (Kindergarten or 1st grade) son was supposed to draw a picture of his family at school, and he left out the father. Because they weren’t spending any time together. Dad was always at work.

                2. 1

                  How does it read like an obit? The first sentence says:

                  With profound appreciation and gratitude, we announce today that Igor Sysoev – author of NGINX and co‑founder of NGINX, Inc. – has chosen to step back from NGINX and F5 in order to spend more time with his friends and family and to pursue personal projects.

                  Obits don’t generally suggest that someone has “chosen to step back”. The only way it’d be accurate to say so would be in the case of suicide, and I would find such phrasing to be in spectacularly poor taste in that case.

                  So what, after that first sentence, made it read like an obituary for you?

                  1. 2

                    By the way the entire thing reads just like an obituary, apart from the part you quoted.

                    1. 1

                      I suppose I just disagree. The fourth paragraph talks about Igor in the present tense:

                      That Igor is held in such high esteem by community and developers, enterprise customers, and NGINX engineers is a testament to his leadership by example with humility, curiosity, and an insistence on making great software

                      And the final paragraph tells Igor: “thank you so much for all the years working with us and we wish you the very best in your next chapter.”

                      These aren’t things I expect to see in an obituary. I guess the fact that the word “legacy” is used makes me think a bit that direction, but not the rest of it.

                    2. 2

                      The first sentence says: […]

                      That first sentence was moved to the beginning after they got that feedback. It wasn’t there when the article was posted on Lobsters.

                      Edit: I still agree with you, though.

                      1. 2

                        Oh. Well there it is, then. I guess you just looked sooner than I did, @Student.

                  1. 4

                    I think the single-image model is the biggest thing holding Smalltalk and co. back. Hear me out – I don’t think sticking everything in text files and using that as the glue for everything, like we do now, is the answer either. I think there should be multiple images, and the communication protocols between them should be well-documented. There’s no reason for the running application and the source + tools to live in the same image or VM, nor for the current developer’s instance of the source and tooling to live there. We should be able to pause and snapshot a running program for later debugging without that VM needing the developer tools to be in-process. Deployment would turn into “pause, disconnect the dev image, zip” rather than “painstakingly prune the parts we think we don’t need at runtime without blowing up the tools we’re using to do the pruning”.

                    This would allow the images to be different, and support different operations. We do want a forever history of the source code and project-specific tooling, we probably don’t want that for every running instance of our software. We might want the code running within the live application image to be using a different compilation strategy, and we might want the option to jump into an application image from 6 months ago using the newest set of dev tools that we’ve been working on for the last 6 months. Or you’re like me and can’t stand Morphic and its descendants, but would still like to be able to add/replace methods and inspect data with all the powers you expect from Smalltalk.

                    There’s no need to give up the dynamism of a running smalltalk system, but unless you need it for business logic purposes, a running application VM should be worried about what is rather than what was, and this probably opens up a lot of new avenues where the different VM aspects might optimise for different use cases.

                    1. 2

                      I think you’re right, but you also risk losing out on a great strength (arguably the great strength): that this is a fully reprogrammable user interface. It’s much easier to see how you would incorporate something like the user’s choice of contextual help app that works across applications in something like a Smalltalk environment.

                      1. 2

                        This is similar to what we were trying to build with Étoilé. We had a persistence framework called CoreObject that provided a data model with diffing and merging, which we could use for things like unlimited persistent branching undo and remote collaboration. We built some GUI abstractions where the view hierarchy both exposed and consumed the CoreObject data model and so we could bind models to views but could also bind views to views if you wanted to inspect the view hierarchy and could persist the views in the same way that you’d persist the object. With Pragmatic Smalltalk I wanted to encode the AST in the same object model but still use existing native libraries.

                      1. 5

                        It’s also very nice that CodeMirror 6 was developed from the ground up with TypeScript, so it’s easier to contribute to, and easier to fix any potential issues with API additions / improvements.

                        1. 1

                          I didn’t know that, but am very glad to hear it, chur. I’m gonna go lookit.

                        1. 57

                          The developer of these libraries intentionally introduced an infinite loop that bricked thousands of projects that depend on ‘colors’ and ‘faker’.

                          I wonder if the person who wrote this actually knows what “bricked” means.

                          But beyond the problem of not understanding the difference between “bricked” and “broke”, this action did not break any builds that were set up responsibly; only builds which tell the system “just give me whatever version you feel like regardless of whether it works” which like … yeah, of course things are going to break if you do that! No one should be surprised.

                          Edit: for those who are not native English speakers, “bricked” refers to a change (usually in firmware on an embedded device) which not only causes the device to be non-functional, but also breaks whatever update mechanisms you would use to get it back into a good state. It means the device is completely destroyed and must be replaced since it cannot be used as anything but a brick.

                          GitHub has reportedly suspended the developer’s account

                          Hopefully this serves as a wakeup call for people about what a tremendously bad idea it is to have all your code hosted by a single company. Better late than never.

                          1. 25

                            There have been plenty of wakeup calls for people using GitHub, and I doubt one additional one will change the minds of very many people (which doesn’t make it any less of a good idea to make your code hosting infrastructure independent from GitHub). The developer was absolutely trolling (in the best sense of the word), and a lot of people have made it clear that they’re very eager for GitHub to deplatform trolls.

                            I don’t blame him certainly; he’s entitled to do whatever he wants with the free software he releases, including trolling by releasing deliberately broken commits in order to express his displeasure at companies using his software without compensating him in the way he would like.

                            The right solution here is for any users of these packages to do exactly what the developer suggested and fork them without the broken commits. If npm (or cargo, or any other programming language ecosystem package manager) makes it difficult for downstream clients to perform that fork, this is an argument for changing npm in order to make that easier. Build additional functionality into npm to make it easier to switch away from broken or otherwise-unwanted specific versions of a package anywhere in your project’s dependency tree, without having to coordinate this with other package maintainers.

                            1. 31

                              The developer was absolutely trolling (in the best sense of the word)

                              To the extent there is any good trolling, it consists of saying tongue-in-cheek things to trigger people with overly rigid ideas. Breaking stuff belonging to people who trusted you is not good in any way.

                              I don’t blame him certainly; he’s entitled to do whatever he wants with the free software he releases, including trolling by releasing deliberately broken commits in order

                              And GitHub was free to dump his account for his egregious bad citizenship. I’m glad they did, because this kind of behavior undermines the kind of collaborative trust that makes open source work.

                              to express his displeasure at companies using his software without compensating him in the way he would like.

                              Take it from me: the way to get companies to compensate you “in six figures” for your code is to release your code commercially, not open source. Or to be employed by said companies. Working on free software and then whining that companies use it for free is dumbshittery of an advanced level.

                              1. 32

                                No, I think the greater fool is the one who can’t tolerate changes like this in free software.

                                1. 1

                                  It’s not foolish to trust, initially. What’s foolish is to keep trusting after you’ve been screwed. (That’s the lesson of the Prisoner’s Dilemma.)

                                  A likely lesson companies will draw from this is that free software is a risk, and that if you do use it, stick to big-name reputable projects that aren’t built on a house of cards of tiny libraries by unknown people. That’s rather bad news for ecosystems like node or RubyGems or whatever.

                                2. 12

                                  Working on free software and then whining that companies use it for free is dumbshittery of an advanced level.

                                  Thank you. This is the point everybody seems to be missing.

                                  1. 49

                                    The author of these libraries stopped whining and took action.

                                    1. 3

                                      Worked out a treat, too.

                                      1. 5

                                        I mean, it did. Hopefully companies will start moving to software stacks where people are paid for their effort and time.

                                        1. 6

                                          He also set fire to the building making bombs at home; maybe he’s not a great model.

                                          1. 3

                                            Not if you’re being responsible and pinning your deps though?

                                            Even if that weren’t true, though, the maintainer doesn’t have any obligation to companies using their software. If the company used the software without acquiring a support contract, then that’s just a business risk the company should have understood. If they didn’t, that’s their fault, not the maintainer’s - companies successfully do this kind of risk/reward calculus all the time in other areas.

                                            1. 1

                                              I know there are news reports of a person with the same name being taken into custody in 2020 where components that could be used for making bombs were found, but as far as I know, no property damage occurred then. Have there been later reports?

                                            2. 3

                                              Yeah, like proprietary or in-house software. Great result for open source.

                                              Really, if I were a suit at a company and learned that my product was DoS’d by source code we got from some random QAnon nutjob – that this rando had the ability to push malware into his Git repo and we’d automatically download and run it – I’d be asking hard questions about why my company uses free code it just picked up off the sidewalk, instead of paying a summer intern a few hundred bucks to write an equivalent library to printf ANSI escape sequences or whatever.

                                              That’s inflammatory language, not exactly my viewpoint but I’m channeling the kind of thing I’d expect a high-up suit to say.

                                    2. 4

                                      There have been plenty of wakeup calls for people using Github, and I doubt one additional one will change the minds of very many people

                                      Each new incident is another straw. For some, it’s the last one to break the camel’s back.

                                      1. 4

                                        in order to express his displeasure at companies using his software without compensating him in the way he would like.

                                        This sense of entitlement is amusing. These people totally miss the point of free software. They make something that many people find useful and use (very much thanks to the nature of being released under a free license, mind you), and then they feel entitled to some sort of material/monetary compensation.

                                        This is not a Miss Universe contest. It’s not too hard to understand that had this project been non-free, it would probably not have gotten anywhere. This is the negative side of GitHub. GitHub has been an enormously valuable resource for free software. Unfortunately, when it grows so big, it will inevitably also attract the kind of people who only like the free aspect of free software when it benefits them directly.

                                        1. 28

                                          These people totally miss the point of free software.

                                          An uncanny number of companies (and people employed by said companies) also totally miss the point of free software. They show up in bug trackers all entitled like the license they praise in all their “empowering the community” slides doesn’t say THE SOFTWARE IS PROVIDED “AS IS” in all fscking caps. If you made a list of all the companies to whom the description “companies that only like the free aspect of free software when it benefits them directly” doesn’t apply, you could apply a moderately efficient compression algorithm and it would fit in a boot sector.

                                          I don’t want to defend what the author did – as someone else put it here, it’s dumbshittery of an advanced level. But if entitlement were to earn you an iron “I’m an asshole” pin, we’d have to mine so much iron ore on account of the software industry that we’d trigger a second Iron Age.

                                          This isn’t only on the author, it’s what happens when corporate entitlement meets open source entitlement. All the entitled parties in this drama got exactly what they deserved IMHO.

                                          Now, one might argue that what this person did affected not just all those entitled product managers who had some tough explaining to do to their suit-wearing bros, but also a bunch of good FOSS “citizens”. That’s absolutely right, but while this may have been unprofessional, the burden of embarrassment should be equally shared by the people who took a bunch of code developed by an independent, unpaid developer, in their spare time – in other words, a hobby project – without any warranty, and then baked it into their super professional codebases without any contingency plan for “what if all that stuff written in all caps happens?”. This happened to be intentional, but a re-enactment of this drama is just one half-drunk evening hacking session away.

                                          It’s not like they haven’t been warned – when a new dependency is proposed, that part is literally the first one that’s read, and it’s reviewed by a legal team whose payment figures are eye-watering. You can’t build a product based only on the good parts of FOSS. Exploiting FOSS software only when it benefits yourself may also be assholery of an advanced level, but hoping that playing your part shields you from all the bad parts of FOSS is naivety of an advanced level, and commercial software development tends to punish that.

                                          1. 4

                                            They show up in bug trackers all entitled like the license they praise in all their “empowering the community” slides doesn’t say THE SOFTWARE IS PROVIDED “AS IS” in all fscking caps

                                            Slides about F/OSS don’t say that because expensive proprietary software has exactly the same disclaimer. You may have an SLA that requires bugs to be fixed within a certain timeframe, but outside of very specialised markets you’ll be very hard pressed to find any software that comes with any kind of liability for damage caused by bugs.

                                            1. 1

                                              Well… I meant the license, not the slides :-P. Indeed, commercial licenses say pretty much the same thing. However, at least in my experience, the presence of that disclaimer is not quite as obvious with commercial software – barring, erm, certain niches.

                                              Your average commercial license doesn’t require proprietary vendors to issue refunds, provide urgent bugfixes, or stick by their announced deadlines for fixes and features. But the practical constraints of staying in business are pretty good at compelling them to do some of these things.

                                              I’ve worked both with and without SLAs so I don’t want to sing praises to commercial vendors – some of them fail miserably, and I’ve seen countless open source projects that fix security issues in less time than it takes even competent large vendors to call a meeting to decide a release schedule for the fix. But expecting the same kind of commitment and approachability from Random J. Hacker is just not a very good idea. Discounting pathological arseholes and know-it-alls, there are perfectly human and understandable reasons why the baseline of what you get is just not the same when you’re getting it from a development team with a day job, a bus factor of 1, and who may have had a bad day and has no job description that says “be nice to customers even if you had a bad day or else”.

                                              The universe npm has spawned is particularly susceptible to this. It’s a universe where adding a PNG to JPG conversion function pulls forty dependencies, two of which are different and slightly incompatible libraries which handle emojis just in case someone decided to be cute with file names, and they’re going to get pulled even if the first thing your application does is throw non-alphanumeric characters out of any string, because they’re nth order dependencies with no config overrides. There’s a good chance that no matter what your app does, 10% of your dependencies are one-person resume-padding efforts that turned out to be unexpectedly useful and are now being half-heartedly maintained largely because you never know when you’ll have to show someone you’re a JavaScript ninja guru in this economy. These packages may well have the same “no warranty” sticker that large commercial vendors put on theirs, but the practical consequences of having that sticker on the box often differ a lot.

                                              Edit: to be clear, I’m not trying to say “proprietary – good and reliable, F/OSS – slow and clunky”, we all know a lot of exceptions to both. What I meant to point out is that the typical norms of business-to-business relations just don’t uniformly apply to independent F/OSS devs, which makes the “no warranty” part of the license feel more… intense, I guess.

                                          2. 12

                                            The entitlement sentiment goes both ways: companies expect free code and get upset if the maintainer breaks backward compatibility. Since when is there an obligation to behave responsibly?

                                            When open source started, there wasn’t that much money involved and things were very much in the academic spirit of sharing knowledge. That created a trove of wealth that companies are just happy to plunder now.

                                          3. 1

                                            releasing deliberately broken commits in order to express his displeasure at companies using his software without compensating him in the way he would like.

                                            Was that honestly the intent? Because in that case: what hubris! These libraries were existing libraries translated to JS. He didn’t do any of the hard work.

                                          4. 8

                                            There is further variation on the “bricked” term, at least in the Android hacking community. You might hear things like “soft bricked”, which refers to a device whose normal installation / update method no longer works, but which can be recovered through additional tools, or perhaps by using JTAG to reprogram the bootloader.

                                            There is also “hard bricked” which indicates something completely irreversible, such as changing the fuse programming so that it won’t boot from eMMC anymore. Or deleting necessary keys from the secure storage.

                                            1. 3

                                              this action did not break any builds that were set up responsibly; only builds which tell the system “just give me whatever version you feel like regardless of whether it works” which like … yeah, of course things are going to break if you do that! No one should be surprised.

                                              OK, so, what’s a build set up responsibly?

                                              I’m not sure what the expectations are for packages on NPM, but the changes in that colors library were published with an increment only to the patch version. When trusting the developers (and if you don’t, why would you use their library?), not setting in stone the patch version in your dependencies doesn’t seem like a bad idea.

                                              1. 26

                                                When trusting the developers (and if you don’t, why would you use their library?), not setting in stone the patch version in your dependencies doesn’t seem like a bad idea.

                                                No, it is a bad idea. Even if the developer isn’t actively malicious, they might’ve broken something in a minor update. You shouldn’t ever blindly update a dependency without testing afterwards.

                                                1. 26

                                                  Commit package-lock.json like all of the documentation tells you to, and don’t auto-update dependencies without running CI.

                                                  1. 3

                                                    And use npm shrinkwrap if you’re distributing apps and not libraries, so the lockfile makes it into the registry package.

                                                  2. 18

                                                    Do you really think that a random developer, however well intentioned, is capable of evaluating whether or not any given change they make will have any behavior-observable impact on downstream projects they’re not even aware of, let alone have seen the source for or have any idea how it consumes their project?

                                                    I catch observable breakage coming from “patch” revisions easily a half dozen times a year or more. All of it accidental “oh we didn’t think about that use-case, we don’t consume it like that” type stuff. It’s truly impossible to avoid for anything but the absolute tiniest of API surface areas.

                                                    The only sane thing to do is to use whatever your tooling’s equivalent of a lock file is to strictly maintain the precise versions used for production deploys, and only commit changes to that lock file after a full re-run of the test suite against the new library version, patch or not (and running your eyeballs over a diff against the previous version of its code would be wise, as well).

                                                    It’s wild to me that anyone would just let their CI slip version updates into a deploy willy-nilly.

                                                    1. 11

                                                      This neatly shows why Semver is a broken religion: you can’t just rely on a version number to consider changes to be non-broken. A new version is a new version and must be tested without any assumptions.

                                                      To clarify, I’m not against specifying dependencies to automatically update to new versions per se, as long as there’s a CI step to build and test the whole thing before it goes into production, to give you a chance to pin a broken dependency to the last-known-good version.

                                                      1. 7

                                                        Semver doesn’t guarantee anything though and doesn’t promise anything. It’s more of an indicator of what to expect. Sure, you should test new versions without any assumptions, but that doesn’t say anything about semver. What that versioning scheme allows you to do though is put minor/revision updates straight into ci and an automatic PR, while blocking major ones until manual action.
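
                                                        That policy (auto-PR patch/minor bumps through CI, hold major bumps for review) can be sketched with a tiny helper. This is a hypothetical illustration, not anything npm ships; bots like Dependabot or Renovate implement it for real:

                                                        ```javascript
                                                        // Classify a dependency update so patch/minor bumps can be
                                                        // auto-PR'd (still gated by CI) while major bumps are held
                                                        // for manual review. Hypothetical helper, not an npm API.
                                                        function updateKind(current, next) {
                                                          const [cMaj, cMin] = current.split(".").map(Number);
                                                          const [nMaj, nMin] = next.split(".").map(Number);
                                                          if (nMaj !== cMaj) return "major"; // block until reviewed
                                                          if (nMin !== cMin) return "minor"; // auto-PR, run tests
                                                          return "patch";                    // auto-PR, run tests
                                                        }

                                                        console.log(updateKind("1.4.0", "1.4.44")); // "patch": CI must still run
                                                        console.log(updateKind("1.4.0", "2.0.0"));  // "major": held for review
                                                        ```

                                                        Note that even the “safe” patch case still goes through the test suite; the sabotaged colors release shipped as exactly this kind of patch bump.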

                                                      2. 6

                                                        The general form of the solution is this:

                                                        1. Download whatever source code you are using into a secure versioned repository that you control.

                                                        2. Test every version that you consider using for function before you commit to it in production/deployment/distribution.

                                                        3. Build your system from specific versions, not from ‘last update’.

                                                        4. Keep up to date on change logs, security lists, bug trackers, and whatever else is relevant.

                                                        5. Know what your back-out procedure is.

                                                        These steps apply to all upstream sources: language modules, libraries, OS packages… dependency management is crucial.
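
                                                        As a minimal sketch of step 3 for an npm project: fail the build if package.json declares floating ranges instead of exact pins. Illustrative only; the pattern match is rough, and a real check should use the `semver` package:

                                                        ```javascript
                                                        // Return the names of dependencies declared with floating
                                                        // ranges (^, ~, or wildcards) rather than exact versions.
                                                        // Rough sketch; real tooling should parse with `semver`.
                                                        function floatingDeps(pkg) {
                                                          return Object.entries(pkg.dependencies || {})
                                                            .filter(([, range]) => /^[\^~]|[*x]/.test(range))
                                                            .map(([name]) => name);
                                                        }

                                                        // Example: `colors` floats (a sabotaged 1.4.44 would match
                                                        // ^1.4.0), while `faker` is pinned to one reviewed version.
                                                        const pkg = { dependencies: { colors: "^1.4.0", faker: "5.5.3" } };
                                                        console.log(floatingDeps(pkg)); // [ 'colors' ]
                                                        ```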

                                                        1. 3

                                                          Amazon does this. Almost no one else does, but that’s a choice, with benefits (mostly saving the setup effort) and consequences (all of this here).

                                                        2. 6

                                                          When trusting the developers (and if you don’t, why would you use their library?)

                                                          If you trust the developers, why not give them root on your laptop? After all, you’re using their library so you must trust them, right?

                                                          1. 7

                                                            There’s levels to trust.

                                                            I can believe you’re a good person by reading your public posts online, but I’m not letting you babysit my kids.

                                                        3. 2

                                                          Why wouldn’t this behavior be banned by any company?

                                                          1. 2

How do they ban them? They’re not paying them. Unless you mean the people who did not pin their dependencies?

                                                            1. 4

                                                              I think it is bannable on any platform, because it is malicious behavior - that means he intentionally caused harm to people. It’s not about an exchange of money, it’s about intentional malice.

                                                            2. 1

                                                              Because it’s his code and even the license says “no guarantees” ?

                                                              1. 2

                                                                The behavior was intentionally malicious. It’s not about violating a contract or guarantee. For example, if he just decided that he was being taken advantage of and removed the code, I don’t think that would require a ban. But he didn’t do that - he added an infinite loop to purposefully waste people’s time. That is intentional harm, that’s not just providing a library of poor quality with no guarantee.

Beyond that, if that loop went unnoticed on a build server and cost the company money, I think he should be legally responsible for those damages.

                                                          1. 7

                                                            Excellent stuff, I know they’ve been working on this for a long time and it must be a huge moment for them. Party time!

                                                            1. 8

                                                              I fear the same thing happening in the Rust ecosystem some day, because you quickly pull in a non-trivial number of small libs even for a simple project, and most of them are hosted on GitHub and maintained often by separate individuals or small groups. I know there is version pinning, but how well is this followed? This might be a good idea for an explorative analysis, and maybe someone can say more about this concern, which probably has been addressed in some way.

                                                              As a side note, many think the circumstances of Aaron Swartz’ death raise a lot of questions, to put it lightly.

                                                              1. 19

                                                                As a side note, many think the circumstances of Aaron Swartz’ death raise a lot of questions, to put it lightly.

                                                                I mean, yeah, sure, but I’m having a hard time seeing how that is at all related to this situation.

                                                                1. 2

                                                                  It’s supposedly an integral part of the rantings of the developer in question.

                                                                  1. 3

                                                                    Is there more background? I saw the hashtag in a tweet from Marak and was completely confused.

                                                                2. 7

                                                                  and most of them are hosted on GitHub

                                                                  The rust ecosystem has a tool enforced rule that for all crates hosted by crates.io, all rust dependencies are also hosted by crates.io. Technically you can have a build.rs that manually downloads things from github (or elsewhere), but that is incredibly rare.

                                                                  I know there is version pinning, but how well is this followed

                                                                  Every time you run a build, it generates a Cargo.lock file pinning the version. Most people developing binaries commit this, that’s what the default .gitignore encourages, so it’s followed pretty well.

                                                                  1. 3

                                                                    The lock file also includes checksums for each dependency and crates.io doesn’t allow re-pushing the same version.

                                                                  2. 5

                                                                    Dependencies update when you run cargo update, and not automatically. It’s not foolproof: if you don’t preserve Cargo.lock and use CI/Docker that naively wipes all state away, you’re going to get fresh(est) deps every time.

                                                                    crates.io is lucky that it’s a step behind npm, so it can learn from npm’s mistakes. For example, from the start it prevented left-pad incidents by not deleting any packages (yanking only hides packages from UI, not direct downloads).

                                                                    But it’s only a matter of time before the next big mess happens on crates.io too. Sadly, crates.io is understaffed, so there are several features/mitigations it probably should have, but hasn’t got yet.

                                                                    There’s cargo-crev that aims to create a set of manually-verified trustworthy packages, but crates.io is growing by 70 releases a day, so it’s hard to keep up.

                                                                    1. 1

I think the problem is inherent. If you’re downloading stuff in apt, you’re trusting the people packaging that application (and their download servers). If you’re installing stuff manually, you’re trusting the people where you download it. And if you’re developing things you’ll have to trust the developers. And I’m certain at each of these steps are many “hidden dependencies” you don’t actually realize when downloading. It will fail somewhere in the future, and people will rally about how bad crates.io is, while the rest of the world moves on.

                                                                    2. 1

Automatically generated for all builds; it should be committed for all applications:

                                                                      # This file is automatically @generated by Cargo.
                                                                      # It is not intended for manual editing.
                                                                      version = 3
                                                                      
                                                                      [[package]]
                                                                      name = "actix-codec"
                                                                      version = "0.4.1"
                                                                      source = "registry+https://github.com/rust-lang/crates.io-index"
                                                                      checksum = "13895df506faee81e423febbae3a33b27fca71831b96bb3d60adf16ebcfea952"
                                                                      [...]
                                                                      
                                                                    1. 3

                                                                      This is both a marvelous write-up and a reminder as to why I am near exclusively a console gamer these days.

                                                                      1. 4

It’s funny that game consoles become more and more like PCs with a single fixed configuration. Almost like Apple hardware.

                                                                        1. -1

Not really, it’s the fixed configuration (and appliance-like customer lockout) that has the value, not having a weird low-numbers CPU.

                                                                          1. 5

it’s the fixed configuration (and appliance-like customer lockout) that has the value

                                                                            I mean, you literally just described game consoles and apple hardware, so I’m not sure what your point is?

                                                                            1. 3

                                                                              I think his point is that, probably until about 10 years ago, consoles shipped differentiating hardware. You could do things on an 8-bit Nintendo that you couldn’t do on a commodity general-purpose computer at the time, even though the PC was more expensive. When 3D acceleration started to be the norm (around the PS1 / N64 era), consoles had very exciting accelerator designs to get the best possible performance within a price envelope. Most PCs didn’t have 3D accelerators at all and when they did they were either slower than the ones in consoles or a lot more expensive.

                                                                              Over time, the economies of scale for CPUs and GPUs have meant that a mostly commodity CPU and GPU are faster than anything custom that you could build for the same price. Consoles typically have custom SoCs still (which makes economic sense because they’re selling a large number of exactly the same chip), but most of the IP cores on them are off-the-shelf components. They even run commodity operating systems (Windows on Xbox, FreeBSD on PS4), though tuned somewhat to the particular use case.

                                                                              It’s unlikely that a future console will have much custom hardware unless it is doing something very new and exciting. HoloLens, for (a non-console) example, has some custom chips because off-the-shelf commodity hardware for AR doesn’t really exist and so a console wanting to do AR might get custom chips.

                                                                              Even in the classic Nintendo era, the value of consoles to developers was twofold:

                                                                              • They had hardware optimised for games.
                                                                              • Every single instance of the console had exactly the same hardware and so the testing margin was small.

                                                                              The first is now far less important than the second. This is somewhat true for the Apple hardware but the scales are different. The Xbox One, for example, came out in 2013. The Xbox One S was almost identical hardware, just cheaper. The Xbox One X wasn’t released until 2017 and was faster but any game written for the older hardware would run fine on it, so if you weren’t releasing a AAA game then you could just test on the cheaper ones. The Xbox Series X/S were released 7 years later. If it has a similar lifetime, that’s four devices to test on for 14 years. Apple generally releases at least 2-3 models of 4-5 different product lines every year.

                                                                              1. 1

                                                                                How is it funny? It’s sensible and predictable, and IMO kind of a bummer.

                                                                        1. 19

                                                                          For what it’s worth, I always thought this is a seriously underappreciated piece of work due to timing.

                                                                          Unlike its Windows counterpart, it’s a native BASIC compiler that generates OMF files and executes a native linker, and distributes a character-based windowing library. This makes it possible to mix-and-match with C or assembly. The UI library is impressive just due to how comprehensive it is - there were many versions of character based windowing toolkits, but this one allows a similar set of configurability on controls to the Windows version. If you ignore the UI library, it’s the final version of QuickBasic, and it compiles nibbles.bas to run at truly insane speed.

                                                                          It also shipped with a converter to transition projects to Windows, although that was a bit hit-and-miss, since the UI elements really have a “preferred” size on both platforms.

                                                                          To me, this was the QuickBasic for DOS team going out with a mic-drop.

                                                                          1. 8

                                                                            Couldn’t agree more. I used VBDOS a lot as a teenager, learning solely from the really-good included documentation. I used it to make a bunch of little form programs to generate files for video game configuration and batch files. It gave me immediate feedback in a way that was important to that stage of my learning, and was immediately applicable in a way that was important to that stage of my interest.

There are so few places like this for kids right now. Hell, even as a professional today I wish it were still this easy to write little convenience stuff.

                                                                            1. 3

                                                                              Is there any comparison out there of, say, a simple GUI CRUD program written in a variety of languages (with simple backends, e.g., SQLite or flat file)? I’m really curious how different popular modern languages and GUI toolkits compare in that sort of domain.

                                                                              1. 3

Poorly, honestly. I was on the Delphi side, not the VB side, but it took well less than an hour to put together a simple database CRUD app, complete with installer. (My understanding is that VB4 and onward, at a minimum, delivered similar speed for that kind of thing, so I don’t think that was unique to Delphi, but I just don’t feel comfy speaking to it.) The only web framework I’ve seen that can do that so quickly is Rails, but that can only do the client/server setup, whereas Delphi can trivially do local-only or client/server via BDE (and I believe VB could also do either, via Jet, but this is again not something I worked with).

                                                                                1. 2

                                                                                  It’s been ages since I did this, but there are three distinct bits to this problem:

                                                                                  • Building a UI.
                                                                                  • Defining the business logic.
                                                                                  • Integrating with a back end.

OpenStep + Enterprise Object Framework was probably the best I’ve ever seen at this. For the first part, NeXT’s Interface Builder did something that most tools that have copied it (including Delphi) missed: it was not a UI builder, it was a tool for creating a serialised object graph. Those objects included view objects, controllers, and connections for attaching models and so on to the rest of the system. With EOF (an ORM), those controllers could be connected directly to a database, which gave you the third part almost for free. Unfortunately, NeXT charged around $40K for a license to EOF (I think it was bundled with WebObjects, which was doing Ruby-on-Rails-like things in 2004). I don’t think Apple ever released the Objective-C version of EOF with Cocoa (the later versions of WebObjects and EOF from NeXT were Java, and Apple supported them for a while). They provided a cut-down version as CoreData, but it supports only local back ends. GNUstep provided an implementation of EOF (GDL2) but never provided the GUI tools for working with it (and GORM is much worse than even an old version of NeXT’s Interface Builder), so it had very few users.

                                                                                  The modern equivalents are probably things like Power Apps, which use the Excel formula language for business logic and provide black-box connectors to various data sources and a GUI builder.

                                                                            2. 2

                                                                              I was a huge fan of vbdos. My only complaint was that the UI for the IDE was slow and the compiled code was by default much slower than what quickbasic produced. It may have been possible to tune (compiling in release mode or similar) but the IDE never guided me in that direction.

                                                                              As a middle schooler learning programming, I loved vbdos, but the performance pushed me to eventually learn Pascal and C (my computer did not have enough memory to run Turbo C++).

                                                                              1. 1

It actually compiles to native code? VB for Windows used a crappy VM for the first few versions, IIRC.

                                                                                1. 2

                                                                                  I remember a lot of arguments about whether VB actually counted as a compiler in the ‘90s. For a high-level language, most of the operations are going to be implemented in high-level service routines in a support library, so there’s a bit of a spectrum:

                                                                                  • Interpreter that walks an AST (or even parses each statement) and calls the service routines.
                                                                                  • Compiler that generates bytecode, interpreter that dispatches to a service routine for each bytecode.
                                                                                  • Compiler that replaces each bytecode with a call to the corresponding service routine.
                                                                                  • Compiler that replaces calls to small service routines with inlined versions.

There is a big performance jump from the first to the second, a much smaller return for each subsequent one. A decent bytecode interpreter is easily a factor of 10 faster than a simple AST interpreter (ignoring things like Graal, which do a lot of optimisation on the AST as they run). If the bytecodes are each rich operations, then they can be individually optimised and the overhead of the bytecode dispatch is small. If your bytecode is ‘add two 32-bit integers’ then there’s a lot of dispatch overhead, but if it’s ‘draw an arc segment with this Bezier’ or ‘print this sequence of variables converting any non-string values to strings’ then the dispatch overhead is negligible. BASIC bytecodes are typically closer to the latter in most programs (and with VB it was pretty easy to offload any non-trivial calculation to another language, so the performance of raw numerical compute didn’t matter to most people, especially after VB4, when they replaced VBX with OCX and you could import COM objects from C++ trivially).
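To make the first two rungs of that spectrum concrete, here is a toy sketch (in Ruby, and nothing to do with VB’s actual implementation): an AST walker that re-dispatches on node type at every step, versus a flat bytecode loop that dispatches once per pre-compiled instruction.

```ruby
# Rung 1: AST interpreter — recursive dispatch on node type every time.
def eval_ast(node)
  case node[:op]
  when :lit then node[:val]
  when :add then eval_ast(node[:l]) + eval_ast(node[:r])
  end
end

# Rung 2: bytecode interpreter — a flat loop over pre-compiled instructions,
# each dispatching to a small service routine via a stack.
def run_bytecode(code)
  stack = []
  code.each do |op, arg|
    case op
    when :push then stack.push(arg)
    when :add
      b = stack.pop
      a = stack.pop
      stack.push(a + b)
    end
  end
  stack.pop
end

ast = { op: :add, l: { op: :lit, val: 2 }, r: { op: :lit, val: 40 } }
eval_ast(ast)                                   # => 42
run_bytecode([[:push, 2], [:push, 40], [:add]]) # => 42
```

The richer each bytecode is (a whole PRINT statement rather than a single integer add), the smaller the share of time spent in the dispatch loop itself, which is the point made above.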

                                                                              1. 1

                                                                                If you want a computer to last 50 years, it needs to be made of bits, not atoms. Change out the atoms as they wear out / become too much trouble.

                                                                                For most people, the answer is just probably Pharo / Squeak / something very similar.

                                                                                1. 6

                                                                                  Note to self: Never, ever, go into hardware unless as an employee.

                                                                                  1. 3

                                                                                    I think I agreed with every single one of these except the pro-mobbing take.

                                                                                    Sophisticated DSLs with special syntax are probably a dead-end. Ruby and Scala both leaned hard into this and neither got it to catch on.

                                                                                    As someone who’s worked with ruby more than 10 years, couldn’t agree more here. rspec is the conspicuous remnant of this delirium and is an unequivocal mistake.

                                                                                    1. 3

                                                                                      Anecdata: Every time I see something that completely abuses Javascript in a way that breaks catastrophically when you drift outside of “blog engine demo” territory, it’s always somehow descended from rspec and/or cucumber.

                                                                                      1. 2

                                                                                        As someone who’s worked with ruby more than 10 years, couldn’t agree more here. rspec is the conspicuous remnant of this delirium and is an unequivocal mistake.

                                                                                        Could you elaborate on this?

                                                                                        I haven’t used rspec, but I’ve used mocha in JS, which I think was inspired by rspec. In Mocha we write a lot of statements like expect(someValue).to.have.keys(['a', 'b']). I don’t love the Mocha syntax, but it does produce quite nice, human-readable error output. I guess it could be easily reduced to expect(someValue).hasKeys(['a', 'b']).

                                                                                        1. 4

                                                                                          Happy RSpec user here and definitely going to continue using it in future. Not sure why some people keep repeating that DSLs haven’t caught on, especially in Ruby. It’s the least convincing argument as to why it’s worse than something else. ActiveRecord, the ORM in Rails, is nothing but a DSL to model relationships and 1000s of companies use it to build successful businesses. The proof is in the pudding. Is it perfect, certainly not. Does it require reading (and potentially re-reading) the docs, sure does. Is there a learning curve to become proficient and does it require experience to know when to use what or stay away from it, most definitely like with all things high level.

RSpec is most likely the most successful DSL, at least judging by the download/deployment numbers; see https://rubygems.org/gems/rspec (vs. https://rubygems.org/gems/minitest or https://rubygems.org/gems/activerecord, for instance).

                                                                                          1. 4

                                                                                            The problem I’ve encountered in most of these DSLs (I played a bit with Mocha many years ago, but have the most experience with SBT/Scala, Chef, and bizzaro DSLs atop Tcl invented in hardware land) is a combination of:

1. Poor documentation for the unhappy paths: The happy path is easy, but the moment you need to do something off the beaten path, you’ll find sharp edges in the DSL and lacking documentation. This also makes teaching other engineers about a DSL difficult. In my experience we taught SBT mostly by having experienced engineers pair with newer engineers to teach them about the DSL. This adds a learning overhead to DSLs that just isn’t there for general purpose programming languages.

                                                                                            2. Bad error messages: Again, most DSLs are optimized for the happy path. Many of these DSLs don’t really chain errors together very well. When you give these DSLs something they don’t expect, they rarely output any sensible error output to work with.

                                                                                            3. Few escape hatches: DSL authors (looking at you SBT) really like to, understandably, constrain what you can do in the DSL. That’s great until it’s not. Most DSLs don’t offer you a good way to break out of their assumptions and don’t give you a good way to interact with their cloistered world.

                                                                                            1. 2

                                                                                              I could write about this at length but I’ll try to be brief.

                                                                                              Rspec re-invents fundamental ruby language concepts for sharing and organizing code, with no benefit other than making your tests “read more English,” which can seem cool to beginners (and did to me at one point) but is purely cosmetic and superfluous. Examples of this language re-invention include shared_examples/it_behaves_like, let statements, and proliferating “helper” files.

                                                                                              To use rspec well, you need to learn a whole new language and set of best practices. And every new member of your team does too. I mean, there are whole books on it. Testing frameworks should be simple, not book-worthy. And there is nothing special about testing that warrants this. If you invest time becoming a ruby expert, you should be able to use your ruby expertise to write good tests. You should be able to use normal language constructs to share code in tests.

                                                                                              This is an old debate, and DHH was complaining about it years ago.

                                                                                              With that said, I don’t mind the expect DSL for assertions, and I like “it” blocks as a way of defining tests. But both of these, while technically DSLs, are small, focused, simple constructs that can be learned in five minutes and probably grokked without even reading the docs. Minitest is essentially just these parts of rspec, with the expect assertions optional, and that’s what I’d recommend for testing in ruby.
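For concreteness, here is a minimal sketch of that recommendation (the Stack class is invented purely for illustration): where RSpec would reach for `let` and nested `describe` blocks, Minitest is just ordinary Ruby classes and methods.

```ruby
require "minitest/autorun"

# Hypothetical class under test, invented for this example.
class Stack
  def initialize = @items = []
  def push(x)    = @items << x
  def empty?     = @items.empty?
end

# The RSpec version would look roughly like:
#
#   describe Stack do
#     let(:stack) { Stack.new }
#     it("starts empty") { expect(stack).to be_empty }
#   end
#
# The Minitest version uses plain language constructs for the same job:
class TestStack < Minitest::Test
  def setup              # an ordinary method instead of `let`
    @stack = Stack.new
  end

  def test_starts_empty
    assert_empty @stack
  end

  def test_not_empty_after_push
    @stack.push(1)
    refute_empty @stack
  end
end
```

Sharing setup between test classes is then just Ruby modules and inheritance, rather than framework-specific constructs like `shared_examples`.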

                                                                                              1. 1

Since “read like English” is the thing RSpec optimized for, it is incoherent to dismiss it. Programming languages should be easy to read and easy to write, but readability and writability are in tension. RSpec is the result of optimizing readability to the extreme, such that you don’t need to know RSpec to read it. Compare: you do need to know Ruby to read it.

When is “read like English” useful and not cosmetic? When it needs to be read by someone who doesn’t know how to program. If everyone reading your RSpec tests knows how to program, sure, you probably shouldn’t use RSpec: as you said, it duplicates Ruby for not much benefit. The key idea is that learning RSpec when you already know Ruby is many times easier than learning Ruby when you don’t know how to program.

                                                                                                1. 3

                                                                                                  Since “read like English” is the thing RSpec optimized for, it is incoherent to dismiss it.

                                                                                                  It’s not incoherent in the least. It is, and always has been, a silly and misguided goal. I say this as someone who has read the rspec book, who knows it and the philosophy behind it well, and at one time believed the hype.

                                                                                                  Programming languages should be easy to read and easy to write, but readability and writability is in tension. RSpec is the result of optimizing readability to extreme such that you don’t need to know RSpec to read it. Compare: you do need to know Ruby to read it.

I’m sorry, but this is pure nonsense. Rspec is not more readable than ruby, and the idea that non-programming “stakeholders” will be more likely to read and participate in the design process if you use rspec or cucumber is a pipe dream. I’ve never seen it happen, not once. And in the rare instance that one of these stakeholders was going to do this, they would find it no more difficult to understand minitest or test/unit. The difference in friction would be negligible, and the skill and clarity of the programmer writing the tests would be overwhelmingly more important than the test framework.

                                                                                                  1. 2

                                                                                                    The idea that non-programming “stakeholders” will be more likely to read and participate in the design process if you use rspec or cucumber is a pipe dream. I’ve never seen it happen, not once.

                                                                                                    I have seen it happen, but if you haven’t, I 100% agree RSpec has been entirely useless for you.

                                                                                                    1. 1

                                                                                                      https://www.stevenrbaker.com/tech/history-of-rspec.html gives a better description of the original goal.

                                                                                            1. 7

                                                                                              Any technical interview is going to be gameable and exclusionary to some underrepresented group/background. It’s very possible that our current dysfunctional interviewing practices are still, somehow, close to optimal.

                                                                                              Something can’t both be an “uncomfortable truth” and the status quo belief. I have a hard time reading this as anything other than an attempt to make a deferral of any responsibility to improve the situation sound like some sort of hard-won nugget of wisdom.

                                                                                              1. 8

                                                                                                As I understood it, the uncomfortable truth Hwayne is claiming is that the status quo (which nobody really likes) may be close to optimal.

                                                                                                1. 2

                                                                                                  This isn’t a status quo belief; this is an area of active political debate. The correct way for institutions to react to various demographic groups being underrepresented (in all aspects of public life, not just programming job interviews) is one of the most contentious political issues in the anglophone world at the moment. People disagree vehemently about what changes would constitute “improving the situation”, in a way that affects (and should affect) which politicians they vote for or donate money to.

                                                                                                  1. 1

                                                                                                    I think this is selection bias, because it seems to me like the debate is occurring between a minority of people who actually care, and the remainder, if pressed, would basically shrug and reiterate exactly the “uncomfortable truth” stated here.

                                                                                                  2. 1

                                                                                                    Is the status-quo belief that it’s impossible to create a technical interview that is not “going to be gameable and exclusionary to some underrepresented group/background”?

                                                                                                    I suspect quite a few people running technical interviews aren’t thinking about this at all, or are making some effort to reduce obvious (minimum, lawsuitable?) biases, and then thinking that they’ve eliminated bias.

                                                                                                    I’m not sure I agree with the second part. I don’t see how you can say that current interview processes are both “dysfunctional” and “very possibly … still, somehow, close to optimal”.

                                                                                                  1. 2

                                                                                                    The ideal message delivery is a fiction, and building software that requires it is negligent.

                                                                                                    1. 1

                                                                                                      What do you mean by “ideal message delivery”? Uniform atomic broadcast? If so, is every system built around Paxos or Raft “negligent”?

                                                                                                    1. 2

                                                                                                      The awful “jittering” of everything in PS1 has nothing to do with “subpixel rendering” it’s because textures aren’t perspective corrected, and it’s awful. Does anybody actually think it’s a desirable target to recreate?

                                                                                                      1. 9

                                                                                                        Well… there’s “technically desirable” and “artistically desirable”.

                                                                                                        When the PS1 came out this was more or less state of the art in terms of real time 3D rendering for consumer devices and everyone with stakes in the game could agree that it was awful and that the next generation of real time 3D renderers should not suffer from this problem. It’s definitely not technically desirable, as in, if what you’re looking for is to render the most life-like images you can, it’s a really bad idea.

                                                                                                        However, if what you’re after is to capture that feeling of playing PS1 at a friend’s house after school, or to paint contemporary themes with 1994 brushes, or anything of the sort, then it absolutely is a desirable target to recreate. Nostalgia is the most obvious reason but it’s not just nostalgia – we enjoy, say, medieval art, and modern drawings in medieval style, (hopefully) without being nostalgic for the pre-antibiotic times of warring petty barons and serfdom.

                                                                                                        (Full disclosure: some of the retro stuff I write is commercial, so there’s always the risk that I’m deliberately bending facts so as not to be out of a job ;-) ).

                                                                                                        1. 2

                                                                                                          It gives some of us a nice nostalgic feeling

                                                                                                          (He addresses texture mapping a bit further down)

                                                                                                          1. 1

                                                                                                            the game Ultrakill replicates it and it looks alright. I’ve also seen many horror games replicate it because of the weird, praecox feeling you get when textures start moving and wobbling on you.

                                                                                                          1. 22

                                                                                                            Help me. I am not a Ruby person and I probably never will be. I just cannot figure out what Hotwire is. I have read this post, I have read the Hotwire homepage, I have googled it, I cannot for the life of me figure out what it actually is.

                                                                                                            I keep reading “HTML over the Wire” but that is how normal websites work. What is different?

                                                                                                            1. 45

                                                                                                              you know how HTML is usually transferred over HTTP? well, Hotwire just transfers that same HTML over a different protocol named WebSockets.

                                                                                              that’s it. that’s the difference.

                                                                                                              1. 9

                                                                                                                Thanks. Your explanation saves me countless hours.

                                                                                                                1. 9

                                                                                                                  what on earth

                                                                                                                  1. 28

                                                                                                                    For others who may be confused: This is dynamic HTML over web sockets. The idea is that the client and server are part of a single application which cooperate. Clients request chunks of HTML and use a small amount of (usually framework-provided) JS to swap it into the DOM, instead of requesting JSON data and re-rendering templates client-side. From the perspective of Rails, the advantage of this is that you don’t need to define an API – you can simply use Ruby/ActiveRecord to fetch and render data directly as you’d do for a non-interactive page. The disadvantage is that it’s less natural to express optimistic or client-side-only behaviors.
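As a framework-free sketch of that flow (this is not the actual Hotwire/Rails API; the template and function names are invented for illustration): the server renders an HTML fragment for one region of the page directly from the data, and the client swaps it into the DOM verbatim.

```ruby
require "erb"

# Server side: render a fragment of HTML for one region of the page,
# directly from the data, instead of serializing the data to JSON.
MESSAGE_TEMPLATE = ERB.new(<<~HTML)
  <div id="message_<%= id %>">
    <p><%= text %></p>
  </div>
HTML

def render_message(id:, text:)
  MESSAGE_TEMPLATE.result_with_hash(id: id, text: text)
end

fragment = render_message(id: 1, text: "Hello")

# Client side: a small, framework-provided piece of JS receives the
# fragment (over HTTP or a WebSocket) and swaps it in, roughly:
#
#   document.getElementById("message_1").outerHTML = fragment
```

The point of the pattern is that the `(data) -> HTML` step lives only on the server, so there is no client-side template or API layer to keep in sync with it.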

                                                                                                                    1. 1

                                                                                                                      Ah. That … sort of makes sense, honestly?

                                                                                                                      1. 7

                                                                                                                        it’s not really a new idea, they stole it from phoenix liveview. https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html

                                                                                                                        1. 6

                                                                                                                          … which I guess in turn is the spiritual successor to TurboLinks

                                                                                                                          1. 5

                                                                                                                            It’s worth noting that idea isn’t new with Phoenix. Smalltalk’s Seaside web framework had this back in 2005 or so (powered by Scriptaculous), and WebObjects was heading down that path before Apple killed it.

                                                                                                                            Phoenix LiveView looks great, and is likely the most polished version of the concept I’ve seen, don’t get me wrong. But I don’t think DHH is “stealing” it from them, either.

                                                                                                                            1. 5

                                                                                                                              There’s a lot of implementations of it, here’s a good list: https://github.com/dbohdan/liveviews

                                                                                                                              1. 4

                                                                                                Not surprising, given that Phoenix is designed by a prolific Rails contributor. There’s a healthy exchange.

                                                                                                                                https://contributors.rubyonrails.org/contributors/jose-valim/commits

                                                                                                                                1. 5

                                                                                                                                  Moreover, DHH has been experimenting with these techniques since roughly the same time that Elixir (not even Phoenix) first appeared: https://signalvnoise.com/posts/3697-server-generated-javascript-responses

                                                                                                                              2. 1

                                                                                                                                In addition to brandonbloom’s excellent points, I personally liken it to the Apple/Oxide push for hardware and software being signed together. This type of tech makes it much easier to keep frontend and backend designs coherent. It is technically possible to do with SPAs and the APIs they rely on but the SPA tech makes it too easy (at an organizational level) to lose track of the value of joint design. This tech lowers the cost of that joint design and adds friction to letting business processes throw dev teams in different directions.

                                                                                                                          2. 4

                                                                                            Right. These days, a lot of normal websites transfer JSON over WebSockets and piece the HTML together on the client side with JavaScript, and Hotwire is a reaction against that, bringing it back to transferring HTML.

                                                                                                                            1. 3

                                                                                                                              Really? How is that the answer to all of life’s problems, the way DHH is carrying on?

                                                                                                                              1. 10

                                                                                                                                because now your (data) -> html transformations all exist in one place, so you don’t have a server-side templating language and a client-side templating language that you then have to unify. also the performance of client-side rendering differs more substantially between devices, but whether that’s a concern depends on your project and its audience. just different strategies with different tradeoffs.

                                                                                                                                1. 4

                                                                                                                                  Yes, these are good points; another is that you have basically no client-side state to keep track of, which seems to be the thing people have the most trouble with in SPAs.

                                                                                                                                  1. 1

                                                                                                                                    Great for mail clients and web stores, the worst for browser games.

                                                                                                                                    1. 1

                                                                                                                      depends on the game. A game that runs entirely in the browser makes no sense as a Hotwire candidate. But for a game that stores its state in the server’s memory, it’s probably fine. Multiplayer games can’t trust the state in the client anyway.

                                                                                                                                      1. 1

                                                                                                                        Unless you genuinely need offline behavior, or are actually building a browser-based application (e.g., a game, or a photo editor, etc.), something like Hotwire/Liveview makes a great deal of sense.

                                                                                                                                        At least until you get to a certain scale, at which point you probably don’t want to maintain a websocket if you can help it, and if you are, it’s probably specialized to notifications. By that time, you can also afford the headcount to maintain all of that. :)

                                                                                                                            1. 6

                                                                                                                              I wonder if people realise that writing a diatribe in which you “cleverly” come up with “proof” that every bad experience somebody has is “akshoolee really the user’s fault”… doesn’t improve your software?

                                                                                                                              1. 1

                                                                                                                                Generally no. But we are bitten too often by dumb users and problems that need education to fix. And, ultimately, at some point, you need to trust the user.

                                                                                                                                It’s hard when you need to make it easier for a user, and often comes at the expense of power or elegance; the twin gods of application design don’t like being slighted.

                                                                                                                                1. 1

                                                                                                                                  […] mak[ing] it easier for a user […] often comes at the expense of power or elegance

                                                                                                                                  I, too, had assumed that; but through watching this saga, I am quickly learning to question that assumption. In my opinion, good design hides rare or dangerous functionality, while keeping it available and discoverable for those who truly need it.

                                                                                                                              1. 1

                                                                                                                                I have NFI what on earth this is? The website is even less useful.