1. 5

    I’ve been hugely impressed by everyone I’ve met from the Canadian Digital Service, so I’m glad they’re also gracious in handling down-the-dependency-chain bugs like this. I’ve found them to be very dedicated people who care a lot about the work.

    Is there really no OS-level service for “reachability” that react-native-netinfo could have polled instead? I guess on Android it would probably just go to Google anyway.

    1. 2

      Oh wow, this is great. Has anyone made one recently?

      1. 2

        I’ve never heard of it before this article, but I’d sure like to build one. A quick google showed these projects, and there are probably lots more:

        • mikroGalaksija, an FPGA re-implementation;
        • lots of schematics posted here, including a CMOS-happy variant, though it looks like the author hasn’t finished building theirs yet.

        Seems like the original design might be picky about using a modern CMOS Z80 (maybe the clock signal is a little dirty?) as opposed to one of the older NMOS parts, but you can still find those.

      1. 2

        This is super cool. For a future crazy spare-time project, I was thinking of doing a Forth cartridge for the Famicom with a REPL using the BASIC keyboard. Of course, this is a much more practical thing.

        1. 6

          Does anyone know of any good resources for creating a custom chip? I’ve dabbled in FPGAs and VHDL before, but I don’t really understand how you go from that to a chip you can actually have fabricated.

          1. 2

            Same here. I would like to replicate the function of some old 80s custom chips, but I’m not sure where to start other than “VHDL.” It seems like the OpenROAD project is named as a major EDA component of this initiative, but I’m also unclear on how I’d use it.

            Lots of reading ahead!

          1. 3

            How small is the irreparable damage? Is there a picture of it?

            1. 22

              I’m fairly sure the fuses are part of the CPU die, so they’re only a few microns in size.

              1. 9

                @dstaley is right: it’s just a small extra metal trace somewhere inside the die. Like any other fuse, you put a high enough voltage across it and it pops. The CPU can then check continuity with a lower voltage to see whether it has been blown.

                This has some die photos of one example: https://archive.eetasia.com/www.eetasia.com/ART_8800717286_499485_TA_9b84ce1d_2.HTM

                1. 7

                  Like others have said, these fuses are on the CPU die itself. Fuses like this are actually quite common on microcontrollers for changing various settings, or for locking the controller so it can’t be reprogrammed after it’s received its final production programming.

                  1. 6

                    The Xbox360 also did something similar with its own “e-fuses.” I assume it’s standard practice now.

                    1. 4

                      Yup, it’s entirely standard for any hardware root of trust. There are a couple of things that they’re commonly used for:

                      First, per-device secrets or unique IDs. Anything supporting remote attestation needs some unique per-device identifier. This can be fed (usually combined with some other things) into a key-derivation function to generate a public/private key pair, giving a remote party a way of establishing an end-to-end secure path with the trusted environment; a small sketch of the derivation step follows the list below. This is a massive oversimplification of how, for example, you can spin up a cloud VM with SGX support and communicate with the SGX enclave without the cloud provider being able to see your data (the most recent vulnerability allowed the key that is used to sign the public key, along with the attestation, to be compromised). There are basically two ways of implementing this kind of secret:

                      1. PUFs. Physically Unclonable Functions are designs that take some input (it can be a single bit) and produce an output that is stable for a given device but depends on physical details finer than the manufacturing tolerances of a particular process. The idea is that two chips made from exactly the same masks will generate different outputs. PUFs are a really cool idea and an active research area, but they’re currently quite big (expensive) and not very reliable (so you need a larger area and some error correction to get a stable result out of them).
                      2. Fuses. Any hardware root of trust will have a cryptographic entropy source. On first boot, you read from this, filter it through something like Fortuna (possibly implemented in hardware) to get some strong random numbers, and then burn something like a 128- or 256-bit ID into fuses, typically with some error correction.
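                      To make the fuse-based option concrete, here’s a minimal sketch in Haskell of the derivation step, using the cryptonite package. The function name, the label, and the choice of HMAC as the KDF are my own illustration, not a description of any particular device; real hardware would feed the derived bytes into asymmetric key generation rather than using them directly:

                      ```haskell
                      import Crypto.Hash.Algorithms (SHA256)
                      import Crypto.MAC.HMAC (HMAC, hmac)
                      import Data.ByteString (ByteString)

                      -- 'deviceSecret' stands in for the 128/256-bit ID burned into fuses.
                      -- The label gives domain separation, so one secret can feed several
                      -- unrelated keys without them being linkable to each other.
                      deriveKeyMaterial :: ByteString -> ByteString -> HMAC SHA256
                      deriveKeyMaterial deviceSecret label = hmac deviceSecret label
                      ```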

                      The MAC address (as @Thra11 pointed out) is a simple case of needing a unique identifier.

                      The second use is monotonic counters for roll-back protection. A secure boot chain works (again, huge oversimplifications follow) by having something tiny that’s trusted, which checks the signature of the second-stage boot loader and then loads it. The second stage checks the signature of the third stage, and so on. Each stage appends the values it produces to a hash accumulator, so you may end up with hash(hash(first stage) + hash(second stage) + hash(third stage) …), where hash(first stage) is computed in hardware and everything else is in software (and where each hash function may be different).

                      You can read the partial value (or, sometimes, use a key derived from it but not actually read the value) at any point, so at the end of second-stage boot you can read hash(hash(first stage) + hash(second stage)) and can then use that in any other crypto function, for example by shipping the third-stage boot image with its decryption key or signature encrypted with a key derived from the hashes of all of the allowed first- and second-stage boot chains. You can also then use it in remote attestation, to prove that you are running a particular version of the software.
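                      Here’s a toy version of that accumulator in Haskell (again with cryptonite), assuming SHA-256 at every stage even though, as noted above, each stage’s hash function may differ in practice. It’s a sketch of the idea, not any particular vendor’s scheme:

                      ```haskell
                      import Crypto.Hash (Digest, SHA256, hash)
                      import qualified Data.ByteArray as BA
                      import qualified Data.ByteString as BS

                      sha256 :: BS.ByteString -> BS.ByteString
                      sha256 = BA.convert . (hash :: BS.ByteString -> Digest SHA256)

                      -- One measurement step: acc' = hash(acc ++ hash(stage)).
                      -- In real hardware the first step is computed by the silicon itself.
                      extend :: BS.ByteString -> BS.ByteString -> BS.ByteString
                      extend acc stage = sha256 (acc <> sha256 stage)

                      -- Accumulated measurement over an ordered boot chain.
                      measure :: [BS.ByteString] -> BS.ByteString
                      measure = foldl extend BS.empty
                      ```

                      Running `measure` over just the first two stages gives exactly the partial value described above.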

                      All of this relies on inductive security proofs. The second stage is trusted because the first stage is attested (you know exactly what it was) and you trust the attested version. If someone finds a vulnerability in version N, you want to ensure that someone who has updated to version N+1 can never be tricked into installing version N.

                      Typically, the first stage is a tiny hardware state machine that checks the signature and version of a small second stage that is software. The second-stage software can have access to a little bit of flash (or other EEPROM) to store the minimum trusted version of the third-stage thing, so if you find a vulnerability in the third-stage thing but someone has already updated with an image that bumped the minimum-trusted-third-stage-thing version, then the second-stage loader will refuse to load an earlier version. But what happens if there’s a vulnerability in the second-stage loader? It is typically very small and carefully audited, so it shouldn’t be invalidated very often (you don’t need to prevent people from rolling back to less feature-full versions, only insecure ones, so you typically have a security version number that is distinct from the real version number and bump it infrequently). Typically, the first-stage (hardware) loader keeps a unary counter in fuses so that the second stage can’t possibly be rolled back.
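                      The anti-rollback check itself is almost embarrassingly simple, which is the point: the hardware only has to count blown fuses. A sketch (the fuse-bank width and the names are invented for illustration):

                      ```haskell
                      import Data.Bits (popCount)
                      import Data.Word (Word64)

                      -- The security version is the number of fuse bits ever blown.
                      -- Fuses only transition one way, so the counter can never decrease.
                      securityVersion :: Word64 -> Int
                      securityVersion fuseBank = popCount fuseBank

                      -- First-stage check: refuse any second-stage image that claims a
                      -- security version older than the fuse counter.
                      acceptSecondStage :: Word64 -> Int -> Bool
                      acceptSecondStage fuseBank imageVersion =
                        imageVersion >= securityVersion fuseBank
                      ```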

                      1. 1

                        (You likely know this, but just in case:)

                        What you describe above is a strong PUF; weak PUFs (which take no input) also exist, and, in particular, SRAM PUFs (which you can get from e.g. IntrinsicID) are pretty reliable.

                        (But indeed, lots of PUFs are research vehicles only.)

                    2. 4

                      Examples of fuses I’ve seen used in i.MX6 SoCs include setting the boot device (which, assuming it’s fused to boot from SPI or onboard MMC, effectively locks out anyone trying to boot from USB or SD card) and setting the MAC address.

                  1. 2

                    It’s awesome to see the community is alive like this. I need to get my hands on an X68000.

                    1. 3

                      Aim for a desktop-style (“Pro”) model over the twin-tower machines; battery leakage is less dramatic when it happens (and it has happened), and most of the parts (e.g. floppy drives) are easier to service. I have a twin-tower ACE that is slowly being rewired by hand to bypass all the corroded traces on the I/O board.

                    1. 6

                      I have a Commodore 64 that’s in perfectly working order that I’ve been planning on gutting and stuffing a Raspberry Pi into… I had no idea they were worth anything now.

                      1. 6

                        Sell the board, or at least part out the chips! SIDs & VICs are getting scarce.

                        1. 2

                          I’m surprised people haven’t designed open-source compatible replacements. There are a ton of custom parts in the enthusiast space… or at least that’s the impression I get from 8-Bit Guy, LGR, and visiting local retro shows.

                          1. 3

                            There are a lot of different aftermarket PLAs (as there are a lot of different types of PLA in use, some not compatible with others). The SID seems very hard to replicate, since part of its unique sound is (I’ve heard) tied to the now-obsolete fabrication process.

                        2. 4

                          My God, your comment reminded me of this relic from 2004.

                          1. 2

                            I thought about selling it, but my parents would be upset with me. After all, this was a very expensive gift and it meant a lot to them to give to me.

                            hehe

                            1. 1

                              To be fair, I have none of the accessories (including cartridges), and my intent was to run a C64 emulator on boot to get most of the same experience but with modern ports. The keyboard is garbage, and the C64 was discontinued 3 years before I was even born, so I don’t have any sense of nostalgia for it. I may be more inclined to sell it to someone who cares more about it though.

                            2. 2

                              Personally, as someone who lived through the 80s with no computer until I was old enough to have a job and make enough money to buy one myself in the 90s, I’d love a C64 to experience and learn about some of the software and games of the era, and I think many people are in a similar boat.

                              And those devices are, with some time, actually fully understandable. So I think there is some demand just to learn computer-architecture basics, even if there have been 40 years of innovation since then.

                            1. 2

                              I have an external screen that has a similar issue, but it’s even worse: when I have too much of a certain color on screen, the screen shuts down and has to be powered off for quite a while (an hour?) before it starts working again.

                              1. 1

                                That’s interesting! How close can you get to that colour? Does it have to be exact?

                                1. 1

                                    Given the consequences, I haven’t experimented a lot, unfortunately!

                                  1. 1

                                    Once you get tired of that monitor, you’ll have to sacrifice it for science. Lots of cool questions for something that breaks in such a funky way.

                                    e.g. Does it work if you pull the power cord and plug it back in? Is the monitor getting kinda old and maybe has dying components inside?

                                    1. 1

                                        Pulling the power cord didn’t help, IIRC. Something likely overheated, since it came back to life after a while. This happened when the screen was fairly new, and I’ve had it for a few years since the incident, so it’s probably some bug and not faulty components 😊

                                      1. 1

                                        Bad solder joint somewhere in the driver circuit for that colour?

                              1. 3

                                  This is really cool! I actually have one of these Epiphan VGA grabbers, and it’s never quite worked perfectly for capturing the weirdo 640x200 interlaced video coming out of a PC-88. Maybe now I can just patch it :)

                                1. 3

                                    Very cool. My own keymaps for key converters are all hugely long code-generated switch statements sourced from a DSL, but I definitely reached for a C macro first, until I started to make mistakes (double-mapping keys, not mapping others, etc.). A Rust macro feels like it could be the perfect in-between.

                                  1. 10

                                    The second FPGA (reportedly a Cyclone 10) is pretty exciting to me. It could be a fun entry point into learning about FPGA development since you already have all these peripherals attached to it. Hopefully someone starts porting some of the MISTer cores to add more systems.

                                    1. 3

                                        I’ve used Elm professionally for about a year and a half, maybe longer, and we’ve had more or less the exact same experience. I also recently lived in Norway, and have used Vy (before its name change).

                                      It has made me more curious about more ‘advanced’ functional languages like PureScript, and I wish there was a good comparison of Elm to it (and also other languages such as ReasonML).

                                      1. 3

                                          I don’t use Elm very much, but I have used a good amount of PureScript (and TypeScript), and having simple JS interop is such a game changer. I really wish it could stick around. Elm works well for a lot of UI stuff, but it’s just annoying to have to “do stuff” when I have some existing JS.

                                          Though, kinda ironically, I think PureScript is a really good backend language. Effects systems are super valuable on server-side code but don’t actually tend to be that helpful in frontend code (beyond how they’re used for dictionaries).

                                        1. 2

                                            Are effect systems not useful for error management in frontend code?

                                          1. 1

                                            Effects systems are super valuable on server-side code but don’t actually tend to be that helpful in frontend code (beyond how they’re used for dictionaries).

                                            Mind elaborating on this?

                                            1. 1

                                                I wrote a thing about this a couple of years ago. Basically, the granular effects system of PureScript lets you track DB reading and DB writing separately, to establish stronger guarantees about what kind of IO is happening in a function:

                                              http://rtpg.co/2016/07/20/supercharged-types.html

                                              Some other examples over the years that I would find useful:

                                                • an effect like “DB access of unknown result size”. For example, a raw SQL query on a table without any sort of LIMIT could potentially return a lot of data at once, whereas in web requests you want consistent, fast replies (so you should opt for pagination instead)

                                                • an effect like “accesses multi-tenant data”. This will let you determine which parts of your system are scoped down to a single tenant and which parts are scoped to multi-tenant data

                                              • An effect like “makes IO requests to an external service”. You could use this to qualify your SLAs in certain parts of your system (your own system probably should be built for higher expected uptime than some random third party)

                                                • An effect like “locks this resource”. You can use this to make sure you unlock the resource at a later date. Normally this is accomplished through a simple bracket pattern, but with effects systems you can opt for different styles.

                                                Because the row polymorphism doesn’t force you to stack monads, you avoid the mess of writing up a bunch of transformers and unwrapping/rewrapping items to get at the innards.
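                                                For a rough flavor of what this buys you, here’s a sketch in Haskell using mtl-style capability classes instead of PureScript’s effect rows; all the class and function names are invented, and the string-built SQL is only to keep the sketch short:

                                                ```haskell
                                                type Row = [(String, String)]

                                                -- One capability class per effect we want to track.
                                                class Monad m => DbRead m where
                                                  runQuery :: String -> m [Row]

                                                class Monad m => DbWrite m where
                                                  runUpdate :: String -> m ()

                                                -- The signature alone proves this handler never writes:
                                                -- there is no DbWrite constraint, so no write can sneak in.
                                                fetchUser :: DbRead m => Int -> m [Row]
                                                fetchUser uid = runQuery ("SELECT * FROM users WHERE id = " <> show uid)

                                                -- This one needs both capabilities, and says so in its type.
                                                renameUser :: (DbRead m, DbWrite m) => Int -> String -> m ()
                                                renameUser uid name = do
                                                  _ <- runQuery ("SELECT name FROM users WHERE id = " <> show uid)
                                                  runUpdate ("UPDATE users SET name = '" <> name <> "' WHERE id = " <> show uid)
                                                ```

                                                The difference in PureScript is that the effect row composes structurally, so you get the same per-function effect inventory without declaring classes or stacking transformers.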

                                            2. 1

                                                Since we’re a startup, we had effectively zero existing code to worry about, which I guess made it easier for us in that regard.

                                              Do you use the more ‘complex’ functional concepts like type classes?

                                              I wonder if anyone would pick Purescript over Haskell for the backend.

                                            3. 2

                                              For us, at least, the relative simplicity of Elm was a big part of being able to move existing developers over to it.

                                                There were definitely times I missed metaprogramming or other fancy features, but I think a lot of hidden ‘magic’ is intimidating to newcomers (even when they’re coming from a JavaScript background, where cheaty magic is just how things get done).

                                                Our experience also aligned with this post. We haven’t yet fallen into the trap of trying to use elm-git-install for sub-dependencies (maintaining first-level dependencies is time-consuming enough), but it’ll probably happen sooner or later.

                                              1. 2

                                                You’re right about the simplicity making it easy for new developers to get started and feel confident with the language.

                                                  I personally feel like I’m now wanting more complex concepts (metaprogramming, type classes, etc.).

                                                  We haven’t reached that point yet either, but I could see it happening within a year.

                                                1. 2

                                                    I really did miss type classes. Partially specified types are an ‘OK’ workaround in some cases, but it still felt incomplete. Especially not being able to extend Num.

                                                  1. 1

                                                    What do you think about nikita-volkov/typeclasses? Useful or not really?

                                                    1. 2

                                                        I wasn’t aware of that package at the time (loved the book, BTW), but it looks like it might clean up some of the ugliness we had around non-primitive comparables. I’ll have to see if that works out when I’m next on that project. Thanks!

                                            1. 1

                                              Cute. I seem to recall I have some code for doing Toolbox alerts and menus in a Git repository somewhere, but I suspect the author has moved on in the past two years.

                                              1. 2

                                                You should write a blog entry of your own, so your knowledge can be preserved. I think most of this kind of thing is still trapped in period books.

                                              1. 8

                                                archive.org has a nice setup where you can upload your old HyperCard stacks and it will wrap them in an emulator (Mini vMac) prepared to run the stack in your web browser. I will have to go over my old floppies to see which of my old stacks is the least personally embarrassing and preserve them for history!

                                                https://archive.org/details/hypercardstacks

                                                Here’s a stack by Cyan which drops cool tips & tricks I wish I had known about back in the 90s: https://archive.org/details/hypercard_beyondhc

                                                1. 2

                                                  I’m looking forward to trying the PC-8801 port of this. It’s awesome that there is such a dedicated community on this project.

                                                  1. 2

                                                    That’s all true, and a little annoying, but the annoyances of shipping your own small-time web software aren’t that much smaller. You still need all of the server and SSL stuff. Fixing bugs will be much easier if you ship your logs to some service instead of just logging to the drive. Ditto if you set up a reasonably automated deployment system, so you can update to address, say, security issues without following a three-page, error-prone checklist. You’ll probably need to write a couple of semi-custom systemd units or something too. And secure, automated database backups. Some level of monitoring and auto-alerting. Keeping an eye out for security alerts in any of the software you use. Periodically checking for package updates.

                                                    And that’s all for a single server. Things get trickier yet if you need more than one.

                                                    1. 2

                                                      I think the real comparison should be with shipping desktop software on an operating system that uses a package manager, like most Linux distributions, or even macOS for the most part (Homebrew, MacPorts, or the App Store). You write the program and perhaps provide packages, or OS distributions package it themselves. Maybe you write some metadata for the “store” page, like in elementaryOS, but that’s all.

                                                      1. 2

                                                        Maybe it depends on experience and personal preference, but I’d say the “taxes” on web development are much worse than on desktop development. There’s server configuration, hosting, database configuration, a million flavor-of-the-week JavaScript frameworks to choose between, the server-side language/runtime, etc., and all of it has to be babysat to keep up to date with security patches and bug fixes. And the entire setup has to be duplicated in a testing environment and on each dev machine.

                                                        And desktop development isn’t even as bad as the article makes it out to be. Why write your own installer and logging library? Use existing libraries unless you have a good reason not to. And auto-update is a Windows problem: Linux package managers and Apple’s App Store mostly solve it.

                                                        That does leave the product website, but that’s kind of open-ended and depends on the situation. It could be 15 minutes of writing some HTML and taking a few screenshots, or, in a big organization, a months-long project designing a fancy web portal and integrating it into an existing site.

                                                        1. 5

                                                          Writing a desktop app for a single OS/arch is dead easy, even for a web guy like me. VB6 came out in 1998 and is still a better development environment than exists for the web today.

                                                          The ‘tax’ on desktop development is getting people to use the damn thing (see Kalzumeus for stats on that), supporting multiple platforms, and syncing user data across their multiple devices.

                                                          1. 2

                                                            VB6 came out in 1998 and is still a better development environment than exists for the web today.

                                                            All while programmers condemn RAD tools, and glorify doing the shit work by hand. It’s embarrassing that you need to get a real programmer for simple applications (CRUD, formulae, API callers, whatever).

                                                            1. 1

                                                              It’s a shame that Yahoo Pipes didn’t survive. I saw a lot of really interesting projects that were effectively huge chunks of APIs glued together with it; it feels like an idea that was ahead of its time.

                                                            2. 1

                                                              I don’t know how much stock to put in the Bingo Card Creator example. The UI works well as a website, and though I don’t know his market, I feel like generating bingo cards might be a one-off thing for many people; I can understand them being hesitant to install and buy a desktop app for something they might not do again. And his app used Java, which meant downloading the JRE from Oracle (or Sun back then), and that obviously complicates things a lot.

                                                              Syncing user data should be the user’s responsibility, IMO. There are plenty of options (Dropbox, thumb drives, network drives) for them to do it themselves if they want to. The app should make it easy, but shouldn’t be the one doing it.

                                                              Supporting multiple platforms isn’t terrible, and it’s not unique to desktop apps (e.g. Chrome vs. Firefox vs. Safari vs. mobile browsers). Qt, FLTK, and wxWidgets work great across platforms, have bindings for a bunch of languages, and can be bundled with the app in most cases.

                                                              1. 2

                                                                That’s a perfectly reasonable approach to sync for some apps; for others it’s entirely wrong.

                                                                For instance, I have no use for a todo list or note-taking that doesn’t sync automatically across devices.

                                                        1. 7

                                                          This is likely the last CPU Architecture to be added to mainline Linux.

                                                          Why?

                                                          1. 8

                                                              It’s a statement made by Linus Torvalds, referring to how, since RISC-V is very open, it’s likely everyone will prefer using it instead of making their own thing.

                                                              1. 1

                                                                A world in which we can’t keep reinventing the wheel would be a sad one indeed.

                                                                1. 2

                                                                    Nothing is stopping you from reinventing any wheels here.

                                                                    This is just a statement about the economics of designing and taping out your own new CPU architecture, producing documentation, compilers, and all the OS and tooling support… does doing all of that help you widen your moat?

                                                                  Seems the current mix of ARM/MIPS/RISC-V quality and licensing options makes doing all the above a needless resource sink and distraction for companies of all shapes and sizes.

                                                                  For those hobbyists still wanting to dabble in this, fortunately in recent years, FPGA kit has come to a price point that is within reach of most of our wallets.

                                                              2. 3

                                                                  You got this dead on! There are several reasons, but the main one is that it’s just not worth creating a new ISA for every application; ARM and RISC-V can cover the majority of them.

                                                                  C-SKY was partially created because the Chinese wanted their own architecture.

                                                            1. 8

                                                              It’s really satisfying to fix up old, broken code and get it running again, especially when the results are as visible as a game.

                                                              1. 1

                                                                Totally! A while back I ported BSD rain to Linux (original source is here). I was surprised my distro didn’t have it. While it wasn’t broken (it obviously compiled on NetBSD), it was nice to have an old friend back.

                                                              1. 3

                                                                  This makes me think of an alternate reality where Japan has a significant chip designer like Intel or AMD. Does anyone know why Japan didn’t end up with a company like that? I do know Sony developed the PS3’s Cell chip, but that was in partnership with IBM.

                                                                1. 4

                                                                  The various Renesas chipsets (SH4, etc) are in tons of embedded systems. You probably have a couple in your car.

                                                                  1. 3

                                                                      They also powered the later Sega consoles (specifically the 32X, Saturn, and Dreamcast). They’re really nice chips, and there are now open-source clones that I’ve heard positive things about.

                                                                  2. 3

                                                                    There actually were several, but they focused more on the microcontroller and embedded markets than on the high end.

                                                                    NEC was making 8088 clones, which they followed with a 32-bit architecture. They even launched the PC Engine to compete with Nintendo (successfully in Japan, though it flopped when brought to the US as the TurboGrafx-16).

                                                                    Hitachi was a second-source manufacturer of the 68000 and others for a long time. They had the H8 family, and as barbeque mentioned, the SuperH family.

                                                                    It should be noted that Renesas owns most of this now.

                                                                    1. 1

                                                                      Does SoftBank acquiring ARM count?

                                                                    1. 20

                                                                      I implemented a system that used MongoDB in 2011. The author of this piece didn’t hit on the main reason we used Mongo: automatic failover.

                                                                      I didn’t need high write performance or big horizontal scalability. Easy schema modifications turned out to be more pain than they were worth, but that wasn’t the main reason.

                                                                      The main reason was that I was running ops by myself at our startup; my real job was backend development. At the time, MongoDB was the only (“free”) system we evaluated that would automatically fail over while I was sleeping or busy working on the actual product. Elections were great (provided the quorum was set up properly). It was like RAID for databases. RDS wasn’t ready for us yet (and, especially, nobody was offering managed Postgres with failover at a reasonable price back then). We didn’t strictly need high availability, but it sure was nice to nearly never have to step away from Real Work™ to tend to a broken-on-the-ops-side DB (I spent that time fixing yellow Elasticsearch clusters instead).

                                                                      I think it was the right choice at the time, yes. But times have changed. New [small] stuff goes into managed RDS, DynamoDB, or gets set up so a DBA team can take it over (usually after migrating out of the RDS we used for development, if that’s what they want).

                                                                      1. 8

                                                                        Could you please go into a little more detail about why you had such frequent database failures? Always curious to hear war stories like this so I can maybe identify a failure in the future and look super smart :)

                                                                        1. 6

                                                                          We didn’t have frequent MongoDB database failures. I suppose it’s a little like that old Head & Shoulders commercial… “Why are you using H&S? You don’t have dandruff!” (Causality is hard (-: ).

                                                                          I have a general policy of building everything to be HA, even when it’s not strictly needed by the business side, as long as they’re on board. Once I shifted trust to the cloud (AWS in this case) instead of building out redundant pipes to our own DCs, this became cheap insurance once it was set up.

                                                                          The frequent Elasticsearch failures were due to lots of things back then. ES has matured, but I still pay Amazon to run it when I can.