1. 3

    This is really cool! I actually have one of these Epiphan VGA-grabbers and it’s never quite worked perfectly for me to capture the weirdo 640x200 interlaced video coming out of a PC-88. Maybe now I can just patch it :)

    1. 3

      Very cool. My own keymaps for key converters are all hugely long code-generated switch statements sourced from a DSL, but I definitely first reached for a C macro until I started to make mistakes (double-mapping keys, not mapping others, etc). A Rust macro feels like it could be the perfect in-betweener.
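      The double-mapped/unmapped mistakes are exactly what a data-driven generator can catch before any switch statement is emitted. A rough Python sketch of the idea (the DSL syntax, key names, and scancodes here are all made up):

```python
# Sketch: build a keymap from a tiny line-based DSL and validate it,
# catching the two mistakes hand-written macro tables invite:
# double-mapped keys and keys that were never mapped at all.
# (The DSL syntax, key names, and scancodes are hypothetical.)

DSL = """
A -> 0x04
B -> 0x05
C -> 0x06
"""

REQUIRED_KEYS = {"A", "B", "C"}  # hypothetical full key set to cover

def build_keymap(dsl: str) -> dict:
    keymap = {}
    for line in dsl.strip().splitlines():
        src, _, dst = (part.strip() for part in line.partition("->"))
        if src in keymap:
            raise ValueError(f"key {src!r} mapped twice")
        keymap[src] = int(dst, 16)
    missing = REQUIRED_KEYS - keymap.keys()
    if missing:
        raise ValueError(f"unmapped keys: {sorted(missing)}")
    return keymap

KEYMAP = build_keymap(DSL)  # from here you could emit a switch/match
```

      A Rust proc-macro could do the same checks at compile time, which is what makes it a nice in-betweener.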

      1. 10

        The second FPGA (reportedly a Cyclone 10) is pretty exciting to me. It could be a fun entry point into learning about FPGA development since you already have all these peripherals attached to it. Hopefully someone starts porting some of the MISTer cores to add more systems.

        1. 3

          I’ve used Elm professionally for about a year and a half, maybe longer, and we’ve had more or less the exact same experience. I’ve also recently lived in Norway, and have used Vy (before its name change).

          It has made me more curious about more ‘advanced’ functional languages like PureScript, and I wish there was a good comparison of Elm to it (and also other languages such as ReasonML).

          1. 3

            I don’t use Elm very much but I have used a good amount of Purescript (and Typescript), and having simple JS interop is such a game changer. Really wish that it could stick around. Elm works well for a lot of UI stuff but it’s just annoying to have to “do stuff” when I have some existing JS.

            Though kinda ironically I think Purescript is a really good backend language. Effects systems are super valuable on server-side code but don’t actually tend to be that helpful in frontend code (beyond how they’re used for dictionaries).

            1. 2

              Are effect systems not useful for error management in frontend code?

              1. 1

                Effects systems are super valuable on server-side code but don’t actually tend to be that helpful in frontend code (beyond how they’re used for dictionaries).

                Mind elaborating on this?

                1. 1

                  I wrote a thing about this a couple of years ago: basically, the granular effects system of PureScript lets you track DB reading and DB writing separately, to let you establish stronger guarantees about what kind of IO is happening in a function.

                  http://rtpg.co/2016/07/20/supercharged-types.html

                  Some other examples over the years that I would find useful:

                  • An effect like “DB access of unknown result size”. For example, a raw SQL query on a table without any sort of LIMIT could potentially return a lot of data at once, whereas in web requests you want consistent, fast replies (so you should opt for pagination instead)

                  • An effect like “accesses multi-tenant data”. This lets you determine which parts of your system are scoped down to a single tenant and which parts are scoped to multi-tenant data

                  • An effect like “makes IO requests to an external service”. You could use this to qualify your SLAs in certain parts of your system (your own system should probably be built for higher expected uptime than some random third party)

                  • An effect like “locks this resource”. You can use this to make sure you unlock the resource at a later date. Normally this is accomplished through a simple bracket pattern, but with effects systems you can opt for different styles.

                  Because the row polymorphism doesn’t force you to stack monads you avoid the mess of writing up a bunch of transformers and unwrapping/rewrapping items to get at the innards.
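                  PureScript's row-typed effects don't translate directly to most mainstream languages, but the bookkeeping can be sketched dynamically. Here's a hypothetical Python version in which each function declares its effects and a caller must declare a superset of what its callees use (effect names are illustrative, and this is a runtime check, not PureScript's static guarantee):

```python
# Sketch: declare per-function effects and check, at call time, that a
# caller's declared effects cover everything its callees declare.
# This is runtime bookkeeping only; PureScript's row polymorphism does
# the same accounting statically. Effect names are illustrative.
import functools

_allowed = [frozenset({"DbRead", "DbWrite", "ExternalIO"})]  # top level: anything goes

def effects(*names):
    declared = frozenset(names)
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if not declared <= _allowed[-1]:
                raise RuntimeError(
                    f"{fn.__name__} needs {sorted(declared - _allowed[-1])} "
                    f"but its caller only declared {sorted(_allowed[-1])}")
            _allowed.append(declared)  # callees may use only what we declared
            try:
                return fn(*args, **kwargs)
            finally:
                _allowed.pop()
        return inner
    return wrap

@effects("DbRead")
def load_user(user_id):
    return {"id": user_id}          # stand-in for a DB read

@effects("DbRead", "DbWrite")
def rename_user(user_id, name):
    user = load_user(user_id)       # OK: DbRead is declared here too
    user["name"] = name
    return user                     # stand-in for a DB write

@effects("DbRead")
def read_only_report(user_id):
    return rename_user(user_id, "x")  # raises: DbWrite was not declared
```

                  Calling read_only_report blows up immediately instead of silently doing a write from a function that claims to be read-only, which is the kind of guarantee the post above is describing.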

                2. 1

                  Since we’re a startup we had effectively 0 existing code to worry about, which I guess makes it easier in that regard for us.

                  Do you use the more ‘complex’ functional concepts like type classes?

                  I wonder if anyone would pick Purescript over Haskell for the backend.

                3. 2

                  For us, at least, the relative simplicity of Elm was a big part of being able to move existing developers over to it.

                  There were definitely times I missed metaprogramming or other fancy features, but I think a lot of hidden ‘magic’ is intimidating (even to people coming from a JavaScript background, where cheaty magic is just how things get done).

                  Our experience also aligned with this post. We didn’t yet fall into the trap of trying to use elm-git-install for sub-dependencies (maintaining first-level dependencies is time-consuming enough) but it’ll probably happen sooner or later.

                  1. 2

                    You’re right about the simplicity making it easy for new developers to get started and feel confident with the language.

                    I personally feel like I’m now wanting more complex concepts (metaprogramming, type classes, ..).

                    We too haven’t reached that point, but I could see that in a year.

                    1. 2

                      I really did miss type classes. Partially specified types are an ‘ok’ workaround in some cases, but it still felt incomplete. Especially not being able to extend Num.

                      1. 1

                        What do you think about nikita-volkov/typeclasses? Useful or not really?

                        1. 2

                          Wasn’t aware of that package at the time (loved the book, BTW) - but it looks like it might clean up some of the ugliness we had around non-primitive comparable. I’ll have to see if that works out when I’m on that project next. Thanks!

                1. 1

                  Cute. I seem to think I have some code for doing Toolbox alerts and menus in a Git repository somewhere, but I’m thinking the author probably moved on in the past two years.

                  1. 2

                    You should write a blog entry of your own, so your knowledge can be preserved. I think most of this kind of thing is still trapped in period books.

                  1. 8

                    archive.org has a nice setup where you can upload your old HyperCard stacks and it will wrap them in an emulator (Mini vMac) prepared to run the stack in your web browser. I will have to go over my old floppies to see which of my old stacks is the least personally embarrassing and preserve them for history!

                    https://archive.org/details/hypercardstacks

                    Here’s a stack by Cyan which drops cool tips & tricks I wish I had known about back in the 90s: https://archive.org/details/hypercard_beyondhc

                    1. 2

                      I’m looking forward to trying the PC-8801 port of this. It’s awesome that there is such a dedicated community on this project.

                      1. 2

                        That’s all true, and a little annoying, but the annoyances of shipping your own small-time web software aren’t that much less. You still need all of the server and SSL stuff. Fixing bugs will be much easier if you set up transport of your logs to some service instead of just logging to the drive. Ditto if you set up a reasonably automated deployment system, so you can update to address, say, security issues without following a 3-page error-prone checklist. You’ll probably need to write a couple of semi-custom systemd units or something too. And secure automated database backups. Some level of monitoring and auto-alerting. Keeping an eye out for security alerts in any of the software you use. Periodically checking for package updates.

                        And that’s all for a single server. Things get trickier yet if you need more than one.

                        1. 2

                          I think the real comparison should be with shipping desktop software on an operating system that uses a package manager, like most Linux distributions, or even MacOS for the most part (Homebrew, MacPorts, or the App Store). You write the program and perhaps provide packages, or OS distributions package it themselves. Maybe you write some metadata for the “store” page like in elementaryOS, but that’s all.

                          1. 2

                            Maybe it depends on experience and personal preference, but I’d say the “taxes” on web development are much worse than on desktop development. There’s server configuration, hosting, database configuration, a million flavor-of-the-week JavaScript frameworks to choose between, the server-side language/runtime, etc., and all of it has to be babysat to keep up to date with security patches and bug fixes. And the entire setup has to be duplicated in a testing environment and on each dev machine.

                            And desktop development isn’t even as bad as the article makes it out to be. Like why write your own installer and logging library? Use an existing library unless you have a good reason. And auto-update is a Windows problem - Linux package managers and Apple’s App Store mostly solve that.

                            That does leave the product website, but that’s kind of open ended and depends on the situation. It could be 15 minutes to write some HTML and take a few screenshots, or in a big organization it could be a months long project designing a fancy web portal and integrating it into an existing site.

                            1. 5

                              Writing a desktop app for a single OS / arch is dead easy even for a web guy like me. VB6 came out in 1998 and is still a better development environment than exists for the web today.

                              The ‘tax’ on desktop development is getting people to use the damn thing (see Kalzumeus for stats on that), and supporting multiple platforms, and syncing user data across their multiple devices.

                              1. 2

                                VB6 came out in 1998 and is still a better development environment than exists for the web today.

                                All while programmers condemn RAD tools, and glorify doing the shit work by hand. It’s embarrassing that you need to get a real programmer for simple applications (CRUD, formulae, API callers, whatever).

                                1. 1

                                  It’s a shame that Yahoo Pipes didn’t survive. I saw a lot of really interesting projects that were effectively huge chunks of APIs glued together with it, and it feels like it was an idea which was ahead of its time.

                                2. 1

                                  I don’t know how much stock to put in the Bingo Card Creator example. The UI works well as a website, and though I don’t know his market, I feel like generating bingo cards might be a one-off thing for many people - I can understand them being hesitant to install and buy a desktop app for something they might not do again. And his app used Java, which meant downloading the JRE from Oracle (or Sun back then), and that obviously complicates things a lot.

                                  Syncing user data should be the user’s responsibility, IMO. There are plenty of options (Dropbox, thumb drives, network drives) for them to do it themselves if they want to. The app should make it easy, but shouldn’t be the one doing it.

                                  Supporting multiple platforms isn’t terrible, and it’s not unique to desktop apps (i.e. Chrome vs Firefox vs Safari vs mobile browser, etc.). Qt, FLTK, and wxWidgets work great across platforms, have bindings to a bunch of languages, and can be bundled with the app in most cases.

                                  1. 2

                                    That’s a perfectly reasonable approach to sync for some apps; for others it’s entirely wrong.

                                    For instance, I have no use for a todo list or note-taking app that doesn’t sync automatically across devices.

                            1. 7

                              This is likely the last CPU Architecture to be added to mainline Linux.

                              Why?

                              1. 8

                                It’s a statement made by Linus Torvalds referring to how, since RISC-V is very open, it’s likely everyone will prefer using it instead of making their own thing.

                                  1. 1

                                    A world in which we can’t keep reinventing the wheel would be a sad one indeed.

                                    1. 2

                                      Nothing is stopping you from reinventing any wheels here.

                                      This is just a statement about the economics of designing and taping out your own new CPU architecture, producing documentation, compilers, and all OS and tooling support… does doing this help you widen your moat?

                                      Seems the current mix of ARM/MIPS/RISC-V quality and licensing options makes doing all the above a needless resource sink and distraction for companies of all shapes and sizes.

                                      For those hobbyists still wanting to dabble in this, fortunately FPGA kits have in recent years come down to a price point that is within reach of most of our wallets.

                                  2. 3

                                      You got this dead on! There are several reasons, but the main one is that it’s just not worth creating a new ISA for every application; ARM and RISC-V can cover the majority of applications.

                                      C-SKY was partially created because the Chinese wanted their own architecture.

                                1. 8

                                  It’s really satisfying to fix up old, broken code and get it running again, especially when the results are as visible as a game.

                                  1. 1

                                    Totally! A while back I ported BSD rain to Linux (original source is here). I was surprised my distro didn’t have it. While it wasn’t broken (it obviously compiled on NetBSD), it was nice to have an old friend back.

                                  1. 3

                                    This makes me think of an alternate reality where Japan has a significant chip designer like Intel or AMD. Does anyone know why Japan didn’t end up with a company like this? I do know Sony developed the PS3 chip, but it was in partnership with IBM.

                                    1. 4

                                      The various Renesas chipsets (SH4, etc) are in tons of embedded systems. You probably have a couple in your car.

                                      1. 3

                                        They also powered all later Sega consoles (specifically, the 32X, Dreamcast, and Saturn). They’re really nice chips, and also now have open-source clones that I’ve heard positive things about.

                                      2. 3

                                        There actually were several, but they focused more on the microcontroller and embedded markets and not the high-end.

                                        NEC was making 8088 clones, which they followed with a 32-bit architecture. They even launched the PC Engine to compete with Nintendo (successfully in Japan, though it flopped when brought to the US as the TurboGrafx-16).

                                        Hitachi was a second-source manufacturer of the 68000 and others for a long time. They had the H8 family, and as barbeque mentioned, the SuperH family.

                                        It should be noted that Renesas owns most of this now.

                                        1. 1

                                          does Softbank acquiring ARM count?

                                        1. 20

                                          I implemented a system that used MongoDB in 2011. The author of this piece didn’t hit on the main reason we used Mongo: automatic failover.

                                          I didn’t need high write performance or big horizontal scalability. Easy schema modifications turned out to be more pain than they were worth, but that wasn’t the main reason.

                                          The main reason was that I was running ops by myself at our startup. My real job was backend development. At the time, MongoDB was the only (“free”) system that we evaluated that would automatically fail over while I was sleeping, or busy working on the actual product. Elections were great (provided the quorum was set up properly). It was like RAID for databases. RDS wasn’t ready for us yet (and especially: nobody was offering managed Postgres with failover at a reasonable price back then). We didn’t strictly need high availability, but it sure was nice to nearly never have to step away from Real Work™ to tend to a broken-on-the-ops-side DB (and I spent that time fixing yellow Elasticsearch clusters).

                                          I think it was the right choice at the time, yes. But times have changed. New [small] stuff goes into managed RDS, DynamoDB, or gets set up so a DBA team can take it over (usually after migrating out of the RDS we used for development, if that’s what they want).
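                                          The “provided the quorum was set up properly” caveat above is just strict-majority arithmetic. A tiny sketch (not MongoDB’s actual election code, just the math behind it):

```python
# Sketch: a replica-set member can win an election only if a strict
# majority of voting members is reachable.
def has_quorum(voting_members: int, reachable: int) -> bool:
    return reachable >= voting_members // 2 + 1

# Three voting members tolerate one loss; four members *still* only
# tolerate one, which is why even-sized sets (without an arbiter)
# buy you nothing extra.
assert has_quorum(3, 2)
assert not has_quorum(4, 2)
```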

                                          1. 8

                                            Could you please go into a little more detail about why you had such frequent database failures? Always curious to hear war stories like this so I can maybe identify a failure in the future and look super smart :)

                                            1. 6

                                              We didn’t have frequent MongoDB database failures. I suppose it’s a little like that old Head & Shoulders commercial… “Why are you using H&S? You don’t have dandruff!” (Causality is hard (-: ).

                                              I have a general policy of building everything to be HA, even when it’s not strictly needed by the business side, as long as they’re on board. When I shifted trust to the cloud (AWS in this case), instead of building out redundant pipes to our own DCs, this is cheap insurance once it’s set up.

                                              The frequent Elasticsearch failures were due to lots of things back then. ES has matured, but I still pay Amazon to run it when I can.

                                          1. 1

                                            “With” is a great addition. I’m going to use that a lot.

                                            1. 2

                                              The Roslyn compiler has .With methods like this, iirc. The generated method looks quite clean and idiomatic with those default values set to the fields.
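                                              For comparison outside C#, Python’s dataclasses give the same copy-with-changes shape via dataclasses.replace (the Point type here is just an example):

```python
# "With"-style non-destructive update in Python: replace() builds a
# new frozen instance with only the named fields changed.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: int = 0
    y: int = 0

p = Point(x=1, y=2)
q = replace(p, y=5)   # analogous to a generated p.With(y: 5); p is untouched
```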

                                            1. 3

                                              Makes me wish I was still working in C#. This compiler flag is definitely getting turned on with any C# 8 project I start from now on.

                                              1. 7

                                                Good on the author for explaining the terms early on and sticking to them. I wonder what has happened to all those lighting-control devices now that the firm has gone under?

                                                1. 3

                                                  Mountains of e-waste.

                                                  1. 1

                                                    Which the market prefers for continuing revenue. The best way to reduce waste is to keep recycling older electronics, since they (a) just require buying/shipping rather than manufacture and (b) last longer.

                                                    Maybe also throw in dedicated, cheap devices that act as link encryptors or something, so nothing untrusted can reach the insecure devices. Just little connectors between those devices and the Internet which do a VPN or something. That’s a long-running practice in high-security with stuff like secure PCI cards for networking and Type 1 link encryptors for Ethernet (or VPN’s for IP).

                                                    For example, on the recycling side, my “new” ThinkPad T420 that @vermaden recommended is flying compared to my last PC. Loving that Core i7. Got it cheaper than some netbooks, too. The only things bothering me are Fn on the left instead of Control, and the Page Back/Forward keys right above the arrow keys. Thank goodness for browsers saving the state of the current page. I think I can just start using the right Control to work around the Fn vs Ctrl issue. Idk about the other one yet.

                                                    1. 2

                                                      I’m not sure about the T420, but in my T460s and T470p you can swap the roles of Control and Fn in the BIOS.

                                                      1. 1

                                                        Yeah it is there. Thanks for the tip! I’m not sure if it will work due to keyboard design. I’m definitely trying it, though. :)

                                                1. 5

                                                  Spending it walking around Tokyo, and avoiding Akihabara after getting stuck for 7 hours wandering around all the stores :)

                                                  1. 3

                                                    Akihabara is a place high up on my bucket list of travel destinations. I hardly play video games anymore, but it sounded like such a magical place reading about it as a kid in the Midwest. I hope to see it one day, just to take it in.

                                                    1. 2

                                                      It is not quite as techie as it used to be (the maid cafe business is just too profitable), but it is a great experience and still has a lot of that magic thanks to hardcore shops like kadenken.

                                                      1. 2

                                                        I’m just now reading about maid cafes and, yeah, that does sound really creepy (and sadly profitable). I’m glad to hear there are still tech / gaming shops there, though.

                                                      2. 2

                                                        I hope you get to see it in the near future, it is incredible and hard to escape once you enter.

                                                    1. 8

                                                      One must also make sure that the feature flags are ephemeral. We have 310 different configuration options in our application at the time of writing. Many of them are flags to enable or disable certain features. This makes sense because not every customer wants the same things.

                                                      I would have used a more temporal system for flags if we were to implement them during A/B testing or similar. Every configuration option that has ever existed must be kept around for legacy reasons.

                                                      1. 10

                                                        This was a hard-learned lesson for me. I have “temporary” feature flags that have been in production for nearly a decade now. Any feature flag system I’d be integrating today needs some kind of expiry date and notification process.
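                                                        One lightweight way to get that expiry-and-notification behavior is to make every flag registration carry a sunset date and have CI fail (or page someone) once it passes. A Python sketch (flag names and dates are made up):

```python
# Sketch: feature flags that must declare an expiry date. A CI job
# calling check_expired() can fail the build (or notify someone) once
# a "temporary" flag overstays its welcome.
from datetime import date

FLAGS = {}

def register_flag(name: str, enabled: bool, expires: date):
    FLAGS[name] = {"enabled": enabled, "expires": expires}

def is_enabled(name: str) -> bool:
    return FLAGS[name]["enabled"]

def check_expired(today: date) -> list:
    return [name for name, f in FLAGS.items() if f["expires"] < today]

register_flag("new-checkout", True, expires=date(2020, 1, 1))
register_flag("dark-mode", False, expires=date(2030, 1, 1))

stale = check_expired(date(2021, 6, 1))   # the "new-checkout" flag is overdue
```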

                                                        1. 2

                                                          Yeah, I think you have to have a process in place to integrate feature-flagged stuff into your product after a while so you don’t have to deal with them a decade later. That, of course, can only be done if you have a SaaS or single-install-source solution. If you have a situation like Enpo above, with different, customized installations for each client, you are pretty much toast.

                                                        2. 2

                                                          That is… mind-boggling. How on earth do you even attempt to test any amount of the 2^310 possible setup combinations?

                                                          How many of those flags are single-deploy / single-user ones, written sort of on-demand for a certain client and hence only used by one deploy? Was doing it as a fork / patch / (other way using version control) ever considered? How is it to work with day to day?

                                                          Sorry, I have so many questions – it is just such an extreme case that I am so curious how it actually works day to day – is it a pain most days or just something you don’t think about?

                                                        1. 3

                                                          This was very cool when you demoed it before in a previous thread, glad to see it has come along even more since then!

                                                          1. 4

                                                            One thing I changed in my work PRs is explaining in detail why the PR exists, what the big changes are, what can break, and what to pay attention to… instead of just referencing a ticket. I think it makes it easier to review.

                                                            1. 7

                                                              Having worked on a code base that is on its 3rd version control system and 4th ticketing system, I’ll tell you - put the comment in the commit message! It’s more likely that the info will survive there than migrated correctly in the ticketing system. This can make for some duplication, but isn’t too bad.

                                                              The other point I usually make in favor of commit messages is that unlike other code comments, they reflect an exact point in time and

                                                              1. 1

                                                                they reflect an exact point in time and

                                                                stroke?

                                                                1. 3

                                                                  oops, thanks for catching that.

                                                                  an exact point in time and

                                                                  code changes afterwards might make them (obviously) out of sync with the comment. Regular comments can linger past the code they are commenting, but a commit message is tied to the code at that time.

                                                              2. 4

                                                                I have found that often my “rationale” sections on PRs are the longest-lasting written description of the business reasons behind changes. JIRA is not a reliable source of human information so much as it is a glorified to-do list on most projects I’ve been on.

                                                              1. 11

                                                                I never knew that the “break” in breakpoints referred to a physical wire break until now. Have to assume that I’d spend more time reading my code, and debugging much less, if I had to get off my butt and yank wires out of the computer while it was running.

                                                                1. 5

                                                                  There’s a bit of folk etymology that claims Grace Hopper was the first to use the term ‘bug’ to mean a defect in a computer (and by extension, a program.) Alas, it’s a little too neat to be historically accurate.

                                                                  1. 3