1. 2

    Programmer time is more expensive than CPU cycles. Whining about it isn’t going to change anything, and spending more of the expensive thing to buy the cheap thing is silly.

    1. 15

      The article makes a good counterpoint:

People migrate to faster programs because faster programs allow users to do more. Look at examples from the past: the original Python-based BitTorrent client was quickly overtaken by the much faster uTorrent; Subversion lost its status as the premier VCS to Git in large part because every operation was so much faster in Git; the improved grep utility, ack, is written in Perl and losing ground to the faster Silver Searcher and ripgrep; the Electron-based editor Atom has been all but replaced by VSCode, also Electron-based, but faster; Chrome became the king of browsers largely because it was much faster than Firefox and Internet Explorer. The fastest option eventually wins. Would your project survive if a competitor came along and was ten times faster?

      1. 7

That fragment is not great, in my opinion. The svn-to-git change was about the whole architecture, not implementation speed; a lot of the speedup in that case comes from not going to the server for information. Early git was mainly shell and Perl too, so it doesn’t quite mesh with the Python example before it. Calling out Python for BitTorrent is not a great example either: it’s an IO-heavy app rather than a processing-heavy one.

VSCode has way more improvements over Atom, and more available man-hours. If it were about performance, Sublime or some other graphical editor would have taken over from both of them.

        I get the idea and I see what the author is aiming for, but those examples don’t support the post.

        1. 3

I was an enthusiastic user of BitTorrent when it was released. uTorrent was absolutely snappier and lighter than other clients, specifically the official Python GUI. It blew the competition out of the water because it was superior in practical terms. Perhaps Python vs C is an oversimplification; the point would still hold even with two programs written in the same language.

The same applies to git. It feels snappy and reliable. Subversion and CVS, besides being slow and clunky, would gift you a corrupted repo every other Friday afternoon. Git pulverised that nonsense brutally fast.

The point is about higher-quality software built with better focus, making reasonable use of resources and resulting in a superior experience for the user. It is not so much about one language being better than others.

          1. 2

BitTorrent might seem IO-heavy these days, but ironically that is because it has been optimised to death; you are revising history if you think it is not CPU/memory intensive, and doing it in Python would be crushingly slow.

            The point at the end is a good one though, you must agree:

            Would your project survive if a competitor came along and was ten times faster?

            1. 1

I was talking about the actual process, not the specific implementation. You can make BitTorrent CPU-bound in any language with an inefficient implementation. But the problem itself is IO-bound, so any runtime should be able to get there (modulo the runtime overhead).

          2. 2

This paragraph popped out at me as historically biased and lacking citations or evidence. With a bit more context, the examples ring hollow:

            • The fastest torrent clients are built on libtorrent (the one powering rtorrent), but rtorrent is not a very common tool
            • Fossil is faster than git
            • grep itself is more popular than any of its newer competitors; it’s the only one shipped as a standard utility
• Atom? VSCode? vim and emacs are still quite popular! Moreover, the neovim fork is not more popular than classic vim, despite its speed improvements
• There was a period when WebKit was fastest, and browsers like uzbl were faster than either Chrome or Firefox at rendering, but they never got popular

            I understand the author’s feelings, but they failed to substantiate their argument at this spot.

            1. 2

This is true, but most programming is done for other employees, either at your company or at another if you’re in commercial business software. Those employees can’t shop around or (in most cases) switch, and your application only needs to be significantly better than whatever they’re doing now, in the eyes of the person writing the cheques.

              I don’t like it, but I can’t see it changing much until all our tools and processes get shaken up.

            2. 11

But we shouldn’t ignore the users’ time. If the web app they use all day long takes 2-3 seconds to load every page, that piles up quickly.

              1. 7

While this is obviously a nuanced issue, personally I think this is the key insight in any of it, and the whole “optimise for developer happiness/productivity, RAM is cheap, buy more RAM” line totally ignores it, to say nothing of the “rockstar developer” spiel. Serving users’ purposes is what software is for. A very large number of developers lose track of this because of an understandable focus on their own frustrations. Tools that make them more productive are obviously valuable, and they also mean developers have a less shitty time, which is meaningful and valuable too. But building a development ideology around that doesn’t make any of this go away. It just makes software worse for users.

                1. 7

                  Occasionally I ask end-users in stores, doctor’s offices, etc what they think of the software they’re using, and 99% of the time they say “it’s too slow and crashes too much.”

                  1. 2

                    Yes, and they’re right to do so. But spending more programming time using our current toolset is unlikely to change that, as the pressures that selected for features and delivery time over artefact quality haven’t gone anywhere. We need to fix our tools.

                  2. 5

                    In an early draft, I cut out a paragraph about what I am starting to call “trickle-down devenomics”; this idea that if we optimize for the developers, users will have better software. Just like trickle-down economics, it’s just snake oil.

                    1. 1

Alternatively, you could make it not political.

                      Developers use tools and see beauty differently from normal people. Musicians see music differently, architects see buildings differently, and interior designers see rooms differently. That’s OK, but it means you need software people to talk to non-software people to figure out what they actually need.

                2. 3

Removed, because I forgot to reload and in the meantime multiple others had already made the same argument I did.

                  1. 3

                    I don’t buy this argument. In some (many?) cases, sure. But once you’re operating at any reasonable scale you’re spending a lot of money on compute resources. At that stage even a modest performance increase can save a lot of money. But if you closed the door on those improvements at the beginning by not thinking about performance at all, then you’re kinda out of luck.

                    Not to mention the environmental cost of excessive computing resources.

                    It’s not fair to characterize the author as “whining about” performance issues. They made a reasonable and nuanced argument.

                    1. 3

                      Yes. This is true so long as you are the only option. Once there is a faster option, the faster option wins.

                      Why?

Not for victories in CPU time. The only thing more scarce and expensive than programmer time is… user time. Minimize user time, and you can pin CPU usage at 100% and nobody will care, until it causes user discomfort or loss of user time elsewhere.

                      Companies with slow intranets cause employees to become annoyed, and cause people to leave at some rate greater than zero.

A server costs a few thousand dollars on the high end. A smaller program costs a few tens of thousands to build, maintain, and operate. But over its life, that program can cost hundreds of thousands more in management, engineering, sales, marketing, HR, quality, training, and compliance salaries to use.
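A quick back-of-envelope shows the scale (every number here is an assumption; plug in your own):

```python
# Back-of-envelope: what a slow internal app costs in salaries.
# All of these numbers are assumptions; adjust to taste.
users = 100            # employees using the app
loads_per_day = 60     # page loads per user per day
delay_s = 2.5          # extra seconds lost per load
workdays = 250         # working days per year
rate = 50.0            # loaded hourly cost of an employee, $/h

hours_lost = users * loads_per_day * delay_s * workdays / 3600
print(f"{hours_lost:.0f} hours/year ≈ ${hours_lost * rate:,.0f}/year")
# -> 1042 hours/year ≈ $52,083/year
```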

                    1. 2

                      Finishing up the first half of Practical TLA+ and trying to work a couple of examples on my own, mostly. On the tech side. Lots of personal things keeping me busy, too.

                      1. 3

                        Disclaimer: It’s been around 10 years since I was involved in creating a social network.

If I remember correctly we had a few discussions, because social networks are kind of a poster example for graphs, but one point was sharding. There were known-to-work solutions if you were using relational DBs, but we didn’t know of any proven approach for the graph databases around at the time. Also, we weren’t a startup, so there was no “revolutionizing the world by inventing the best new graph DB”; a lot of it was trying not to spend the innovation budget on something like this. Boring is better, since we were supposed to hand the thing off to the company we were building it for, so operationally it should be easy to run. Of course “easy” is relative, but MySQL (or Postgres?) was a known quantity. Oh, and not to forget: in the end you can model the graph relations quite easily with an RDBMS, so why bother?
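For illustration, a toy version of that last point (made-up schema; the real one was more involved):

```python
# Minimal sketch of "graph relations in an RDBMS": an edge table plus a
# self-join gives you friends-of-friends. Schema and data are made up.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE follows (src INTEGER, dst INTEGER);
    INSERT INTO follows VALUES (1, 2), (2, 3), (1, 3), (3, 4);
""")
# Who does user 1 reach in exactly two hops?
rows = db.execute("""
    SELECT DISTINCT b.dst
    FROM follows a JOIN follows b ON a.dst = b.src
    WHERE a.src = 1
""").fetchall()
print(rows)  # [(3,), (4,)] (row order may vary)
```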

                        1. 5

Adding on to this: to the best of my knowledge there are still no great ways to partition graphs across nodes, and it’s been shown to be a hard problem (NP-hard or NP-complete, depending on some factors). Intuitively this should make sense: social graphs (for example) have very small diameters, so you’re likely to have a lot of edges crossing between different compute nodes.

That said, I don’t think distributing a graph DB is a big deal. You really don’t need it: (almost) any graph you would work with will fit in memory (it might take a big machine, but it’ll still fit!), and replication is an easier problem.
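To put rough, assumed numbers on “it will fit”:

```python
# Rough sizing, all assumptions: a graph with a billion edges stored as
# two 4-byte integer endpoints per edge, ignoring indexes and overhead.
edges = 1_000_000_000
bytes_per_edge = 8
print(f"{edges * bytes_per_edge / 2**30:.1f} GiB")  # -> 7.5 GiB
```

Even with generous per-edge overhead, that lands well within a single big machine’s RAM.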

                          Disclosure: I worked for TigerGraph. Not sure disclosure is even necessary—I left in 2015. But I have a financial interest in graph databases.

                          1. 3

                            social networks are kind of a poster example for graphs, but one point was sharding.

Yup. Even with a relational database, it’s hard. To exaggerate only slightly, this is one of the major reasons LiveJournal (the first real social network) failed. Performance was killing them, and they had to throttle growth via invite codes for a few years while they worked out how to do clustering with MySQL and cache as much as possible in RAM. This was c. 2001, before any of this was common, and some of the tools Brad Fitzpatrick invented, like memcached, are still in use today. The end result was they couldn’t grow fast enough, and they didn’t have the resources to evolve the UX or feature set. By the time Facebook caught on, they were doomed (sob!)

                            1. 1

It’s a bit unfair to blame everything on performance, though. I remember LiveJournal in its heyday, and there were a few reasons I never signed up: I hated the design; it didn’t look like a social network to me, but like a collection of blogs (without support for bringing your own domain); and 90% of the time I ended up on LJ by clicking random links, it was only fanfic. I never noticed anything slow, but I can’t tell you exactly whether that was more 2001 or more 2007.

                              1. 3

Yeah, as I said, the constant firefighting to keep the servers from overloading meant they couldn’t evolve the UI and feature set. Another big reason was that, after Six Apart (the Movable Type company) bought them, they made the fatal mistake of building an all-new service, which looked very nice but flopped, instead of improving LJ.

                                1. 1

                                  Social networks can exist and even thrive without frills: viz. Hacker News.

                                  1. 1

Sure, and maybe I misunderstood the point, but I wanted to talk about “social networks” as they’re commonly understood by the general populace, like Facebook, not just any community of people online. Might be narrow, might be wrong, but the frills were not the point; the point was that “era” of consolidation towards single closed mass networks rather than anything open or small.

                            1. 2

Man, how often I wished, when doing database query generation: “please $DB, just let me hand you queries in your own internal format, instead of making me write SQL”.

So I agree with the criticism of the author, but as mentioned at the end of the article… what to do with all the knowledge we have now?

It seems that many previous alternatives were not successful:

                              • ORM and “NoSQL” – junior developer ideas that turned out to be worse than using SQL
                              • GraphQL – lacks joins, so is hardly a credible replacement
                              • Other promising approaches seem to end up getting commercialized, sold and closed down.

                              So what can “we” do, to improve the state of the art?

                              In my opinion: demonstrating and specifying a practical, well-designed¹ language that various databases could implement on their own as an alternative to SQL.


                              ¹ Not going into that here.

                              1. 4

A Datalog variant. I had a lot of fun playing with differential-datalog, and there’s Logica, which compiles Datalog to SQL for a variety of SQL dialects.
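If you haven’t seen Datalog, the flavor is easy to show. Here’s the classic reachability program evaluated bottom-up to a fixpoint; this is just the semantics as a toy, nothing like what differential-datalog or Logica actually do internally:

```python
# The two-rule Datalog program:
#   reach(X, Y) :- edge(X, Y).
#   reach(X, Z) :- reach(X, Y), edge(Y, Z).
# evaluated naively bottom-up until no new facts appear.
edge = {(1, 2), (2, 3), (3, 4)}

reach = set(edge)
while True:
    new = {(x, z) for (x, y) in reach for (y2, z) in edge if y == y2} - reach
    if not new:
        break
    reach |= new

print(sorted(reach))  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```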

                                1. 3

                                  What do you mean by ORMs and NoSQL being “junior developer ideas”?

                                  1. 8

Relational data maps pretty well onto most business domains. NoSQL and ORMs throw out the baby with the bathwater, for different reasons (turfing the entire relational model, in NoSQL’s case; trying to force two different views of modelling the domain to kiss, in ORMs’). Anything that makes a join hard isn’t a good idea when an RDBMS is involved.

                                    I think what might be interesting is instead of contorting the RDBMS model to work with OO languages like ORMs do, do the reverse: a relational programming language. I don’t know what that could look like though.

                                    1. 4

Relational data maps pretty well onto most business domains. NoSQL and ORMs throw out the baby with the bathwater, for different reasons (turfing the entire relational model, in NoSQL’s case; trying to force two different views of modelling the domain to kiss, in ORMs’). Anything that makes a join hard isn’t a good idea when an RDBMS is involved.

Agreed with the conclusion, and I have nothing good to say about most NoSQL systems other than that rescuing companies from them is a lucrative career, but I think this criticism of ORMs is over-broad.

A good ORM will take the scut-work out of database queries in a clean way that is standardized across codebases, without at all getting in your way when you access deep database features, write whatever joins you want, etc. I’d hold up modern Rails ActiveRecord (without getting into the weeds of Arel) as a good ORM that automates the pointless work while staying out of your way when you want to do something more complicated.

A bad ORM will definitely try to “hide” the database from you in ways that make everything way too complicated the second you want to do something as simple as specifying a particular type of join. Django’s shockingly awful QuerySet ORM definitely falls into this camp, as I’ve recently had the misfortune of trying to make it do fairly simple things.

                                      1. 3

I’m very surprised to see ActiveRecord used as an example of something which stays out of your way. The amount of time I have spent fighting to get it to generate the SQL I wanted is why I never use it unless I’m being paid a lot to do so.

                                        1. 1

                                          Really? It’s extremely easy to drop to raw SQL, and to intermix that with generated statements – and I’ve done a lot of really custom heavy lifting with it over the years. Admittedly this may not be well documented and I may just be taking advantage of a lot of deep knowledge of the framework, here.

The contrast is pretty stark to me compared to something like Django, whose devs steadfastly refuse to let you specify joins, and which, while offering a raw SQL escape hatch, has a different intermediate result type for raw SQL queries (RawQuerySet vs QuerySet) with different methods. That means details of how you formed a query (raw vs the ORM API) leak into all consuming layers, and you can’t swap one for the other at the data layer without breaking everything upstream. (Hilariously, the accepted community “solution” to this seems to be to write your raw query, then wrap an ORM API call around it that generates a “select * from (raw query)”??)

ActiveRecord has none of these issues in my experience: joins can be manually specified, raw clauses inserted, and raw SQL is transparent and intermixable with ORM statements with no impedance mismatch. Even aggregation/deaggregation approaches like unions, unnest(), etc. that break the table-to-class and column-to-property assumptions can still be made to work cleanly. It’s really night and day.

                                    2. 6

Not the commenter you’re asking, but they’re both tools that reduce the initial amount of learning at the cost of abandoning features that make complexity and maintainability easier to handle.

                                      1. 5

I’m not sure that’s true, though. ORMs make a lot of domain logic easier to maintain; it’s not about reducing initial learning, it’s about shifting where you deal with complexity (is it complexity in your domain, or in scaling, or something else?). Similarly with NoSQL: it’s not a monolithic thing at all, and most NoSQL databases require comparable upfront learning (document DBs, graph DBs, etc. all take significant study to utilize well). Again, it’s a trade-off of what supports your use case well.

                                        I’m just not sure what the GP meant by “junior developer ideas” (it feels disparaging of these, and those who use them, but I won’t jump to conclusions). They also are by no stretch “worse than using SQL”. They are sometimes worse and sometimes better. Tradeoffs.

                                        1. 2

I agree with you on the tradeoffs. I’m not sure I agree on the domain logic point. In my experience ORMs make things easier until they don’t, in part because you’ve baked your database schema into your code. Sometimes directly generating queries allows the schema to change without the program needing to change its data model immediately.

                                  1. 1

                                    So, where does that take us? Well, we want to do engineering to solve problems. I think that means, practically speaking, we need to focus on the specification and verification steps

                                    Respectfully, I disagree. After thrashing around in this area for many years, I’m convinced that the code doesn’t matter, although coding is the thing most of us love doing.

                                    Tests are the things that are most important, whether implied or explicit. Most tests, of course, are sub rosa; they seem too trivial for anybody to ever write down. (Until they fail, of course)

What we need to work on are truly modular and composable tests, something that scales out. Formal methods should revolve around those. I understand that this can be construed as saying the same thing, but there are several subtle differences between the two concepts.

                                    1. 2

                                      I think, at least directionally, we agree: the focus is on showing that your code does what you want it to do, and the code itself doesn’t really matter.

                                    1. 3

                                      Where I found myself disagreeing was with the initial premise … “The job of a software engineer is not to produce code, but to solve problems”.

While that is the job for some, a large proportion have the job of modeling a system in code, and the two are different. A problem that culminates in an algorithm is certainly a candidate for a specification that can be formalized. Is that the case when the problem domain under consideration is a model of a system, real or imagined?

                                      That new innovative but complex international payroll system that needs to be built, faces an entirely different set of issues. How are functional requirements and domain constraints determined and modeled in code? How are non-functional requirements determined and met within project/product and organizational constraints?

Perhaps at question is the simplistic viewpoint that all software development is just the transformation of data. I disagree with that contention. While technically accurate, it’s similar to saying that the materials used to construct furniture are just atoms.

How to model complex systems in code in any formal and predictable way remains out of reach. Is that why calls for applying the terms “formal methods” and “engineering” to software development seem inapplicable to so many?

                                      1. 4

                                        Even cutting-edge domain problems still use a lot of basic infrastructural code to run. Does the international payroll system use a rules engine? Is it running batch jobs as background tasks, or triggering anything reactively? How are you distinguishing employee benefits from benefit classes? All of those are places where formal methods help.

                                        More fundamentally, it’s possible for stated requirements to not cover a situation, or give you contradictory results on how to handle a weird edge case. Shouldn’t it be possible to find those out before you’ve written all the code?

                                        1. 2

Sure, there are parts where formal methods might be applied. But I contend they simply are not “the” answer to what is fundamentally a different problem. As is all too often the case in our profession, I see the original author as generalizing all software development towards a particular “silver bullet”.

                                          1. 2

                                            Yeah, I definitely don’t think formal methods are “the” answer. I think there are more people who could use them that don’t than people who don’t need them but think they do, but it’s ludicrous to collapse all of software engineering down to a specific technique.

                                            1. 2

                                              I didn’t mean to (and I don’t think I did) generalize that all development has to lean toward formal methods, nor is it a silver bullet. But I think software engineering as a field needs it, and there are a lot of bits of critical code that need it, and as a trend I think we’re pushing toward making it easier to use and using it in more places.

                                              1. 2

                                                I do agree. Formal methods can certainly be of benefit.

                                                My suggestion is to better define where, for what types of software and under what circumstances.

                                                That is what would really benefit the developer community in my opinion.

                                            2. 1

As an aside and by way of apology: I was updating my answer as you replied. It’s my weird (?) way of writing something and then changing it for the first few minutes after first submitting it. HN handles this well by giving you 10 minutes in which to change a reply before it is published. Not sure if Lobsters does the same.

                                          1. 7

Have you looked at Ada/SPARK? Since SPARK is an Ada subset and both can exist in the same project, you write SPARK where you need verification and Ada where you don’t need it, or don’t need it yet. There are even proven libraries available in Alire, the Ada/SPARK package manager.

                                            1. 1

                                              It’s on my list of things to look into! TLA+ is earlier on my list but Ada and SPARK are near the top, too.

                                            1. 2

                                              Mostly hanging out with family, but I’m also planning on writing at least one blog post (checked that one off the list), hacking at my side project, and maaaaaybe starting to learn TLA+.

                                              1. 6

I wonder if this model could be turned on its head to score each region of code by its expected bugginess.

                                                “danger (or congrats): no one in the history of time has ever written anything like this before”

                                                1. 1

                                                  Although, I suppose the output might be less than useful: “I have a vague feeling that this might be wrong but I can’t explain why”.

                                                  1. 6

                                                    That could be incredibly useful as a code review tool! Kind of gives you a heatmap of which spots to focus most attention on as a code reviewer. I want it yesterday.
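Even a toy version conveys the idea: score each line by how surprising a simple model of “normal code” finds it. Here a character trigram model stands in for a real learned model, and the corpus is a placeholder:

```python
# Toy "bugginess heatmap": flag lines whose characters a trigram model
# finds surprising. A real tool would use a proper model over real code.
import math
from collections import Counter

corpus = "for i in range(10):\n    total += i\nprint(total)\n" * 50

trigrams = Counter(corpus[i:i+3] for i in range(len(corpus) - 2))
bigrams = Counter(corpus[i:i+2] for i in range(len(corpus) - 1))

def surprisal(line):
    """Mean bits of surprise per character, given the two previous characters."""
    bits = []
    for i in range(2, len(line)):
        tri, bi = line[i-2:i+1], line[i-2:i]
        p = (trigrams[tri] + 1) / (bigrams[bi] + 256)  # add-one smoothing
        bits.append(-math.log2(p))
    return sum(bits) / len(bits) if bits else 0.0

for line in ["for i in range(10):", "total =+ i  # oops"]:
    print(f"{surprisal(line):6.2f}  {line}")  # the typo line scores higher
```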

                                                    1. 1

Hm; OTOH, if a bug is common enough to have a major presence in the input corpus, I can see how it could result in a false-positive “green” mark for a faulty fragment of code… super interesting questions, for sure :) Maybe it should only be used for “red” coloring, with the rest left as “unrated”.

                                                1. 4

I am surprised that the author’s bio doesn’t mention that she’s a cofounder of KittyCAD, which seems to be super early and is focused on addressing these problems. At the very least it means she has a financial stake in this area, and it seems reasonable to disclose.

                                                  1. 2

                                                    Huh, is that a spin-off from Oxide’s experience building server cases/racks/etc? That would be like, business level yak shaving :)

                                                  1. 11

This raises an interesting point, and one that I think browsers could address. Much as they carefully craft how information is displayed to help people recognize genuine/secure sites, one can imagine a browser feature where, if the link text contains a URL and that URL doesn’t match the href, a warning is displayed.
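A rough sketch of the check (hostname comparison only; a real implementation would need to handle redirects, typos, and subtler mismatches):

```python
# If a link's visible text looks like a URL, warn when its host doesn't
# match the href's host. Deliberately crude; heuristics are placeholders.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.href, self.text = None, ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href, self.text = dict(attrs).get("href"), ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href:
            shown = self.text.strip()
            if shown.startswith(("http://", "https://")):
                if urlparse(shown).hostname != urlparse(self.href).hostname:
                    print(f"WARNING: text says {shown!r} but href is {self.href!r}")
            self.href = None

LinkAuditor().feed('<a href="https://evil.example/x">https://yourbank.example</a>')
```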

                                                    1. 3

                                                      It’s honestly somewhat shocking that with the amount of thought that goes into other browser security features, this one was overlooked. This also feels particularly dangerous in HTML email.

                                                      1. 1

I like the idea. I’m somewhat concerned about false positives from URLs that don’t match but redirect, or that differ only by typos, so the warning would have to take that into account and shouldn’t be too scary. Alternatively you’d need to perform a request to detect redirects, or implement a heuristic, etc., but all of that is prone to mistakes.

This is likely more fun on mobile, which doesn’t have mouseover (do people still check that?).

                                                        1. 1

                                                          Facebook’s tracking links would break. Probably a good thing, but I’m not sure everyone will agree.

                                                          1. 1

It could be presented in a way that just makes that more obvious and gives the user the choice to follow the displayed link or the href. Then users could choose whether to be tracked. I’m definitely with you that it’s a good thing and that not everyone will think so; clearly the people who make trackers think they’re okay, at least.

                                                            1. 1

                                                              Same with the links in google search results, twitter (t.co), slack, …

                                                          1. 2

This week I’m wrapping up some stuff from last week that bled into the weekend…

Aside from that, I’m considering two different job offers (still not the exact types of jobs I want, but a pay bump); one is contract-to-hire and one isn’t. Both seem interesting, although the work is pretty much the same as my current job.

August will mark 4 years of being fully remote for me, so I’m also considering moving. This brings up a bunch of things I’m trying to figure out; Denver seems pretty nice. I haven’t had a separate bedroom/office setup before, and a 2-bedroom seems almost necessary for that.

                                                            1. 4

Having a separate office is a total game-changer for remote work and makes the higher cost of a 2-bedroom worth it, in my opinion. I’ve been working remote for a similar time (5 years), have had a separate office for most of it, and it helps with separation. An added benefit is often a more professional video-call background.

                                                            1. 1

                                                              Mostly trying to recover from another migraine and start tracking down the causes, then finish some reading and finish up a blog post I’m working on—it’s my first attempt at illustrating one of my own posts.

                                                              1. 13

                                                                This guy is doing it wrong:

                                                                You want to install a new package? Oh, open a shell into your container, then install it.

                                                                If you need to add dependencies, rebuild. If you really want to run a command inside your running container, docker exec is there for you to wrap with your favorite build tool.

                                                                And I don’t have a pat answer for compiling in a container, but I will say that as a deployment artifact I don’t think you should have your compiler in there.

If you really want a VM-like experience, then by all means use a VM.

                                                                1. 9

                                                                  That’s missing the point a bit, but I addressed that in the previous paragraph in the article:

                                                                  Or you can install the dependencies into the image, but then you have to rebuild the entire image and install every dependency to just add one for testing something out, adding unnecessary friction.

It’s quite possibly the right way to do it, but for a development environment I’ve just found it’s a ton of extra friction when testing things out and experimenting.

                                                                  The broader point is that everything you do interactively while developing requires some extra steps, and in my experience development takes some interactive steps (to launch a test runner, to run a dev server, etc.).

                                                                  1. 1

                                                                    in my experience development takes some interactive steps (to launch a test runner, to run a dev server, etc.).

                                                                    There’s no reason any of those things need to be interactive for day to day development. Docker is popular with people who want to minimize manual steps. It sounds like your preferred style of working is quite different.

Incidentally, you can and should run multiple containers for different processes. This usually wouldn’t take extra steps day to day, because the usual pattern is to wrap the whole command line with a command runner (make, rake, whatever you like).

                                                                    Fwiw, I have different patterns for different languages. For Python I usually develop in a virtual env, and replicate that in my docker container for deployment.

                                                                    For golang, I compile outside of the container, and deploy the container to kube for testing.

                                                                    Assuming you actually want to use docker as part of your development, I’d suggest you figure out the additional tooling to make it convenient for you.

                                                                1. 2

                                                                  This is a problem I am currently facing at work.

We run virtual events of up to fifty participants. For 99% of the time the server stack sits at near-zero load, until the event moves into a new phase and there is a tsunami of activity (~100 req/min to ~100 req/sec) for a brief moment.

I’d like to be able to automate load testing, but it would involve spinning up 50 user agents able to connect a fake video and audio source to our AV solution, and then running through at least one cycle of our event schedule, entering data, navigating, etc.

So far my solution to this has been to get everyone in the office together a few times a month for a real-world load test. If someone can offer a better, more automated alternative I would snap it up.

                                                                  1. 2

                                                                    Oh hey, that’s almost identical to the traffic patterns that we have! We have very low traffic until the moment things start, then it’s a huge spike of traffic as everyone starts at the same time.

                                                                    If you can generate enough load with everyone in the office that seems pretty reasonable (especially since you could do that by dogfooding your product for something like a town hall or internal conference). If you do look into automating more I’d love to have a chat about the problems you run into or how you approach it because it sounds like we’re facing similar problems in different markets.

                                                                  1. 1

                                                                    I can’t really vouch for it (since I’ve used it for a sum total of about 30 minutes) but https://k6.io/ was pretty easy to get going.

                                                                    1. 2

That’s about how long I’ve used k6 for as well! We evaluated it before writing our own tool; we tried so hard to avoid writing our own tool. The scenario we have is basically: we open a WebSocket and also make HTTP calls, we have to trigger those HTTP calls based on the WebSocket, and we also have a periodic background task that runs every 30 seconds (a heartbeat). I couldn’t get all of that to fit into k6, but it could be that I missed something in the docs.
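For the curious, the shape we needed is roughly this, as a single asyncio client (URLs, payloads, and message handling are placeholders; assumes the third-party aiohttp and websockets packages):

```python
# One simulated user: a WebSocket connection, HTTP calls triggered by
# whatever arrives on it, plus a 30-second heartbeat task.
import asyncio
import aiohttp          # pip install aiohttp
import websockets       # pip install websockets

WS_URL, API_URL = "wss://example.test/ws", "https://example.test/api/ack"

async def heartbeat(ws):
    while True:
        await asyncio.sleep(30)
        await ws.send("ping")

async def client():
    async with aiohttp.ClientSession() as http:
        async with websockets.connect(WS_URL) as ws:
            hb = asyncio.create_task(heartbeat(ws))
            try:
                async for msg in ws:  # each WS message triggers an HTTP call
                    await http.post(API_URL, json={"event": str(msg)})
            finally:
                hb.cancel()

async def main(n=50):  # one task per simulated user
    await asyncio.gather(*(client() for _ in range(n)))

asyncio.run(main())
```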

                                                                    1. 4

                                                                      There is one tool that I’ve enjoyed: Locust

                                                                      1. 1

Locust is pretty nice, although it falls into the same bucket of tools that support simple workloads but fail to capture more complexity. We used it at work until we couldn’t. We were probably using it wrong, but when we integrated WebSocket calls into it, it became super difficult to maintain our code and its performance suffered: we had to use nearly as many resources as we did for the system under test! It’ll be a little more feasible for this stuff when it supports async, I think.

                                                                      1. 5

                                                                        lately the bright spots in my day have been the little things:

                                                                        • reading books with my daughter; it’s basically her favorite thing to do with me. raising a child introduces stress but it offsets work stress, because it puts things in necessary perspective.
                                                                        • espresso. this is a hobby and obsession, and brings me so much joy
                                                                        • running. it keeps me sane.
                                                                        1. 5

                                                                          I’m going to hopefully get started on fiddling with some electronics! I got some hall-effect sensors and a Raspberry Pi and I want to see if I can wire them up to detect where pieces are on a chessboard. The end goal is being able to play chess on a real board with a friend in another state.

                                                                          I’m also going to be carving out time for some writing. I have a few blog posts in a half-finished state, and a few ideas I want to keep exploring.

                                                                          1. 1

Nice! I’ve been playing chess online a lot since March, and being a hardware nerd too, that sounds like a fun project. Hall-effect sensors wouldn’t have been my first guess for piece detection, but I haven’t worked with them a whole lot. Let us know how it goes!

                                                                            1. 2

                                                                              It should be fun! I have a friend who’s working on a board, too, so the goal is for us to be able to play at the end of it.

                                                                              I’m just about completely new to hardware, so I basically just searched for a sensor that would detect magnets, and then I plan to use pieces with magnets in them. If you were detecting pieces, how would you approach it? Do you see drawbacks to hall effect sensors?

                                                                              1. 1

I think hall-effect sensors have a decent shot at detecting a piece on a square; you should definitely give it a go! Have you thought about how you’d differentiate between piece types? I guess if the program knows the board state to start, you can ‘diff’ which square was vacated and which is now occupied, and evaluate whether that move was legal.
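The diff itself is simple; here’s a toy version (ignoring sensor debouncing, and punting on captures, castling, and en passant):

```python
# Recover a move by diffing two 64-square occupancy snapshots from the
# hall-effect sensors. Index 0 = a1 ... 63 = h8.
FILES = "abcdefgh"

def square(i):
    return FILES[i % 8] + str(i // 8 + 1)

def diff_move(before, after):
    vacated  = [i for i in range(64) if before[i] and not after[i]]
    occupied = [i for i in range(64) if after[i] and not before[i]]
    if len(vacated) == 1 and len(occupied) == 1:
        return square(vacated[0]) + square(occupied[0])
    # A capture vacates one square but newly occupies none; castling and
    # en passant change more squares. All of those need extra logic.
    return None

before = [False] * 64
after = [False] * 64
before[12] = True                # pawn on e2
after[28] = True                 # now on e4
print(diff_move(before, after))  # -> e2e4
```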

I think the fancy tournament DGT boards use RFID, almost like high-end casino card games. That would be quite the hobby-scale project!

                                                                                1. 2

                                                                                  Ooooh I hadn’t heard of DGT board before! Now that I know about these… I might end up getting one and just doing smaller electronics experiments.

                                                                                  I was/am going to differentiate piece types purely based on the moves happening. It’s not perfect, but I think/hope it’ll do the job. We’ll see!