Threads for magikid

    1. 5

      I think this is a great idea, but I am anticipating folks explaining why it isn’t.

      1. 22

        The main argument against is that even if you assume good intentions, it won’t be as close to production as a hosted CI (e.g. database version, OS type and version, etc.).

        Lots of developers develop on macOS and deploy on Linux, and there are tons of subtle differences between the two systems, such as case sensitivity of the filesystem and default ordering, just to give a couple of examples.
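        A concrete illustration of the case-sensitivity trap (a minimal sketch; the file name is arbitrary):

          # On macOS's default case-insensitive filesystem both names resolve to the same file;
          # on Linux the second command fails with "No such file or directory".
          echo "hello" > Readme.md
          cat README.md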

        To me the point of CI isn’t to ensure devs ran the test suite before merging. It’s to provide an environment that will catch as many things as possible that a local run wouldn’t be able to catch.

        1. 6

          To me the point of CI isn’t to ensure devs ran the test suite before merging.

          I’m basically repeating my other comment but I’m amped up about how much I dislike this idea, probably because it would tank my productivity, and this was too good an example to pass up: the point of CI isn’t (just) to ensure I ran the test suite before merging - although that’s part of it, because what if I forgot? The bigger point, though, is to run the test suite so that I don’t have to.

          I have a very, very low threshold for what’s acceptably fast for a test suite. Probably 5-10 seconds or less. If it’s slower than that, I’m simply not going to run the entire thing locally, basically ever. I’m gonna run the tests I care about, and then I’m going to push my changes and let CI either trigger auto-merge, or tell me if there are other tests I should have cared about (oops!). In the meantime, I’m fully context-switched away, not even thinking about that PR, because the work is being done for me.

          1. 4

            You’re definitely correct here but I think there are plenty of applications where you can like… just trust the intersection between app and os/arch is gonna work.

            But now that I think about it, this is such a GH-bound project and like… any such app small enough in scope or value for this to be worth using can just use the free Actions minutes. Doubt they’d go over.

            1. 6

              any such app small enough in scope or value for this to be worth using can just use the free Actions minutes.

              Yes, that’s the biggest thing that doesn’t make sense to me.

              I get the argument that hosted runners are quite weak compared to many developer machines, but if your test suite is small enough to be run on a single machine, it can probably run about as fast if you parallelize your CI just a tiny bit.

            2. 2

              I wonder if those differences are diminished if everything runs on Docker

              1. 5

                With a fully containerized dev environment, yes, that pretty much abolishes the divergence in software configuration.

                But there are more concerns than just that. Does your app rely on some caches? Dependencies?

                Were they in a clean state?

                I know it’s a bit of an extreme example, but I spend a lot of time using bundle open and editing my gems to debug stuff, and it’s not rare that I forget to run gem pristine after an investigation.

                This can lead me to have tests that pass on my machine, and will never work elsewhere. There are millions of scenarios like this one.
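                For anyone unfamiliar, the cleanup step being forgotten here is a one-liner:

                  # restore every installed gem to its pristine, unedited state
                  gem pristine --all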

                1. 3

                  I was once rejected from a job (partly) because the Dockerfile I wrote for my code assignment didn’t build on the assessor’s Apple Silicon Mac. I had developed and tested on my x86-64 Linux device. Considering how much server software is built with the same pair of configurations just with the roles switched around, I’d say they aren’t diminished enough.
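                  A cheap guard against this class of failure is to build for both platforms up front; a sketch, assuming a multi-arch-capable builder is set up and with a placeholder image tag:

                    # fails fast if the build breaks on either architecture
                    docker buildx build --platform linux/amd64,linux/arm64 -t myapp:test .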

                  1. 1

                    Was just about to point this out. I’ve seen a lot of bugs in aarch64 Linux software that don’t exist in x86-64 Linux software. You can run a container built for a non-native architecture through Docker’s compatibility layer, but it’s a pretty noticeable performance hit.
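                    For reference, that compatibility layer is just a flag away:

                      # runs an amd64 image under emulation on an arm64 host; works, but slowly
                      docker run --platform linux/amd64 --rm alpine uname -m   # prints x86_64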

              2. 13

                One of the things that I like about having CI is that it forces you to declare your dev environment programmatically. It means that you avoid the famous “works on my machine” issue, because if tests work on your machine but not in CI, something is missing.

                There are of course ways to avoid this issue, e.g. enforcing that all dev tests also run in a controlled environment (via Docker or maybe something like testcontainers), but that takes more discipline.
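                A low-discipline version of that controlled environment is to make the test entry point itself a container run; a sketch assuming a Node project (swap the image and command for your stack):

                  # run the suite inside the same image every time, on any machine
                  docker run --rm -v "$PWD":/app -w /app node:20 npm test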

                1. 2

                  This is by far the biggest plus side to CI. Missing external dependencies have bitten me before, but without CI, they’d bite me during deploy, rather than as a failed CI run. I’ve also run into issues specifically with native dependencies on Node, where it’d fetch the correct native dependency on my local machine, but fail to fetch it on CI, which likely means it would’ve failed in prod.

                2. 4

                  Here’s one: if you forget to check in a file, this won’t catch it.

                  1. 3

                    It checks if the repo is not dirty, so it shouldn’t.
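                    That kind of check is tiny; a minimal sketch (not the tool’s actual code):

                      # fail if the working tree has uncommitted or untracked changes
                      if [ -n "$(git status --porcelain)" ]; then
                        echo "working tree is dirty; commit or stash first" >&2
                        exit 1
                      fi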

                    1. 1

                      This is something “local CI” can check for. I’ve wanted this, so I added it to my build server tool (that normally runs on a remote machine) called ding. I’ll run something like “ding build make build”, where “ding build” is the CI command and “make build” is what it runs. It clones the current git repo into a temporary directory and runs the command “make build” in it, sandboxed with bubblewrap.

                      The point still stands that you can forget to run the local CI.

                    2. 1

                      What’s to stop me from lying and making the gh api calls manually?

                    3. 3

                      With all these constraints, I’d just pay Nabu Casa the $6.50/month to provide external access to my local instance.

                      1. 1

                        I thought “why buy a Nabu Casa subscription if I already have a server with a certificate at home?”. But yes…

                        1. 3

                          But what does the Nabu Casa proxy add security-wise that your own local one doesn’t? Authentication is still done by your local Home Assistant, all their cloud bit does is look at the SNI of an incoming connection and send it to the right HA instance based on this (which holds a local LE certificate for the host name). If I can attack your local HA through HTTP requests, I can do the same through Nabu Casa.

                          EDIT: as a minor thing, Nabu Casa allows you to remotely toggle access on and off, but that wouldn’t be too hard to do in a DIY setup too (and could even do more advanced things, e.g. limiting which IPs can access)
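                          For instance, the IP limiting can be a single firewall rule; a sketch assuming ufw, Home Assistant’s default port, and a placeholder address:

                            # allow only one known address to reach Home Assistant remotely
                            sudo ufw allow from 203.0.113.7 to any port 8123 proto tcp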

                      2. 11

                        What I took from this article is that this is yet another bug which Sentry would have notified them of within minutes. Surely the duplicate insertions against a uniqueness constraint raised an exception; they just didn’t have proper logging and tracing.

                        1. 4

                          I was thinking a very similar thought. It seems odd that they didn’t have any logging around this or retry logic to properly regenerate the id on a collision.

                        2. 8

                          There is a collection called bsd-games, of which I play the Linux ports. https://wiki.linuxquestions.org/wiki/BSD_games

                          I’ve not played them all but I really got addicted to atc, which is an air traffic control game.
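                          On Debian and Ubuntu the whole collection ships as a single package (the name varies on other distros):

                            sudo apt install bsdgames
                            atc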

                          1. 4

                            atc is definitely a good time waster. Also, not a game per se, but I always found rain charming. I even rewrote it to have colors.

                            1. 1

                              Love these. Used to play hunt on my college’s server with friends :)

                              1. 1

                                Yes! I love bsd-games. Lots of good choices and widely available.

                              2. 17
                                1. 3

                                  bat

                                  Which bat do you mean? When I see that I think of the old, much beloved, Windows email client.

                                  1. 5

                                    Oh, I used that one! 🥹

                                    Given the context, I assume the author meant the bat without the “The” and the bang: https://github.com/sharkdp/bat

                                    1. 4

                                      That is indeed the one! It’s a replacement for cat.

                                  2. 2

                                    I’m with you there! Those are usually the first utilities I install on any new system I use.

                                  3. 2

                                    Wow, that was a swift response after the article describing the issues.

                                    1. 1

                                      What’s it licensed as? :D

                                      1. 2

                                        I think MIT but a LICENSE file would be nice.

                                        1. 1

                                          I don’t know much about licenses, so if you have any suggested reading, I’m open to it. I did put MIT in the package.json, but not because of any deep understanding of the differences. I would like it to be as open as possible.

                                          1. 3

                                            2-clause BSD or MIT seem to be the permissive licenses of choice, unless you’re a company, in which case Apache 2. IANAL; I personally prefer copylefts, but you do you.

                                            1. 3

                                              I’m a fan of MPL-2.0 because it requires open sourcing changes (per file) but allows the file to be bundled/minified and to be hosted. It works very practically for front-end while not putting too much burden on the maintainer to merge in all commits right away, but, like copyleft, it requires changes to be open for others to read, learn from, and use.

                                            2. 2

                                              Obscurity precludes common antifeatures from software builds. For example, Firefox’s EME DRM module does not exist at all in the ppc64le package.

                                              Does that mean that one can’t stream from a service like Netflix or Hulu using this motherboard?

                                              1. 14

                                                Yes. It means that Netflix and Hulu prevent you from streaming on anything other than Windows, OSX, and mainstream Linux on mainstream hardware. Even choosing to use an insufficiently popular libc means you can’t load the DRM module.

                                                1. 2

                                                  Yes. It means that Netflix and Hulu prevent you from streaming on anything other than Windows, OSX, and mainstream Linux on mainstream hardware.

                                                  … which is an utterly predictable outcome that was pointed out to the W3C while they were standardising EME. To no avail, sadly.

                                                  To make it super, super, clear: those Netflix and Hulu Web experiences are completely W3C standards compliant.

                                                  1. 1

                                                    Though they do bless some embedded solutions, like the Roku boxes (which are Linux ARM boxes). You also can’t get full resolution except on such a box, presumably because of the stronger display snooping protections those boxes provide.

                                                2. 6

                                                  I’m not sure when I would use this but I love it when people use languages in ways the authors hadn’t intended.

                                                  1. 16

                                                    Regardless of the practicality of this approach, this exercise found many bugs in the new Go generics implementation (most of which are now fixed) and opened several questions into their semantics.

                                                    1. 2

                                                      … this exercise found many bugs in the new Go generics implementation (most of which are now fixed) and opened several questions into their semantics.

                                                      Can you provide a link to the related discussion?

                                                    2. 4

                                                      “Here is an extremely convoluted way to make C kinda look sorta like C++ but without the additional type safety.”

                                                      “What’s the advantage?”

                                                      “It’s not C++.”

                                                      “SOLD!”

                                                    3. 3

                                                      Recovering from a corrupted filesystem that ate all my files (yeah, all of them, except for some empty root directories: /media, /proc, etc.) and half of my life.

                                                      And no, my backups were not up to date.

                                                      This time I will keep my backups up to date. This time I will keep my backups up to date. This time I will keep my backups up to date.

                                                      No, I really will this time. I swear.

                                                      1. 5

                                                        I’m gonna take a nubbin out of Amazon culture that I’ve found super useful in cases like this: Rely on mechanisms. Not best intentions.

                                                        A backup strategy that requires discipline and manual intervention is a recipe for self-disappointment and failure. Make it effortless and you win big, today and into the future.

                                                        There’s something to be said here for having some kind of central file store (I use a NAS, YMMV) and backing that up to at least 2 additional places.

                                                        My NAS backs up to a local USB disk as well as to Backblaze, and I’ve made actually using it for my regular work trivial. On the Linux side that means autofs (it’s easier than you think to set up, but in true Linux fashion you’d never know that from the docs; blog post incoming), and on the Mac/Windows side the very first thing I do is set up a persistent fileshare and do all important work there.

                                                        It’s all work for sure but it’s a one time investment that’s allowed me to make continual iterative forward progress rather than having to start from scratch every few months when my laptop gets blown away :)

                                                        1. 3

                                                          Inspiring. My backup setup is pretty good, using borg to back up several computers onto a dedicated drive on my home server/NAS, but… Maybe what I should do with my weekend is get notifications for failed backups working properly and automate restore tests.
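                                                          A rough sketch of such a restore test, assuming a local repository path (adapt to your setup):

                                                            #!/bin/sh
                                                            # smoke-test the newest archive: a dry-run extract reads the
                                                            # whole archive back without writing anything to disk
                                                            REPO=/mnt/backup/borg
                                                            latest=$(borg list --last 1 --format '{archive}' "$REPO")
                                                            borg extract --dry-run "$REPO::$latest"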

                                                          1. 3

                                                            I’ve found borgmatic to be great for automating my borg backups and sending me notifications when there are problems.

                                                            1. 2

                                                              You’re already well ahead of the game compared to most people :) I should look into borg.

                                                              I’m using a Synology NAS at this point because when I bought it ~3 years ago I wasn’t confident enough in my own skills to be sure that I could manage a server of my own without losing data.

                                                              Were I to do it again today I’d definitely look REALLY hard at using FreeNAS or UnRAID or … Something :)

                                                            2. 2

                                                              My NAS backs up to a local USB disk as well as to Backblaze

                                                              Backblaze is so simple to set up and forget about that I think it should be the first step in most anyone’s back-up strategy.

                                                            3. 1

                                                              I have a folder on my desktop. I have the same folder on my laptop. And my NAS. I run backups from my NAS only.

                                                              Syncthing is really nice like that.

                                                            4. 1

                                                              I too can look up words in a dictionary:

                                                              main: Most important; principal

                                                              A branch has nothing to do with decisions or dialogue.

                                                              1. 11

                                                                Updates causing a reboot

                                                                When this happens though, it’s not only a reboot. It’s then waiting forever for the updates to install. It’s not an exaggeration to say I’ve seen my partner’s laptop sit there for an hour “installing updates”.

                                                                1. 1

                                                                  A slightly altered database schema could handle this through a check constraint by storing the number of symptoms.

                                                                  CREATE TABLE arrivals (
                                                                    id INTEGER NOT NULL PRIMARY KEY,
                                                                    has_symptoms BOOL NOT NULL DEFAULT FALSE,
                                                                    num_symptoms INT NOT NULL DEFAULT 0,
                                                                    pcr_test_result BOOL NOT NULL DEFAULT FALSE,
                                                                    -- the flag and the count must agree
                                                                    CHECK ((has_symptoms AND num_symptoms > 0) OR (NOT has_symptoms AND num_symptoms = 0))
                                                                  );
                                                                  
                                                                  -- assumed shape of the per-symptom table referenced below (not shown in the article)
                                                                  CREATE TABLE arrival_symptoms (
                                                                    arrival_id INTEGER NOT NULL REFERENCES arrivals (id),
                                                                    symptom_id INTEGER NOT NULL
                                                                  );
                                                                  
                                                                  -- consistent row: flag set, count positive
                                                                  INSERT INTO arrivals (id, has_symptoms, num_symptoms, pcr_test_result) VALUES (123456, true, 2, true);
                                                                  INSERT INTO arrival_symptoms VALUES (123456, 1);
                                                                  
                                                                  -- a count without the flag is rejected
                                                                  INSERT INTO arrivals (id, pcr_test_result, num_symptoms) VALUES (123458, false, 5);
                                                                  -- error: new row for relation "arrivals" violates check constraint "arrivals_check"
                                                                  
                                                                  -- the flag without a count (num_symptoms defaults to 0) is rejected too
                                                                  INSERT INTO arrivals (id, pcr_test_result, has_symptoms) VALUES (123458, true, true);
                                                                  -- error: new row for relation "arrivals" violates check constraint "arrivals_check"
                                                                  
                                                                  1. 1

                                                                    But wouldn’t it create another potential for discrepancy between arrivals.num_symptoms and the number of rows in arrival_symptoms?

                                                                  2. 9

                                                                    It looks like the complaint here is that the author doesn’t really grok async processes, so they demand sync processes via threads. FYI, you can use worker threads in Node.js to run code on a separate thread.

                                                                    I’m not sure what the data structures non sequitur was about, you can write any data structures you need in JS if they aren’t already in the language.

                                                                    This article is all about personal preference, though. The author can’t remember the Promise API, but in the context of the post, it seems to mean they can’t remember how to write client.query(/*..*/).then() instead of using await client.query. Is it that abstracted for you or did you just never really use promises to begin with?

                                                                    I’ve been with JavaScript for a long time (since roughly 1998) and I remember what it was like when it was pretty much something you used to add a little jazz hands action to your site via DHTML. The evolution of JS (which is criticized by the author) is due to millions of eyes being on the language and adding improvements over time. JS naturally came out of an async ecosystem (the browser), so Node followed the same idea. Callbacks were a hassle to deal with, so we got to Promises. Promise syntax is a bit unwieldy, so we switched to async/await. If someone comes up with an easier way to do it, they will. You can still write fully blocking functions if you want. You can also avoid your “red/blue” function thing by using Promises without signifying a function with async. Just use the old syntax.

                                                                    I don’t primarily develop in Node, but I see a lot of misdirected anger or hate on the language and ecosystem because people just don’t understand how to use it. It’s different than a lot of stuff for sure, but it is a nice tool to have.

                                                                    1. 4

                                                                      Thank you! I’ve been writing JS since 1999 so I definitely relate to the DHTML days. For the last 8-10 years I’ve been writing Ruby professionally, and switched to JS (well, mostly TypeScript) just last year when I changed jobs. Gotta say, there’s been a ton of work done around developer ergonomics and making the language a bit less unwieldy.

                                                                      1. 4

                                                                        I’m not sure what the data structures non sequitur was about

                                                                        I thought it tied in quite nicely to the part about Erlang and Elixir. Erlang was designed with good data structures around concurrency, but Node’s data structures have been strained, since callbacks and Promises are built on top of that abstraction.

                                                                      2. 4

                                                                        This problem has bitten my butt so many times that I finally added an alias in my .gitconfig to track the remote branch.
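                                                                    Mine is roughly the following (a hypothetical reconstruction; the alias name is arbitrary):

                                                                      # "git pushup" pushes the current branch and sets its upstream in one go
                                                                      git config --global alias.pushup '!git push -u origin "$(git branch --show-current)"'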

                                                                        1. 4

                                                                      I’m used to git checkout -b [new branch], which on git push errors and suggests the git push -u origin [new branch] mentioned in the blog post. I might be misreading, but it looks like this sidesteps the upstream issues mentioned.

                                                                          If I were to try to update a feature branch from master/main, I’d reach for git merge first, but if there aren’t conflicts, why bother?
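                                                                      The merge route, for reference, is nothing fancier than:

                                                                        git fetch origin
                                                                        git merge origin/main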

                                                                          1. 2

                                                                            I’ve never seen this issue, and I’m having a hard time following along. I might retrace the steps to understand better.

                                                                          2. 3

                                                                            One of the things this video explains is how to configure less as your pager. The next thing I would suggest trying is setting up delta as your Git pager. It adds syntax highlighting to your diffs, then passes the formatted diff on to your $PAGER. You’re still using less, but any diffs you’re looking at are a little more readable.

                                                                            You may want to change Delta’s configuration. I use --color-only. I find that without that flag, Delta’s reformatted filenames blend in too much with code around them.
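                                                                        Concretely, that setup is one config line (using the flag discussed above):

                                                                          # use delta as Git's pager, coloring the diff but leaving layout alone
                                                                          git config --global core.pager 'delta --color-only'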

                                                                            1. 1

                                                                              Thanks for the suggestion! That’s a handy tool.

                                                                            2. 3

                                                                              I find it a little frustrating that they renamed the functions too.

                                                                          In the given example, if they had moved the function TempDir into the os package, the only change needed would have been deleting the io/ioutil import. But they also renamed the function, so the code needs to change as well.

                                                                              1. 9

                                                                                os.TempDir() already exists and is used to get the system’s temp directory (i.e. ${TMPDIR:-/tmp} on most Unix systems).

                                                                            ioutil.TempDir() will keep working (it’s now a thin wrapper around its replacement, os.MkdirTemp); you don’t need to update any of your code, and I believe go fix should make all the required changes anyway.

                                                                              2. 1

                                                                            I don’t really see how this would cause any problems. I’m pretty sure Twitter just follows the link once to generate a preview. It wouldn’t loop because it doesn’t follow any links on the previewed page.

                                                                                1. 1

                                                                              But to generate the preview, it must render itself, which requires it to generate a preview.

                                                                                  1. 1

                                                                                    The preview is generated asynchronously. It’s going to either get a 404, or the page without a preview.

                                                                                2. 3

                                                                                  Small shell scripts are your friend, something like:

                                                                              cat << 'EOF' > /usr/local/bin/publicise-recording
                                                                              #!/bin/sh
                                                                              cd /var/recordings
                                                                              mv "$@" public/
                                                                              cd -
                                                                              EOF
                                                                              chmod +x /usr/local/bin/publicise-recording
                                                                                  

                                                                                  Then, hit their hand with a ruler when they use mv, rather than the script.

                                                                              You may have grand plans for VOD and making recordings public by default, but those plans may not come to fruition for another year, if ever. If you spend 5 minutes putting up guards around dangerous manual processes, you won’t have to spend hours grepping through binary files…

                                                                                  1. 5

                                                                                    I think that last cd is superfluous

                                                                                    1. 1

                                                                                      Nah, it would move you back to the directory you started in.

                                                                                      1. 9

                                                                                    It won’t do either. It operates in the context of the shell process running the script, which is a subprocess of the shell you invoke it from. That shell’s CWD will be unaffected.
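                                                                                    Easy to verify from any shell:

                                                                                      pwd              # e.g. /home/you
                                                                                      sh -c 'cd /tmp'  # the cd happens in the child process only
                                                                                      pwd              # still /home/you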

                                                                                      2. 1

                                                                                        Yeah, that was just a nice thing to put them back where they were, if they were in a different directory

                                                                                        1. 3

                                                                                          That would only matter if the script was sourced though.

                                                                                    2. 5

                                                                                      As cool as this is, kind of a misleading title:

                                                                                      The first invocation of the script will be slower as the script is compiled