I think this is a great idea, but I am anticipating folks explaining why it isn’t.
The main argument against is that even if you assume good intentions, it won’t be as close to production as a hosted CI (e.g. database version, OS type and version, etc.).
Lots of developers develop on macOS and deploy on Linux, and there are tons of subtle differences between the two systems, such as filesystem case sensitivity and default ordering, just to give a couple of examples.
To me the point of CI isn’t to ensure devs ran the test suite before merging. It’s to provide an environment that catches as much as possible of what a local run would miss.
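To make those differences concrete, here’s a minimal Node/TypeScript sketch of both gotchas (the file and directory names are just placeholders):

```ts
import * as fs from "node:fs";

// 1. Case sensitivity: on macOS's default case-insensitive filesystem this is
//    usually true even though the casing is wrong; on typical Linux filesystems
//    it's false, so code that "works on my Mac" can break in a Linux CI job.
console.log(fs.existsSync("readme.MD"));

// 2. Default ordering: readdir order is filesystem-dependent and not guaranteed,
//    so a test that relies on it can pass locally and fail elsewhere. Sort explicitly.
const entries = fs.readdirSync("./fixtures").sort();
console.log(entries);
```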
I’m basically repeating my other comment, but I’m amped up about how much I dislike this idea, probably because it would tank my productivity, and this was too good an example to pass up: the point of CI isn’t (just) to ensure I ran the test suite before merging - although that’s part of it, because what if I forgot? The bigger point, though, is to run the test suite so that I don’t have to.
I have a very, very low threshold for what’s acceptably fast for a test suite. Probably 5-10 seconds or less. If it’s slower than that, I’m simply not going to run the entire thing locally, basically ever. I’m gonna run the tests I care about, and then I’m going to push my changes and let CI either trigger auto-merge, or tell me if there are other tests I should have cared about (oops!). In the meantime, I’m fully context-switched away, not even thinking about that PR, because the work is being done for me.
You’re definitely correct here but I think there are plenty of applications where you can like… just trust the intersection between app and os/arch is gonna work.
But now that I think about it, this is such a GH-bound project and like… any such app small enough in scope or value for this to be worth using can just use the free Actions minutes. Doubt they’d go over.
Yes, that’s the biggest thing that doesn’t make sense to me.
I get the argument that hosted runners are quite weak compared to many developer machines, but if your test suite is small enough to be run on a single machine, it can probably run about as fast if you parallelize your CI just a tiny bit.
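For what it’s worth, “parallelize your CI just a tiny bit” can be as small as a matrix of shards in a GitHub Actions workflow. This is only a rough sketch; the ./run-tests script and its --shard flag are stand-ins for whatever sharding option your test runner actually supports:

```yaml
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]   # four shards run as parallel jobs
    steps:
      - uses: actions/checkout@v4
      # Hypothetical test runner invocation - substitute your framework's sharding flag.
      - run: ./run-tests --shard ${{ matrix.shard }}/4
```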
I wonder if those differences are diminished if everything runs on Docker
With a fully containerized dev environment, yes, that pretty much abolishes the divergence in software configuration.
But there are more concerns than just that. Does your app rely on some caches? Dependencies?
Were they in a clean state?
I know it’s a bit of an extreme example, but I spend a lot of time using bundle open and editing my gems to debug stuff, and it’s not rare that I forget to gem pristine after an investigation. This can lead to tests that pass on my machine but will never work elsewhere. There are millions of scenarios like this one.
I was once rejected from a job (partly) because the Dockerfile I wrote for my code assignment didn’t build on the assessor’s Apple Silicon Mac. I had developed and tested on my x86-64 Linux device. Considering how much server software is built with the same pair of configurations just with the roles switched around, I’d say they aren’t diminished enough.
Was just about to point this out. I’ve seen a lot of bugs in aarch64 Linux software that don’t exist in x86-64 Linux software. You can run a container built for a non-native architecture through Docker’s compatibility layer, but it’s a pretty noticeable performance hit.
One of the things I like about having CI is that it forces you to declare your dev environment programmatically. It means you avoid the famous “works on my machine” issue, because if tests work on your machine but not in CI, something is missing.
There are of course ways to avoid this issue, such as enforcing that all dev tests also run in a controlled environment (either via Docker or maybe something like Testcontainers), but it takes more discipline.
This is by far the biggest plus side to CI. Missing external dependencies have bitten me before, but without CI, they’d bite me during deploy, rather than as a failed CI run. I’ve also run into issues specifically with native dependencies on Node, where it’d fetch the correct native dependency on my local machine, but fail to fetch it on CI, which likely means it would’ve failed in prod.
Here’s one: if you forget to check in a file, this won’t catch it.
It checks if the repo is not dirty, so it shouldn’t.
This is something “local CI” can check for. I’ve wanted this, so I added it to my build server tool (that normally runs on a remote machine) called ding. I’ll run something like “ding build make build” where “ding build” is the ci command, and “make build” is what it runs. It clones the current git repo into a temporary directory, and runs the command “make build” in it, sandboxed with bubblewrap.
The point still stands that you can forget to run the local CI.
What’s to stop me from lying and making the gh api calls manually?
Sebastian Lague is such an impressive code educator. He can make even mundane topics fascinating, between his style of explanation, and his general always-curious approach, as well as taking you through all the rabbit holes and issues he ran into along the way.
If you haven’t checked out his other videos, I strongly recommend checking out his channel, and just picking a video that seems even marginally interesting. His videos also make a strong case for games as a powerful teaching tool.
Not too closely related, but this is also one reason why I think that it’s important to review your own pull requests.
Depending on the size of the work, things can get lost that will only resurface once you do the review (or play around with the feature again).
This is a great trigger to give things another go and make some adjustments. I think the most important aspect of this is that it helps with respecting the time of others, since you might catch small things before requesting a review from someone else.
Self review is important! I’ve definitely found many issues that were quick and easy to resolve just by doing one last pass on my work, both through code review, and just using what I built.
Exactly. I started doing this just to avoid embarrassing typos etc. before I send my work over to someone else. But over the years, a final review of my own work, in the interface where I would normally review other people’s code, has proven to be very valuable. It:
- Avoids a back and forth on those stupid mistakes like typos
- Checks that the change is consistent throughout. Sometimes I slightly change my solution while working on it and I forget to update the parts that are already done to incorporate these new insights.
- Gives me a sense of fulfilment. Taking a moment to realize: yes, this is what I wanted to accomplish. And now it is done.
I usually do a pass to add comments before I raise a PR. For me, it’s the sweet spot: I still remember how the code worked when I stop to think, but I’ve got just enough distance to understand what might be hard to understand. As I add comments, I often find I write things like ‘this works because of this invariant’ and then realise that there’s a corner case where that invariant may not hold and go and fix the code, or write an explanation of some complex code and realise that the explanation leads to a much simpler implementation.
I’ve lately toyed with including a “suggested reading order” in some larger MRs, particularly ones where I notice that just starting from the top of the diff vs. base would throw you into the “middle” of the change. Feedback has been positive!
At $OLD_JOB I made a point to always read my CR (what they called PRs) with “fresh” eyes before publishing it to anyone else. It’s amazing the things you catch when viewing the same content in a different context and UI!
I advocate including a somewhat detailed test plan in the PR description. Not because I expect the reviewer to follow it to the letter, but because in constructing it the author almost always discovers things that don’t seem quite right.
I always read my patches after I send them. This ensures a v2 when I inevitably notice a dumb typo.
Apple isn’t acknowledging any wrongdoing in the settlement…
Does this prove guilt though?
That said, I don’t think I’ve seen a settlement where either party has admitted wrongdoing. Generally because those settlements could be used as evidence in future cases if they did.
Lua is a language I’d recommend anyone check out, and I’m glad to see an article showcasing some of its coolness. That said, I don’t know if I’d recommend it for anyone’s primary programming language. (Not to say that it shouldn’t be!) I’ve used it for configuring WezTerm, as well as some fun coding adventures in Minecraft through ComputerCraft: Tweaked and similar mods. At one point, I even was working on a Twitch chat display for fun.
It’s got a lot of power, and its proliferation definitely comes from its ease of embedding. Though it’s a language that, to me, always feels foreign. Coming from mostly JS/TS, everything feels just a little “off” when working with it, but it’s probably the language that’s easiest for me to hold in my mind while I’m using it. And there are some projects like MoonScript that transpile to Lua that can make it feel a bit more familiar.
I feel like this article doesn’t explain why Rails is better “low code” than low code. What makes Rails better than any other programming language? It mentions scaffolds and turbo and authgen; I would have liked more details on how these work and speed up the process.
I’m not a Rails developer, so feel free to take this with a grain of salt, but I can try and elaborate a bit on those questions.
Scaffolds will generate the basic code necessary for a new component. Rails follows the Model/View/Controller architecture, so the model defines the shape of the data, views handle rendering that data, and controllers handle interacting with and updating that data.
Turbo is part of a broader framework that allows for partial page updates via HTML rather than using a full client-side framework + JS, and breaks parts of individual pages into smaller components, among other things. I haven’t used it myself, so I can’t really properly praise or criticize it.
Authgen is essentially a scaffolding tool for authentication specifically. This article provides some more detail on what exactly it does behind the scenes, but it gives you a full authentication system with user management, password resets, and sessions.
In general, I’d say Rails speeds up the process by providing robust tooling to automatically generate the necessary code for things to just work.
I’d also say this article is probably more focused on criticizing how restrictive low-code tools are when you start to run up against the walls of what they allow you to do, and on arguing that by starting with a framework that handles a lot of the drudgery for you, you can get a better “low-code” tool than a low-code tool. Django, Rails, Laravel, etc., it doesn’t matter a whole lot, though Rails in general probably has some of the most robust tools for automatically generating dang near anything you’d need in a basic app, while still allowing you to have complete control with code where needed.
I feel like any MVC framework with sensible defaults, scaffolding, well-known libraries, and code-first database models has the same benefits the author is talking about; Rails is just the one he uses. I feel like everything in the article also applies to Django (Python) or ASP.NET MVC + Entity Framework (C#).
this discounts just how productive rails is for a solo developer. it’s called the solo founder framework for a very good reason. it’s insanely productive.
Last time I was doing real web work for money was about a decade ago and I haven’t kept up with that world much. Most of my projects were in Django but I had come across a Ruby library that looked like a very good fit for a client project and decided to take a stab at it with Rails. I explained to my client that I was going to try some different tech for the project, was going to time box the experiment to 40h and if at the end of the week I didn’t like how it was shaping up I would eat the 40h and do it in Django instead. Overall estimate for the project was around 120-150h.
By the end of the week I had a fully functional prototype with just a couple of loose ends to clean up. Overall it came out to about 60h. I was shocked at how productive it was.
It all comes down to escape hatches. Low-code tools have to have their escape hatches welded on, and hope they don’t interfere with anything else in how the tool expects things to work, whereas Rails is just an escape hatch ready to be used, because it’s part of the foundation.
Very cool! This is reminiscent of Inertial Reference Systems, which use heading, velocity, and other information to determine a flight’s position relative to a fixed starting point. Mentour Pilot has an interesting video on IRS/ADIRUs and other backup navigation systems in aviation.
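As a rough illustration of that idea, here’s a heavily simplified dead-reckoning sketch. Real IRS units integrate accelerometer and gyro data with a lot of error correction; the flat-earth, metres-east/north model below is purely for intuition:

```ts
interface Fix { x: number; y: number }   // metres east/north of the starting point

interface Sample { headingDeg: number; speedMps: number; dtSec: number }

// Integrate heading + ground-speed samples from a known starting fix.
function deadReckon(start: Fix, samples: Sample[]): Fix {
  let { x, y } = start;
  for (const s of samples) {
    const theta = (s.headingDeg * Math.PI) / 180; // 0 deg = north, increasing clockwise
    x += Math.sin(theta) * s.speedMps * s.dtSec;  // east component
    y += Math.cos(theta) * s.speedMps * s.dtSec;  // north component
  }
  return { x, y };
}

// One minute heading 090 at 250 m/s puts us roughly 15 km east of the start.
console.log(deadReckon({ x: 0, y: 0 }, [{ headingDeg: 90, speedMps: 250, dtSec: 60 }]));
```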
I was hoping the article would end with a comparison of the flight’s track from flightradar / ADS-B data and the calculated track from this… I wonder how accurate it is.
My family was homeless when I was born. But my parents found work, the council found us a flat, and 20 years later my dad was the managing director of a very large engineering firm, and my parents built a fabulous home.
To cut a long story short, much later my parents had a few personal problems, and sadly my mum finally killed herself. I don’t think you ever get over it, you really just learn to live with it.
My life then went a bit pear shaped. I trusted bad people and guess what, really bad things happened. Very kind friends managed to put me back on my feet, and then, at 55, I met Judy and her family, and we’ve had the most wonderful 15 years together.
So please, what ever happens, please don’t give up.
Beautiful.
Obligatory: https://www.youtube.com/watch?v=iI8zPbEHRl0
I don’t often comment, but I came here to paste that very quote in. Don’t ever give up!
I think that short passage was the part of the article I spent the longest on. And for a relatively short, simple website, there’s a lot of character and heart to the design and the presentation of it.
Thank you for sharing. I had no idea of the existence of the includeIf directive. Shamefully, I had resorted to just having no global user.name or user.email and setting it on a repo-by-repo basis. For me, just detecting by directory is plenty, as all of my work-related code is within a directory named after my company.
I saw someone over on HN who used includeIf to differentiate based on SSH identities, presumably by using an SSH Host entry, so both the SSH and Git identities are tied together.
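For reference, the directory-based variant looks roughly like this (the paths and identity file name are made up for the example):

```ini
# ~/.gitconfig
[includeIf "gitdir:~/work/acme/"]
    path = ~/.gitconfig-acme

# ~/.gitconfig-acme
[user]
    name = Your Name
    email = you@acme.example
```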
I’ve been using Cursor at work since they pay for a license, and it’s reinforced to me that I don’t think I’d ever use it outside of that.
A) I don’t really want to give money to the current crop of AI companies, between the moral and environmental costs,
B) It’s unhelpful at least as often as it’s helpful, so even when it does give me something correct, it takes me about as long to verify its correctness as if I wrote it myself in the first place, and
C) I enjoy the exercise of doing even the mundane things. It’s working those mental muscles to keep myself sharp, as well as just enjoying the zen of programming. The same thing that drew me to this line of work in the first place.
Now I’m interested in seeing what happens if you try to open that menu using, e.g. Vimium. Besides the omnipresent likelihood that you can’t even get a link hint for the menu item, what behavior do you get? Keyboard, mouse, none at all, or something else?
There are also some interesting caveats with assistive technology like JAWS/NVDA, where the order of events might be different, or different events might be fired, as linked in this post from the follow-up post that caius shared in this thread (although the author has confirmed that at least some of these have changed). I imagine some of the quirks would probably be similarly present when using Vimium? It’d be interesting to test.
There’s also a followup post explaining in more detail why things were used the way they were: https://www.joshtumath.uk/posts/2024-11-18-how-i-refactored-the-bbc-navigation-bar-and-a-follow-up-faq/
Getting feedback directly from the author of the PointerEvents spec is very cool, even if the web has changed underneath it. And it just goes to show how important reading a spec can be, specifically the unambiguous definition of pointerId vs pointerType.
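To illustrate the distinction (this sketch is mine, not from the post, and the #menu-button selector is hypothetical): pointerType says what kind of input produced the event, while pointerId is just an opaque identifier for one active pointer.

```ts
const button = document.querySelector<HTMLButtonElement>("#menu-button");

button?.addEventListener("pointerdown", (event: PointerEvent) => {
  // pointerType distinguishes the input device: "mouse", "pen", or "touch".
  if (event.pointerType === "touch") {
    // e.g. skip hover-only affordances for touch input
  }

  // pointerId only identifies this particular active pointer so it can be tracked
  // across pointerdown/pointermove/pointerup; it says nothing about the device,
  // so branching on it for mouse-vs-touch logic is a bug.
  console.log(`pointer ${event.pointerId} is a ${event.pointerType}`);
});
```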
There are some really cool insights on software testing in here. The article it links to on tautological tests is a great read on its own, and a good reminder that it’s easy to fall into the trap of testing the libraries you’re relying on, rather than your own code, if you’re just trying to write tests / hit test coverage.
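As a made-up illustration of that trap (fetchUser and this jest-style setup are hypothetical, not from the linked article), a test like this only re-asserts what its own stub was configured to return:

```ts
import { jest, test, expect } from "@jest/globals";
import { fetchUser } from "./users"; // hypothetical function that forwards to the injected client

test("fetchUser returns the user", async () => {
  const fakeUser = { id: 1, name: "Ada" };
  const client = { get: jest.fn(async () => fakeUser) };

  // Whatever fetchUser does or doesn't do, it can only ever hand back fakeUser,
  // so this assertion restates the stub's setup instead of testing real behaviour
  // (error handling, URL building, response mapping, validation...).
  expect(await fetchUser(client, 1)).toEqual(fakeUser);
});
```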
Package managers really should have some sort of level of trust associated with packages. Not inferred trust from arbitrary metrics like number of downloads, but actual vetting from trusted sources. A nice idea might be to have a number next to each package showing all the sources that officially trust it, and you can click the number to see the list of sources - be they websites, approval from notable figures in the community attached to the package’s profile itself, whatever. A particular source can be personally un-trusted, and people could even kick around custom lists of trusted and untrusted sources/packages, kind of like an adblock rule list.
Of course, we’ll never actually have something that nice. That would simply be too convenient.
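Purely as a thought experiment, the metadata such a scheme implies might look something like this; none of these types exist in any real registry API:

```ts
interface TrustEndorsement {
  source: string;     // a website, org, or notable community member
  url?: string;       // where the endorsement is published
  version?: string;   // endorsements could be pinned to a specific release
}

interface PackageTrust {
  packageName: string;
  endorsements: TrustEndorsement[];   // the clickable "number next to a package"
}

// Shareable allow/deny lists, in the spirit of adblock rule lists.
interface TrustRuleList {
  name: string;
  trustedSources: string[];
  untrustedSources: string[];
}
```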
Cargo vet tries to do this for Rust: https://github.com/mozilla/cargo-vet
I’m not aware of other languages having similar things.
Even still, trusted packages can be hijacked. Malware on the dev’s machine, supply chain attacks on upstream packages, malicious contributors playing the long game à la Jia Tan, or a package changing hands via purchase/a maintainer stepping down due to lack of time or burnout.
Cryptographic signatures could tamp down on some of these, but having endorsements could give a false sense of security, or introduce complacency. It’s a hard balance, since all it takes is one moment where your vigilance isn’t as high as it should be, and you could be infected with malware.
“But sometimes” is a very dangerous and short-sighted reason to stall technological progress
A sophisticated targeted attack can hijack the repository of a trusted package. A burglar can break into my house and steal all my money. A superpower can decide they don’t like living things anymore and launch an ICBM somewhere important. None of these are excuses as to why we can’t have a more rigorous process of gaining users’ trust or separating known packages from unknown ones, instead of simply letting them all float in the same pool of indistinguishable packages where the only difference is the name and the number of downloads. Because right now, that is the trust system people are already using, and it’s far less reliable than what an actual, intentional system of trust reified into the package manager itself would most likely be.
It’s interesting to see a platform basically turned into a Quine as a Service. Some very cool info on Deno’s modular permissions too. Going through the deployed handlers, it looks like there are some interesting examples that have tried to escape the sandbox.
I may be in the minority, but I can definitely notice when an app takes more than one frame (definitely less than 100ms!) to respond to my clicks—though most apps I use at work don’t do much animation (Atlassian sticks out like a sore thumb, though I don’t think even adding animations can save them…)
I do feel like even when an app does have animations, that 100ms is noticeable. Navigating menus in Google Calendar vs Telegram Desktop is like a night and day difference.
This makes me wonder what the psychological effect of animations on our perception of sluggishness is. I bet there are studies about this, I’m just too lazy to look for them ^^’
It’s definitely a scale. There are some apps where I find myself frustrated with the animations before they even finish. Oftentimes I just want to do something, and if it feels like the app is trying to make a whole presentation out of it, it irks me more than if it were actually just sluggish, especially if it’s in response to something like a button click. I should probably catalogue some of the worst offenders at some point, just so I have a frame of reference. Though I think slide-in/expanding sidebars that reshape the main content are probably some of my least favorite ones to see.
I think the apps and sites that use animation the best are some where you don’t even necessarily notice them.
I love stories of people just making stuff for personal use. No big sell to move your own website over, just a story of solving a pain point.
This! I just created a template language in a Rust project of mine. After I stopped, I looked and realized “oh, this is just Handlebars, but bad. Nice!”
If you’ve got the drive to make your own version of {{tech_name}}, do it. Even if it’s not as good, there’s so much value in understanding how something works. Plus, there’s no worry about versioning moving out from under you, about some breaking change you weren’t expecting.
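For a sense of how small the core of a “Handlebars, but bad” can be, here’s a hypothetical TypeScript sketch (not the actual Rust code from the project above); it only handles simple {{name}} substitution, with no escaping, helpers, or partials:

```ts
function render(template: string, data: Record<string, string>): string {
  // Replace each {{name}} with the matching value; leave unknown placeholders untouched.
  return template.replace(/\{\{(\w+)\}\}/g, (placeholder, key: string) =>
    data[key] ?? placeholder,
  );
}

console.log(render("Hello, {{name}}!", { name: "world" })); // "Hello, world!"
```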
Registrars:
- One domain on Name.com simply because they offer .im domains.
Email:
- Zoho
Servers:
- WholesaleInternet for dedicated servers
- Hetzner, previously
- Vultr
- DigitalOcean
Other Services:
- BunnyCDN/Bunny.net
I also have a ZimaBlade sitting by my router hosting my Home Assistant, Jellyfin, Uptime Kuma, and a few miscellaneous services. At some point I really should consolidate, but I ended up spreading out to different services that best fit my use case at that particular time. Hetzner I liked, but they didn’t have any US options when I was using them, and the latency for game servers was untenable in the US.
Going to try my hand at Unity again with the game dev courses I’ve picked up from years of hoarding Humble Bundles, and quite possibly Starfield.