Sorry if I sound like a broken record, but this seems like yet another place for Nix to shine:
Configuration for most things is either declarative (when using NixOS) or in the expected /etc file.
It uses the host filesystem and networking, with no extra layers involved.
Root is not the default user for services.
Since all Nix software is built to be installed on hosts with lots of other software, it would be very weird to ever find a package which acts like it’s the only thing on the machine.
The number of Nix advocates on this site is insane. You got me looking into it through sheer peer pressure. I still don’t like that it has its own programming language; it still feels like it could have been a Python library written in a functional style instead. But it’s pretty cool to be able to work with truly hermetic environments without having to go through containers.
I’m not a nix advocate. In fact, I’ve never used it.
However – every capable configuration automation system either has its own programming language, adapts someone else’s programming language, or pretends not to use a programming language for configuration but in fact implements a declarative language via YAML or JSON or something.
The ones that don’t aren’t so much config automation systems as parallel ssh agents, mostly.
Yep. Before Nix I used Puppet (and before that, Bash) to configure all my machines. It was such a bloody chore. Replacing Puppet with Nix was a massive improvement:
No need to keep track of a bunch of third party modules to do common stuff, like installing JetBrains IDEA or setting up a firewall.
Nix configures “everything”, including hardware, which I never even considered with Puppet.
A lot of complex things in Puppet, like enabling LXD or fail2ban, were simply a […].enable = true; in NixOS.
IIRC the Puppet language (or at least how you were meant to write it) changed with every major release, of which there were several during the time I used it.
As I’ll fight not to use SSPL / BUSL software if I have the choice, I’ll make sure to avoid GNU projects if I can. Many systems do need a smidge of non-free to be fully usable, and I prefer NixOS’ pragmatic stance (disabled by default, allowed via a documented config parameter) to Guix’s “we don’t talk about nonguix” illusion of purity. There’s interesting stuff in Guix, but the affiliation with the FSF is a no-go for me, so I’ll keep using Nix.
Using unfree software in Guix is as simple as adding a channel containing the unfree software you want. It’s actually simpler than NixOS because there’s no environment variable or unfree configuration setting - you just use channels as normal.
Please do NOT promote this repository on any official Guix communication channels, such as their mailing lists or IRC channel, even in response to support requests! This is to show respect for the Guix project’s strict policy against recommending nonfree software, and to avoid any unnecessary hostility.
That’s exactly the illusion of purity I mentioned in my comment. The “and to avoid any unnecessary hostility” part is pretty telling on how some FSF zealots act against people who are not pure enough. I’m staying as far away as possible from these folks, and that means staying away from Guix.
The FSF’s first stated user freedom is “The freedom to run the program as you wish, for any purpose”. To me, that means prioritizing Open-Source software as much as possible, but pragmatically using some non-free software when required. Looks like the FSF does not agree with me exercising that freedom.
The “avoid any unnecessary hostility” is because the repo has constantly been asked about on official Guix channels and isn’t official or officially-supported, and so isn’t involved with the Guix project. The maintainers got sick of getting non-Guix questions. And the “illusion” of purity you mention is itself the illusion: the Guix project simply isn’t involved with any unfree software.
To me, that means prioritizing Open-Source software as much as possible, but pragmatically using some non-free software when required.
This is both a fundamental misunderstanding of what the four freedoms are (they apply to some piece of software), and a somewhat bizarre, yet unique (and wrong) perspective on the goals of the FSF.
Looks like the FSF does not agree with me exercising that freedom.
Neither the FSF or Guix are preventing you from exercising your right to run the software as you like, for any purpose, even if that purpose is running unfree software packages - they simply won’t support you with that.
Neither the FSF or Guix are preventing you from exercising your right to run the software as you like, for any purpose, even if that purpose is running unfree software packages - they simply won’t support you with that.
Thanks for clarifying what I already knew, but you were conveniently omitting in your initial comment:
Using unfree software in Guix is as simple as adding a channel containing the unfree software you want. It’s actually simpler than NixOS because there’s no environment variable or unfree configuration setting - you just use channels as normal.
Using unfree software in NixOS is simpler than in Guix, because you get official documentation, and are able to discuss it in the project’s official communication channels. The NixOS configuration option is even displayed by the nix command when you try to install such a package. You don’t have to fish for an officially-unofficial-but-everyone-uses-it alternative channel.
I sort of came to the same conclusion while evaluating which of these to go with.
I think I (and a lot of other principled but realistic devs) really admire Guix and FSF from afar.
I also think Guix’s developer UI is far superior to the Nix CLI, and I appreciate the fact that Guile is used for everything, even configuring the boot loader (!).
Sort of how I admire vegans and other people of strict principle.
OT but related: I have a 2.4 year old and I actually can’t wait for the day when he asks me “So, we eat… dead animals that were once alive?” Honestly, if he balks from that point forward, I may join him.
OT continued: I have the opposite problem: how to tell my kids “hey, we try not to use the shhhht proprietary stuff here”.
I have no trouble explaining to them why I don’t eat meat (nothing to do with “it was alive”, it’s more to help boost the non-meat diet for environmental etc reasons. Kinda like why I separate trash.). But how to tell them “yeah you can’t have Minecraft because back in the nineties the people who taught me computer stuff (not teachers btw) also taught me never to trust M$”. So, they play Minecraft and eat meat. I … well I would love to have time to not play Minecraft :)
I was there once. For at least 5-10 years, I thought Nix was far too complicated to be acceptable to me. And then I ran into a lot of problems with code management in a short timeframe that were… completely solved/impossible-to-even-have problems in Nix. Including things that people normally resort to Docker for.
The programming language is basically an analogue of JSON with syntax sugar and pure functions (which then return values, which then become part of the “JSON”).
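For illustration, here’s a tiny made-up expression of mine (not from nixpkgs) showing that “attribute sets plus pure functions” flavour; mkUser is a hypothetical helper:

let
  # a pure function from a name to an attribute set
  mkUser = name: {
    home = "/home/${name}";
    shell = "/bin/sh";
  };
in {
  # the "JSON" part: nested attribute sets and lists
  users = [ (mkUser "alice") (mkUser "bob") ];
  greeting = "hello " + "world";
}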
This is probably the best tour of the language I’ve seen available. It’s an interactive teaching tool for Nix. It actually runs a Nix interpreter in your browser that’s been compiled via Emscripten: https://nixcloud.io/tour/
I kind of agree with you that any functional language might have been a more usable replacement (see: Guix, which uses Guile, a Lisp-like language), but Python wouldn’t have worked as it’s not purely functional. (And might be missing other language features that the Nix ecosystem/API expects, such as lazy evaluation.) I would love to configure it with Elixir, but Nix is actually 20 years old at this point (!) and predates a lot of the more recent functional languages.
As a guy “on the other side of the fence” now, I can definitely say that the benefits outweigh the disadvantages, especially once you figure out how to mount the learning curve.
The language takes some getting used to, that’s true. OTOH it’s lazy, which is amazing when you’re trying to do things like inspect metadata across the entire 80,000+ packages in nixpkgs. And it’s incredibly easy to compose, again, once you get used to it. Basically, it’s one of the hardest languages I have learned to write, but I find it’s super easy to read. That was a nice surprise.
Well, most of the popular posts mainly complain about the problems that Nix strives to solve. Nix is not a perfect solution, but any other alternative is IMO worse. The reason for Nix’s success, however, is not in Nix alone, but in the huge repo that is nixpkgs, where thousands of contributors pool their knowledge.
Came here to say exactly that. And I’d add that Nix also makes it really hard (if not outright impossible) for shitty packages to trample all over the file system and make a total mess of things.
I absolutely agree that Nix is ideal in theory, but in practice Nix has been so very burdensome that I can’t in good faith recommend it to anyone until it makes dramatic usability improvements, especially around packaging software. I’m not anti-Nix; I really want to replace Docker and my other build tooling with it, but the problems Docker presents are a lot more manageable for me than those that Nix presents.
although I have the curse of Nix now. It’s a much better curse though, because it’s deterministic and based purely on my understanding or lack thereof >..<
How is it better to run a service as a normal user outside a container than as root inside one? Root inside a container = insecure if there is a bug in Docker. Normal user outside a container typically means totally unconfined.
No, root inside a container means it’s insecure if there’s a bug in Docker or the contents of the container. It’s not like breaking out of a VM, processes can interact with for example volumes at a root level. And normal user outside a container is really quite restricted, especially if it’s only interacting with the rest of the system as a service-specific user.
Is that really true with Docker on Linux? I thought it used UID namespaces and mapped the in-container root user to an unprivileged user. Containerd and Podman on FreeBSD use jails, which were explicitly designed to contain root users (the fact that root can escape from chroot was the starting point in designing jails). The kernel knows the difference between root and root in a jail. Volume mounts allow root in the jail to write files with any UID but root can’t, for example, write files on a volume that’s mounted read only (it’s a nullfs mount from outside the jail and so root in the container can’t modify the mount).
Broadly, no. There’s a mixture of outdated info and oversimplification going on in this thread. I tried figuring out where to try and course-correct but probably we need to be talking around a concept better defined than “insecure”
Sure, it can’t write to a read-only volume. But since read/write is the default, and since we’re anyway talking about lazy Docker packaging, would you expect the people packaging to not expect the volumes to be writeable?
I don’t see how. With Docker it’s really difficult to do things properly. alock presumably has an extremely simple API. It’s more like saying OAuth2 is insecure because its API is gnarly AF.
Docker solves two problems: wrangling the mess of dependencies that is modern software and providing security isolation.
Nix only does the former, but using it doesn’t mean you don’t use something else to solve the latter. For example, you can run your code in VMs or you can even use Nix to build container images. I think it’s quite a lot better at that than Dockerfile in fact.
How is a normal user completely unconfined? Linux is a multi-user system. Sure, there are footguns like command lines being visible to all users, sometimes open default filesystem permissions or ability to use /tmp insecurely. But users have existed as an isolation mechanism since early UNIX. Service managers such as systemd also make it fairly easy to prevent these footguns and apply security hardening with a common template.
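For the record, a sketch of the kind of per-service hardening template meant here - the drop-in path and service name are made up, but the directives are standard systemd ones:

# e.g. /etc/systemd/system/myservice.service.d/harden.conf (hypothetical path)
[Service]
# run as a throwaway unprivileged user
DynamicUser=yes
# block privilege escalation (setuid and friends)
NoNewPrivileges=yes
# give the service its own private /tmp
PrivateTmp=yes
# mount the file system read-only for the service (except /dev, /proc, /sys)
ProtectSystem=strict
# hide all home directories from the service
ProtectHome=yes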
In practice neither regular users nor containers (Linux namespaces) are a strong isolation mechanism. With user namespaces there have been numerous bugs where some part of the kernel forgets to do a user mapping and thinks that root in a container is root on the host. IMHO both regular users and Linux namespaces are far too complex to rely on for strong security. But both provide theoretical security boundaries and are typically good enough for semi-trusted isolation (for example different applications owned by the same first party, not applications run by untrusted third parties).
I like a lot of this advice, parts like “always return objects” and “strings for all identifiers” ring with experience. I’m puzzled that the only justification for plural names is convention when it’s otherwise not at all shy of overriding other conventions like 404s and .json URLs. It’s especially odd because my (unresearched) understanding is that all three have a common source in Rails API defaults.
The difficulty with 404 is that it expresses that an HTTP-level resource is not found, and that concept often doesn’t map precisely to application-level resources.
As a concrete example, GET /userz/123 should (correctly!) 404 because the route doesn’t exist, I typoed users. But if I do a GET /users/999 where 999 isn’t a valid user, and your API returns 404 there as well, how do I know that this means there’s no user 999, instead of that I maybe requested a bogus path?
Yeah, you should, because if we require that there be enough fine-grained status codes to resolve all this stuff we’re going to need way more status codes and they’re going to stop being useful. For example, suppose I hit the URL /users/123/posts/456 and get a “Route exists but referenced resource does not” response; how do I tell from that status code whether it was the user or the post that was missing? I guess now we need status codes for
Route does not exist
Route exists but first referenced resource does not
Route exists but second referenced resource does not
Remember: REST is not about encoding absolutely every single piece of information into the verb and status code, it’s about leveraging the capabilities of HTTP as a protocol and hypertext as a medium.
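To make that concrete, one common pattern (my sketch, not something prescribed in this thread) is to keep the status code coarse and put the application-level detail in the response body - e.g. in a hypothetical Flask handler:

from flask import Flask, jsonify

app = Flask(__name__)
users = {"123": {"name": "Alice"}}

@app.route("/users/<user_id>")
def get_user(user_id):
    user = users.get(user_id)
    if user is None:
        # still a plain 404, but the body says which resource was missing
        return jsonify({"error": "user_not_found", "user_id": user_id}), 404
    return jsonify(user)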
There’s yet another interpretation too, that’s particularly valid in the wild! I may send you a 404 because the path is valid, the user is valid, but you are not authorized to know this.
Purists will screech in such cases about using 403, but that opens the door to enumeration attacks and whatnot, so the pragmatic thing to do is just 404.
The last time this came up, I said “status codes are for the clients you don’t control.” Using a 404 makes sense if you want to tell a spider to bug off, but if your API is behind auth anyway, it doesn’t really matter what you use.
one of the greatest risks is not that chatbots will become super-intelligent, but that … systems operating without evidence or logic could become our overlords by becoming superhumanly persuasive, imitating and supplanting the worst kinds of political leader.
Well before that happens, such systems will support and empower the worst kinds of political leader.
I’m expecting an onslaught of fully-automated, GPT-powered, X-amplified bullshit during the 2024 US elections that will make us nostalgic for the onslaught of merely human-generated bullshit in 2020.
systems operating without evidence or logic could become our overlords
Let’s be honest, we already have that. It’s called governments and large corporations. Through emergent complexity, these entities are as inscrutable as neural nets. These systems have developed a life of their own and are governed by rules that no human can understand or control, almost like an alien species that settled down on this planet. Thus, the fear is largely unfounded as we already find ourselves in this scenario and have been living like this for a long time.
I’m not so worried about that. Those political leaders can already employ teams of propaganda specialists to craft their messages. Things like ChatGPT are also going to empower the most toxic kind of social media ‘influencer’ by giving them tools that are almost as good.
The problem here is again that of scale. If you can outbullshit the other guy, people will never have a chance of seeing his truth, just more bullshit from everywhere.
I agree that the problem is scale, I just don’t see it being led by politicians. Established politicians already employ teams of psychologists to identify bullshit that will resonate with their target audience. The amount of bullshit they can generate is a function of money and they have a lot of money. At the moment, there are few politicians as a proportion of the general population. Things like ChatGPT make this kind of thing accessible to the general public. Any random conspiracy theorist can easily flood the world with bullshit that supports their world view.
TL;DR: the guy runs his dev env at home, then remotes into it with VSCode.
My question is about this:
I’m sure that the latency from, say, Australia will not be great, but editing in VS Code means you’re far less latency-sensitive than using something like VIM over plain SSH - all the text editing is still happening locally, and just the file saving, formatting, and terminal interaction is forwarded to the remote server.
Is there a neovim plugin of some sort that could do this? Replicate the stuff locally and then do synchronisation under the hood?
Yes, I’m aware, but I thought that it reads/writes directly over the network. What the author in the post is saying is that VSCode will make a local copy of the file; then all your operations are fast, and it will silently take care of the synchronization for you. So if you were to :w the file, it might have some noticeable latency, while in VSCode you would not see the latency - you just save the local file and go on working, while Code does the sync.
I don’t think that this is what the author is saying. They seem to be saying that with vim over ssh, your keystrokes are sent over the network, so every letter you type gets round trip latency; when you edit with vscode’s remote support, the keystrokes stay local, and only saving and formatting goes remote.
It should be clarified that VSCode doesn’t do “file synchronization”. It does much more than that: all of the language support (indexing, completion, etc.) and many of the extensions you install run remotely too. I’m saying this because I often see it compared to Emacs’ tramp, and I do not think tramp does any of this… or at least I haven’t gotten it to…
I’m saying this because I often see it compared to Emacs’ tramp, and I do not think tramp does any of this… or at least I haven’t gotten it to…
tramp does execute most, if not all commands, remotely for remote buffers so things like grepping or LSP tend to work correctly via tramp if the tools are installed on the remote machine.
Does it? My most recent experience seemed to imply that things like the spell checker were running on my client machine, not the remote… And I’m not sure I ever saw it running rust-analyzer on the remote machine in the past. Is there any magic to configure?
This has some downsides too. It means that your remote machine has to be capable of running all of the developer tools. This is great for the Azure use case: your remote machine is a big cloud server, your local machine is a cheap laptop. It’s far more annoying for the embedded case: your local machine is a dev workstation, your remote machine is tiny device with limited storage, or possibly a smallish server with a load of boards connected to it.
Agreed, I was trying to use it to connect to a production server, not for main development but for quick tweaks. It installed so much stuff on the remote server that it slowed it way down. Scared me, didn’t try again.
Does it need to be a text editor feature? I haven’t used it for codebases of any great size but SyncThing is up to the job as far as I know; someone gave a lightning talk at PGCon 2019 about their workflow keeping an entire homedir mapped between two computers.
I use the VSCode remote stuff all day every day, but previously I used vim over ssh all day every day, so whatever.
Also, I sometimes use vi between the US and Australia and it’s really not that bad. I’d rather use something like vim that’s just a fancy front-end to ed/ex. Trans-pacific latency’s got nothing on a teletype…
Yes, I know and I do that occasionally. But I don’t think it would work if I had to do it non-stop, as my primary activity. The latency is barely noticeable but it’s there. I remember that from my operations days.
When Vercel released Next.js 14 recently, some friends I’ve talked to were still on Next.js 12 and really felt the pressure to upgrade to not fall behind even more.
/me looks at the code powering my main blog since 2004… still the same Blosxom script…
I agree, this is a huge sticking point in the whole story. I think this is kind of a forced thing. For static data, you only need an offline way to regenerate the site. Even if you had a script from 2004 - if it still works, it can regenerate stuff when you add a new markdown file.
The problem here is, will Next.js 12 work in 19 years, like your Blosxom script? Maybe, maybe not. So you go and upgrade, just to be sure.
I did and now I do have a website but there’s no RSS feed and the entries are sorted randomly and not by date. At least I can deploy via git push, but I’m actually kind of missing a WYSIWYG text editor for quick notes. And to create a new page I need to mkdir article_name and add it to a list in a Python script, kind of sucks really.
I agree, the biggest barrier for me when it comes to writing blog posts isn’t the site generation or deployment or formatting, it’s actually typing the words out. And Hugo/Jekyll/… don’t help with that problem.
I use most things like these as aliases. Usually you’d see no difference.
But if it’s a function, and you call it from somewhere else like GNU screen or some other scripts, it exits the shell after it’s done. Which isn’t always what you want.
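A minimal sketch of the pitfall I mean (my own example, not from the article): an exit inside a function terminates whichever shell or script called it, not just the function:

greet() {
  echo "hello"
  exit 0   # exits the calling shell/script, not just the function
}

greet
echo "never reached"   # with `return` instead of `exit`, this line would run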
Could those critics have been nicer? Sure, that would have bruised my ego less. But disabled people and accessibility advocates don’t owe it to me or to anyone to be nice.
I think it’s important to distinguish between “not nice” and “cruel”. When I was on Twitter, I’d regularly see this:
A person does an accessibility fail, like leaving out alt-text from the images.
Someone calls them out for the fail.
Dozens of other people pile on the callout, send angry DMs, harass them in unrelated tweets, etc.
I’ve lived parts of my life with people who used topics like accessibility as an opportunity to be cruel and vindictive. So while it’s important to not have an ego about it, and accept that people are frustrated for legitimate reasons, not everybody shaming you wants you to Do Better. Some just want to be mean.
not everybody shaming you wants you to Do Better. Some just want to be mean.
oh this is quite generous. in the general case (i can’t speak to accessibility shaming in particular but i doubt it’s an exception), i’d argue the overwhelming majority of morality police of any stripe, especially on twitter, simply enjoy seeing the faces of the pilloried covered in rotten tomato. a sadism ticket with a free righteousness stamp – intoxicating!
there are some, a small minority, who genuinely care, and because they want to be effective they are generally nice about things.
I don’t know what it is like to need assistive technologies like this. I don’t think the people who do need these owe anyone anything. And I certainly don’t think they aren’t frustrated by everything.
But I always believed that it’s better to be nice than mean. Not because other people deserve it or something, but simply because for me, being negative brings more negativity to me; basically I’m making myself even less happy than I was.
I know it’s not the same for everybody, but what I have experienced, what I have observed, and what I’ve learned from psychology and other areas, all that tells me it’s gonna work for the majority of people. Maybe not an overwhelming majority, but a majority.
Now, maybe being cruel here isn’t about how the cruel guy feels. Maybe it’s just an act, to get someone to change something. Maybe something else entirely.
But I definitely think that being nice is better than being cruel most of the time. Especially when you want other people to change something that concerns you.
It’s hard for me not to be cynical about this—yet another bespoke browser written in Javascript, because, you know, the @#$@#$@ browser you’re using just isn’t good enough (I’m looking at you, Portland Pattern Repository Wiki). I just know people will fork this to ensure you will get the “enhanced experience” with their website (or more like, “no experience except with their special Javascript browser”).
Oh, but if you’d read the article, you can see that the main purpose of this isn’t to be the next big thing, the author just wanted to have fun with something of a silly scope that they tackled anyway.
You’ll have to be more specific. What part of C2 is “powered by” a browser engine..?
To be clear, a “browser engine” is basically the whole web browser except for the UI. It’s Chromium’s Blink, Firefox’s Gecko, Safari’s WebKit. Its job is to parse HTML and CSS and JavaScript and render the page based on the HTML+CSS and expose the document to the JavaScript through the DOM. I don’t understand how such a thing could possibly “power” a wiki.
<div id=tab>
<img src="spin.gif">
<p>This site uses features not available in older browsers.</p>
</div>
and a bunch of Javascript. This Javascript will then query the server for the page you requested, which is returned as JSON. It then parses the JSON to pull out the content, which is not in HTML but possibly Markdown? Its own markup language? Anyway, it then translates this text to HTML and uses that to populate the DOM, replacing the above text. At least, if the browser you are using has the latest and greatest Javascript. If not (or you have Javascript turned off), all you get is the above.
Okay, so maybe it’s a stretch to say that it’s rendering the HTML directly onto a browser canvas, and C2 isn’t exactly an application like GMail. But it’s a text-only site that worked for decades without Javascript. The current version (which hasn’t been updated since 2015) now displays all the C2 links you click on an ever-expanding canvas that requires not only vertical scrolling, but horizontal too. I personally don’t think it’s better, and find it visually cluttered; it probably plays hell with accessibility too, hence I consider it a “bespoke web browser”. If C2 wasn’t such a good source of information, it wouldn’t be worthy of thought, but alas …
What you’re describing is simply a bunch of client-side JavaScript generating HTML or DOM nodes. That’s a standard single-page application style site. That’s not even remotely close to what a “browser engine” is. The only browser engine involved when browsing C2 (or using GMail or Google Docs for that matter) is still just the one in Firefox or whatever browser you’re using.
You can be annoyed at websites which rely on client side javascript, that’s fine, but don’t let that anger bleed over to something that’s just a completely unrelated cool project to build a new browser engine.
I had a C128 which is not as much a cult icon as the C64. I completely get the nostalgia hit of pounding away on the first computer you learned programming on. I have some nostalgia for the VIC-20 which I never owned but was my first exposure to computing.
I also get a nostalgia hit from seeing the iconic screen displays of past computers I worked on, even more recent ones like Sun SPARCs.
What I personally wouldn’t do is buy at great expense a board that half emulates a C64 or Spectrum 48K or something and looks like some random box or circuit board.
If I had the space and money for it, I would buy an actual C64 off ebay.
What I personally wouldn’t do is buy at great expense a board that half emulates a C64 or Spectrum 48K or something and looks like some random box or circuit board.
Yeah, I’m glad they’re doing what they want… but this doesn’t seem like my dream either. Maybe I just am missing out not having been a C64 user (though I think I’d feel the same for a similar Atari or Apple take) but this project has always seemed to me to go down a kind of weird rabbit hole of retro fetishism in some ways yet stayed boringly bland modern tech in others.
Regardless of how sympathetic I’d like to be towards the project, I find myself very much agreeing with the Byte Attic’s older post where he elaborates on serious shortcomings of the X16 and how it just misses the point. 8BG does an amazing job when it comes to computing documentaries and I find his content entertaining, but with regard to this specific project he seems to have fallen victim to a sunk cost fallacy.
My impression is that it kind of started from an impossible vision and then just spiraled from there through “compromise” decisions, without much consideration if the mix made much sense, especially since a lot of it is pushing up the price - which means the audience is additionally limited through that.
Yeah, that seems likely. I haven’t followed the project much, but I thought early on the whole reason behind starting a new design was to go for low price. Didn’t 8BG talk about working with the Foenix people for a while previously, but leaving for his own thing when it became clear what their price point was going to look like? (I swear I remember seeing this, but can’t find it in his old videos now.) That seems to not have panned out at all given that the all-in price for this once you add the power supply and case and keyboard to get a completish system is about the same as the similar base level of their machines.
Yes. When I want my hit of looking at a display from ye olde times, I fire up Vice, DosBox or some such and play a game. If I had the space and budget I might buy an actual ZX Spectrum, C64, C128, BBC Micro, but as I get older I just hate the junk I’ve accumulated, so I probably won’t.
I own 2 ZX Spectrums. One is my original unit from 1986, bought ex-demo so a poor student could afford it. The second a gift about 15Y ago.
I also own an Atari ST, Amiga, Archimedes, Sinclair QL, Amstrad PCW, a VAXstation, Macs with 68000, 68030, G3, G4 and G5 CPUs, and more. All were either free or very cheap, which is why I get these things: they were exotic expensive things I couldn’t afford when young, but a decade or two later were cheap or free. I am not a “collector” and I don’t buy and sell. I try to save what is worth saving.
I don’t and never did collect 1980s 8-bits: there were just too many of them, and most were frankly rubbish. A friend of mine did, amassed a more or less complete collection, and 20Y later realised he never ever played with them, sold the lot, and bought a used, first-model-year, Tesla Model S in cash.
Note for Americans: I’m European. We had lots of computers you’ve never heard of. Yours were very expensive here and thus poor value for money, so we didn’t buy many of them… and so were and are your cars and motorcycles. All of them are expensive, brash, loud, toys for the rich here.
(Comparably, I have 5 IBM Model M keyboards, but all were free: I saved them from being recycled or thrown away.)
Emulators have no feel. You don’t get to know what it was like to use a Spectrum by running a program on a different computer with a different keyboard, any more than you find out what it’s like to drive a Lamborghini by having one as your desktop wallpaper, or get the feeling of walking through a mountain rainforest by browsing a photo gallery, or of riding a racehorse by watching a lot of Tiktok videos.
These things are all still out there. They’re all still cheap if you look hard. Very few were rare or exclusive in their time, which is why I do not own a NeXTstation or a Lisa.
This is not an expensive hobby unless you’re a fool who actively looks for ways to get ripped off by strangers on the internet. Unfortunately, those are the loud folks on social media.
Obviously I can’t answer for the OP, but I can explain some of the nostalgia for the C64.
It was one of the first mainstream affordable home computers that wasn’t horribly compromised. For its time (1982) its spec of 64kB RAM was very generous, for its generation it had excellent sound and very good graphics. It was an unashamed games computer, with hardware sprites and colourful graphics, built-in joystick ports, a good keyboard, and a 40-column screen display. This was considerably better than its predecessor, the VIC-20, which had just 5kB of RAM and a 22 column display, and it was much cheaper than its late-1970s forerunner machines such as Commodore’s PET, the Apple II and the various models of TRS-80.
The C64 also supported floppy disk drives as standard.
What gets less coverage is that it was still a very compromised machine. It was expensive – $595 at launch – had an absolutely terrible BASIC inherited directly from the PET, which destroyed the language’s reputation, as I have blogged about to widespread criticism. The disk drives were expensive, tiny in capacity, and horribly horribly slow.
But it dominated the American market and I’ve found that many American home-computer owners are unaware that in other countries there were less-compromised machines for better prices, such as the Sinclair ZX Spectrum, which also sold in the millions of units, even excluding nearly 100 compatible clones, and the BBC Micro, also 6502-based, which also sold in huge numbers and whose successor model was the first affordable RISC-based computer, and spawned a line of CPUs that is the best-selling in history, outselling all x86 chips put together by 100x over, and whose original 1980s OS is still maintained and is now FOSS.
I’ve also seen the C64’s very weird 1985 successor machine, the C128 (twin processors, with a Z80 and – very slow – CP/M compatibility, thus totally failing to address the C64’s core market of video-gaming children) called “the last new 8-bit computer”, although in other countries there were new 8-bit machines being launched well into the 1990s.
Rival machines like the ZX Spectrum were usually cheaper, had better BASICs (not hard), more and faster storage, but worse keyboards and worse graphics and sound, to keep costs down.
All the same, the C64 was the first computer of many millions of people – the first model sold something like 6 million units – and so a lot of people are very nostalgic for it.
For those interested in the history, “Commodore: A Company on the Edge” is pretty good. Not the best writing, but fascinating factually. (I was a ZX Spectrum + Apple II kid, fwiw.)
IIRC, the reason for the old version of BASIC was that they had managed to negotiate a fixed-price rather than per-install license for the earlier version and didn’t want to pay more.
There are a couple of other reasons for the popularity of the C64:
Over time, Commodore worked hard to constantly drive the prices down, and they eventually got extremely low.
The sound chip, developed by Bob Yannes, was astonishingly capable for the time.
Unlike the Apple II, the C64 video graphics memory wasn’t totally bonkers to save a couple of chips, and it also had hardware sprites. That made it much better for games, and much easier to program graphics. I believe it also had scanline-based interrupts allowing all kinds of beam-chasing shenanigans.
The C64 disk drive was comically slow compared to the Disk II from Apple — some people consider the Disk II design the best exemplar of Woz’s genius — but the C64 drive did have some interesting characteristics: e.g. a bunch of C64s in a school computer lab could share a couple of drives.
Hey, don’t diss my speccy, that rubber keyboard with 4, 5 symbols per key was unparalleled at the time! :)
On the serious note, to add to the answer you provided. One of the big things about the C64, the speccy and all these microcomputers was that they were approachable and learnable. I doubt anyone knew all the poke addresses by heart, but the concept was clear and you could program whatever you wanted with just maybe the reference manual, even as a kid. Try giving the reference manual for any modern computer to anyone today.
I know it’s a different thing, but for that era, when we played Space Invaders or whatever was popular at the arcades back then, it was magical to be able to do it yourself.
Right, I don’t know if I misspelled that or got corrected.
I had one too. I stuck it in a LMT 68 FX2 – in fact, specifically, this is my keyboard I think. I was not a fan of the rubber keys. ;-)
At the time I didn’t know better keyboards and this one seemed awesome, compared to the Commodores that my friends had. The thing that was important to me was that you had practically the entire BASIC right there in the open.
I do sometimes wonder if it’d be possible to construct a semi-modern computer of comparable complexity.
I think I saw some Kickstarter and similar efforts. But not just that, people are still looking for and finding working examples.
Could we do a C21 version? Maybe based around RISC-V, and Oberon as the OS/language?
Like I mentioned above, the big thing for me was that all of basic was right there. If I didn’t know what the words meant, I could randomly play with it and stumble upon something interesting. Sometimes I even knew what it did :)
I thought about getting something similar for my kids. Just give it to them and see what they come up with. There are programming languages and environments better suited for learning, but it’s not the same as knowing the poke addresses of the entire computer.
Since in hindsight I chose a bit of an inexpressive title, tl;dr: breakelse is a new keyword that jumps directly to the else block of an if. With a bit of syntax sugar, this gives an alternative to conditional-access operators.
You could have said so right away, instead of making me give up after a bunch of misdirections and tangents. I even scrolled down looking for what it actually is, saw the example, but wasn’t sure what exactly it did since I didn’t read all the text preceding it.
I mean, I liked that you go in-depth with the reasoning, but it felt like the tangents on furries and the step-by-step dives deeper into the problem before you fully explain it are just there to keep me on the page longer.
But it’s maybe just my preferences.
Back on topic though.
Anyway it’s neat that you do Neat. The breakelse keyword itself is also… neat, but I am not sure I like the semantics of it. Again it’s probably just my preference for simple things without a lot of magic and suspense, but going deeper into an if branch and then still possibly breaking off in a different direction… It just looks like an overcomplicated algorithm to me. As @cpurdy says, I’d rather just use goto.
And another thing, which is probably not relevant, is the compiler backend. If it can’t trust the if to discard a whole false branch, it’s going to be tricky to optimise this code. It looks like it would be trivially easy to put a tricky bit like this on a hotpath.
I do very little lower-level programming these days and I’m probably out of the loop, but this seems like an interesting thought experiment; I don’t see it going much beyond that.
Thanks for your feedback, I added a tl;dr section!
Yeah, ? is a very “high-level” tool: as a language feature, it’s basically “get rid of anything from that sumtype on the left that isn’t the actual value”. It gets rid of error types by returning them, too. But at least it’s explicit about it - every time a ? appears in the code you can mentally draw flow arrows going to the closing } of the if block and the function. It’s the “clean this up for me, do whatever you think is right” operator.
Performance-wise, note that conditional access has exactly the same problem but worse. With ?, you branch once - to the end of the if block. With ?. for conditional access, you have to keep branching every time you perform another chained operation on the value you’re accessing.
I’m not sure what you mean with the “false branch” bit? The tag field of the sum type is an ordinary value and subject to constant propagation like anything else. The .case(:else: breakelse) branch will be removed if the compiler can prove that the sumtype will never be in :else.
I agree with this list of reasons, though my take is that the core reason Git is hard is because it solves a different problem than people think it does, and all of these are simply effects of that. And the problem it does solve is gnarly, one which it doesn’t look to simplify or abstract away but simply give the tools to properly deal with. I recommend everyone who uses Git to read the Git book which is very approachable and well made. Once you understand the why and how, Git actually becomes straightforward to work with.
The core problem Git gives the tools to solve is an offline-first, distributed consensus with divergence.
Offline-first because otherwise it would really limit its effectiveness (and hence why the remotes are fuzzy/time-lag in the article)
Distributed consensus to allow multiple developers to work on the same project, in all different collaboration patterns, and all end up with the same history at the end (hence why a commit is its “entire workflow” [minor nit: this is a weird way to say a commit is its entire history]).
with divergence because Git allows you to indefinitely fork a project with absolutely no overhead or logistical work required.
The reality is that Git is solving for a problem that most people don’t have, and especially newer developers aren’t familiar with. Git was designed to solve the problems that face large Open Source projects with many independent developers with little coordination and all styles of workflow. And at that, Git is incredibly effective. But if you are working on a small team on an isolated part of the project, you need none of those tools and something far simpler would suffice.
Once you appreciate the problem it’s trying to solve, and realize that the Git CLI is really just a set of tools to modify a merkle-tree, it becomes far less alien. Which is not to say that Git isn’t hard to learn – it is – but not because it has a lot of quirks or is really weird; it’s just solving a more complicated problem and requires up-front effort to properly understand what it’s doing and why.
Now, whether it’s good that one of the first tools many people have to learn to even get started with programming involves advanced topics that are this overwhelming is another question. I’m curious if there is another set of trade-offs which are better for this. Mercurial is supposedly a lot easier to learn, and I wonder if something like Pijul will also turn out to be easier to understand.
I don’t think these constraints explain git’s difficulty. For example, none of these really require supporting repos staying in detached HEAD state, or using jargon like “detached HEAD”.
Fair enough, I’m not here to defend Git’s… interesting CLI UX choices. Though I would argue that the whole detached HEAD state is to facilitate easy forking (sometimes the commit you want to fork on is not a named branch).
Git is a surprisingly thin layer on top of the merkle tree (barring compression/optimizations) and a lot of CLI weirdness is pretending that it’s not and that it’s a VCS.
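You can see that thinness for yourself with a couple of plumbing commands (real commands; output omitted here):

git rev-parse HEAD             # the commit's own hash, i.e. the merkle root of that snapshot
git cat-file -p HEAD           # the raw commit object: tree hash, parent hash(es), author, message
git cat-file -p 'HEAD^{tree}'  # the raw tree object: mode, type, hash and name of each entry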
I think you’re spot on in that it solves a problem few people or orgs have.
I have used Subversion both professionally and privately for more than a decade. Now everyone uses git so I have to use that professionally, but privately I keep using svn. It solves the problems I actually have. Its metaphors and concepts map nicely to my actual work.
It is weird because much of my professional work would fit just fine in svn as well. Much of that work is strictly hierarchically organized. There is no decentralized development like there is in open source projects.
I know that git is cool. But if I compare the hours spent in frustration over not knowing how to do X in git vs svn the amount of hours wasted is ridiculous.
I know people will likely laugh at me for using subversion…
I used SVN at work. I hated it. The only feature I liked about SVN was the ability to check out a subdirectory, and I don’t even miss it that much with git for my own stuff. Everything else about SVN was a pain. Working from home and the VPN goes down? No more check-ins for a while. Made a bit too many changes? No interactive staging. And too many times I committed a file by mistake into SVN. Branching under SVN was too much overhead, and forget easy merges.
With that said, SVN did fit work much better—centralized, micromanaged, hierarchically organized (in fact, those are the very reasons SVN was originally picked; at that time, using git “should be a fireable offense”, per the person responsible for choosing SVN). Switching to git was painful (was, because I left before it was finished), with management pushing the use of git submodules (which don’t work at all in my opinion).
I used CVS back in the day, and it made sense. I use Git today, and it also (mostly) makes sense. SVN, though, threw me for a loop, and I never got a handle into just how it works, and how it represents history and branches. Even using a GUI (SmartSVN) didn’t help.
svn branches were very easy to use until you started merging things. Nothing was kept track of, so you’d merge a branch, add to it, and then have to go check your notes to see what you had already merged. Total nightmare.
I started with svn early on, and I didn’t think much about it. Then I used git for some reason (needed it for work or some side project, I don’t remember any more). I don’t remember the reasons, but I remember that it felt so much better than svn. I think I even started using it for my svn work project for a while before I left.
One of the reasons I do remember is that it was offline-capable. Well, not offline, but local-first. I could do whatever locally, have a million little branches and experiments, and mix and match them in any way I wanted. And I couldn’t do most of these things with svn (or didn’t know how).
But I also think svn had some strong sides, and I don’t think people using it are laughable.
Agreed. I used SVN at work around ten years ago - bliss. Just worked. Always. Fitted so seamlessly into the workflow I hardly noticed it. Git is an obstacle. An endless series of obstacles.
And at that, Git is incredibly effective. But if you are working on a small team on an isolated part of the project, you need none of those tools and something far simpler would suffice.
My axe to grind on this is that there was this course for teaching programming to 8-year-olds, and to keep it simple they taught with Python, and Git.
Git is pretty easy to use if you are the only person working on a repo (as I imagine most people learning to program are). It’s only when you have to deal with branching and merging that it becomes painful.
I think your list misses an item, which should probably be the first. Scale. Like, Git just skips basic table-stakes critical functionality in its sync because at the scale of the Linux kernel nobody would be able to afford to do it this way. Like, really, you cannot opt to sync the reflog? Its crash-consistency is written to be correct on a specific filesystem, because performance is that critical at the desired scale. This filesystem effectively no longer exists (ext3 slightly changed the behaviours in some edge cases), so Git is not hard-powerdown-safe anywhere anymore, by the way.
I know Git’s model, I know Monotone’s model (which has more moving parts overall but is outright better due to layering, unless you are optimising performance at the expense of both safety and clarity). No, knowing the model does not make Git easy, because I still need to work around its limitations and script around its annoying assumptions. And it doesn’t help with figuring out its flags whenever I want to do anything I don’t do every day, either.
Funny enough, I feel like all of the recent issues with Git have been due to the fact that it doesn’t scale particularly well given modern workloads (hence the creation of Git-LFS which to this day I have mixed feelings about). But I agree that Git is heavily optimized for performance at a particular scale (ie. Linux kernel size). I have heard about Git’s lack of crash-consistency (I wish I could find it, but there’s an article out there which basically says only sqlite gets it right for local DBs [Git is really just a local database]).
One aspect I find particularly fun about VCS is the tradeoffs between performance & space – there is really no one-size-fits-all. Git uses snapshots as its primitive, which is space-inefficient if done naively (hence it uses packfiles, which are actually diffs under the hood, for compression), and Mercurial uses diffs as its primitive, which has poor performance (hence why it uses snapshots under the hood every few diffs for performance). Then you have Darcs and Pijul, which both use patches (which as I understand it is a formalization of diffs), where Darcs has poor performance and Pijul has fixed those issues, and both, in theory, allow for greater scale.
In any case, yeah Scale is a big topic for VCS and largely the answer seems to be it depends. Algorithmically I’ve heard it said that Git has a worse big O but a better constant, and Mercurial has a better big O but worse constant. Performance curves intersect at some point.
Git-LFS is more about Git being source control, not really general version control. A different coordinate of scale, so to say.
Local bespoke VFS-es for Git are about source control, but they arise because people want kind of Git but beyond the scale where local copies are practical. Of course, Subversion has supported narrow clones since forever but these make no sense for Linux kernel build process, so no support in Git.
Yes, using SQLite is one of the many many many reasons I still use Monotone for my personal stuff. Fossil, naturally (being written by the author of SQLite), also uses SQLite. Pijul tries to implement its own copy-on-write database — available as a generic crate, I don’t think it got much external stress-testing yet. Funnily enough, libgit2 seems to allow making a git client with SQLite (or PostgreSQL if one so desires) storage for the repository, but I don’t know if anyone tried to push that for personal use. I have personally had git repositories busted beyond repair on a hard powerdown.
A pessimistic answer to scale is that Git has network-effect lock-in over the niches it doesn’t really care about serving. And I guess it’s uncomfortable to take the defeatist-sounding path even if it’s the right direction. This prevents an actual solution from arising. Shallow checkouts, narrow checkouts, true-last-change tracking without keeping a pristine copy (for large files — still beats reliability of FS timestamp based stuff like Unison, and with checksums and a few full clones you can recover if you need to), first-class «extract this subproject from a subtree into an independently clonable sub-repo»…
I completely agree about the network-effect lock-in. For better or worse, Git is here to stay. Society is much like evolution — it doesn’t necessarily optimize for best but just good enough.
I will say though in Git’s defense, very few companies out-scale Git. I’ve worked with a 50M line codebase and Git had no trouble with it. What I think is needed is tooling to make working with multi-repos easier, because I think with better tooling they solve a lot of Git’s issues (ie. narrow clones).
Working with multi-repos as it currently stands is pretty awful unfortunately.
Python’s slow for some stuff, fast for other stuff. I noticed that processing a very large JSON file I happen to have to deal with was significantly faster with Python than Rust+serde_json - even in a release build.
Python libraries don’t have to be written in Python – they can be Python wrappers around other languages like C. IIRC the standard-library json module is a wholesale adoption of the third-party simplejson package, which provided both Python and C implementations. And there are other specialized third-party JSON encoder/decoder packages out there like orjson which are built for speed and use implementations in C.
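As a rough sketch of what swapping parsers looks like (the file name is made up; orjson is the third-party package mentioned above):

import json

import orjson  # third-party, native-code JSON parser: pip install orjson

with open("big.json", "rb") as f:
    raw = f.read()

data = json.loads(raw)    # stdlib: C-accelerated in CPython, returns dicts/lists
data = orjson.loads(raw)  # typically faster, produces the same kinds of Python objects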
I am surprised. I’d expect Python JSON parsing to be reasonably fast, but I would have expected creating all the Python dictionaries and lists to just be mallocing in a tight loop and slow. Can I ask if you were parsing into a Rust struct or having Rust create some sort of equivalent nested map/list structure?
That quote from the first paragraph was fun to read.
[language for] systems programming, to mobile and desktop apps, scaling up to cloud services
I didn’t think you’d scale up to cloud services. I mean, if you do them by the various best practices, they’re pretty simple and straightforward. At least in my experience, both using and building them. Systems programming or even desktop apps seems much harder to me and I would need to “scale up” to do it right.
The article is interesting, but omg that linked article on one week of bugs got me rolling on the floor laughing.
As I read the article it occurred to me that none of the excuses given really let stale-bots off the hook. In every case, leaving issues unread and/or unhandled is IMO less rude than having a bot close them. It’s the difference between saying “I can’t get to this right now” and saying “Your contribution has no value to us now, and never will.”
Sorry if I wasn’t clear. To clarify, leaving a PR unread says “I can’t get to this right now” but doesn’t make any judgement about the PR itself, and leaves open the possibility that the maintainer (or some future maintainer) will look at it. Having a bot close the PR says “this is worthless to us and we will never look at it.”
I assumed we’re talking about issues, not PRs, but I still disagree.
Bot closing the PR did not say “worthless”. It just said, “stale”. The PR (or an issue) going “stale” is almost literally a function of the person “not having the time to do this right now”.
In certain cases, not responding to someone’s work can be worse. “Great, I opened a PR months ago and they’re ignoring me!” vs “I opened a PR, but since nobody had time for it, it got marked as stale.”.
I think the underlying issue is often people attributing things to events that are not necessarily there. Throughout the whole comment thread here on lobsters, people are expressing differing opinions. But look at the observable facts:
A maintainer has a publicly accessible repository.
A user is using it. They found a bug and reported it.
The bug got marked as stale.
A thousand people will interpret it a thousand different ways. Should I report a bug if I find it? Am I allowed to ignore it? If I do report a bug, am I entitled to an answer? Or at least a look? Or 5 minutes of that maintainer’s time? We just assume, since the project is public, that it is also a community project, and that there is an obligation on the maintainer to maintain.
Personally I remember the “good” old times of BOFHs and every question you ask on mailing lists being answered with RTFM. I’m happy that it’s not like that any more - for most things that I need, I can probably ask a question, someone’ll answer me. But I’m still keeping that mindset of old ways of open source - if I have a problem, I am in charge of fixing it. And I’ll send the patch back to source, or perhaps a PR or a comment, but I don’t have expectations that anything will be done about it.
But I understand this isn’t how most people expect things to work.
Why didn’t they make their stance clear? Make it clear you’re not accepting contributions.
Why should they? They’re just sharing their work.
Basic etiquette.
I think, with time, people will start to recognize that stale-botting is just putting a robotic coat of paint on the same boorish disrespect for others that is on full display in places like Twitter/X and YouTube comments.
If you don’t want to follow the implicit norms of the greater community, state so up-front. You don’t need to write a fancy legal document… just have the backbone and common decency to tell people what to expect of you, rather than ghosting them.
All of the perspectives raised carry an undercurrent of “I’m so special that I don’t have to engage in common decency”. Heck, now that I think about it, I think part of the reason stale-botting irks me so much is that, on some level, it feels like the decision to use a stale bot is passive-aggressive behaviour.
I dunno, I believe users who treat open source projects like they’re a product they’re paying money for, and who demand to be treated as paying customers, are the ones who are boorish and disrespectful.
Which is exactly the passive aggressive behavior the thread starter is referring to. I would say if you don’t have the mental energy, maybe let the issues sit open until you do.
Why is it passive aggressive? They have a project, they are happy to let others use it, but they aren’t (yet) promising to cater to everyone else’s needs. Let them deal with their project how they wish, without having to apologize to everyone for everything in advance.
Because human communication has norms. If I spit in your face, it doesn’t matter if I think it’s not a grave insult. What matters is what the cultural norms we exist in consider it to be.
Yes, that’s my point exactly. Why did the commenter above think that this behavior is passive-aggressive? Because I don’t, and they apparently do.
So yes, human communication has norms, but as far as I know, one of them is “my house, my rules”. Why should users expect that some project runs issues just the way they, the users, like it? Why can’t they just abide by the rules of the house? The maintainers could very well be passive-aggressive, but maybe they’re not, maybe they just don’t have a lot of time or interest in chasing bugs.
That’s what I’m referring to - we cannot assume someone has a certain state of mind (passive aggression) based only on the fact that they employ a stalebot. We would need other source of data. Stalebot is most certainly a norm that some projects use - but not all - so why can we say it’s wrong to have them?
Yes, that’s my point exactly. Why did the commenter above think that this behavior is passive-aggressive? Because I don’t, and they apparently do.
I’m the person who said it originally and I said it feels like passive-aggressive behaviour and that’s why it bothers me so much.
So yes, human communication has norms, but as far as I know, one of them is “my house, my rules”. Why should users expect that some project runs issues just the way they, the users, like it? Why can’t they just abide by the rules of the house? The maintainers could very well be passive-aggressive, but maybe they’re not, maybe they just don’t have a lot of time or interest in chasing bugs.
Because you’re publishing your project into a public space with defined expectations. Psychologically, it’s closer to renting a space in a shopping mall than a private home, and people are upset at “how rude the staff are to window-shoppers”.
If you don’t want people wandering into your rented commercial space with expectations of your behaviour, then post proper signage. Just because the “mall” has effectively no size limits and offers space for free doesn’t change that.
Again, this is very situation-specific. I personally rented office space when I was freelancing a lot. It was a small space next to a cafe. I didn’t mind people smoking on the cafe’s terrace, but I did mind if they did on mine. (I didn’t really mind, but it works for the example.)
Now, I can expect that people will come into my commercial space because it’s next to that cafe. And I can also expect that some of them will be smoking a cigarette. Which is also fine. But despite all that, if I ask them to put the cigarette out, or if I stale-bot their stale issue, I don’t think I’m being aggressive. I just think I have my rules, and in my (rented, commercial-space) back yard, I should not have to feel pressured to explain the reasoning behind those rules.
Once more: the bottom line is that the passive-aggressive feeling is very personal. I may really like some behavior where you might not, and we’ll feel different kinds of pressure when witnessing said behavior. That’s why I think the passive-aggressive comment can’t really stand as it is overly generic.
The comment about it feeling passive-aggressive was secondary anyway. The primary point was that it was an active violation of the etiquette of the public space they set up shop in.
The funny thing about my mall example is that you’re responding to draft #2, where I now think draft #1 addressed your point better. Originally, I couched it in terms of setting up a library in a mall unit and then sending out a “shush bot” to patrol the segment of hallway in front of it rather than setting up proper sound insulation.
It’d just make people want to kick the bot and resent its owner.
Yeah, the passive-aggressive stuff is not really relevant. I still disagree with your point, though. Why is my issue tracker, for my project, public space? Why do passers-by get to dictate my rules?
I don’t think we’ll agree on this. You seem to think that issues should not be automatically cleaned up by stale-bots. I disagree - not in that that I think that they should be automatically cleaned up, but in that that the choice is on maintainers and maintainers alone, not on the users of the project.
Do you mind if I ask a question or two to try to get a sense of your position on that before we part ways?
I’d like to get a sense for what you believe is reasonable to allow the project maintainers to decide. For example, I doubt you’d argue that GitHub should be prohibited from kicking users off their platform for gross abuses (eg. using a bug tracker to coordinate treasonous or genocidal activities), so there has to be a line you draw and the question is where you draw it.
(eg. Which side of the line is it on for you if maintainers choose to set a “norm” where they verbally abuse people who report bugs?)
Oh, but those two aren’t exactly the same thing: deciding whether to use a stale bot and being abusive are pretty much different matters.
To be upfront, I believe it’s absolutely reasonable for the platforms to deal with abuse. I also think it should be required, but it’s hard to set the limits of “what is enough”. But that topic is moderation; we’re talking about stale bots here.
If you’re asking me if I would use a stale-bot, probably yes. I frequently have some inner-source or closed source projects, so it’s a bit different, but I would still use a stale-bot if I didn’t have someone who’s already triaging the backlog. That is not applicable here, though, since I don’t think I ever had a project with 300 open issues. Or if I did, they would be neatly “organized” into epics and stories and stuff, and out of my “current sprint” or whatever, and I would actually be paid and reserve the time to triage them.
For open source, I probably would like some automation if I didn’t have the manpower.
Do you think, what’s my position in general? After a small consideration, I believe on most of these things I am quite liberal.
Each person gets to decide for themselves, as long as they don’t impinge on other people’s freedoms or break public rules. It may be sucky, and we have to work on that, but as long as it’s “allowed” by the general rules, we can’t and should not force anyone to do anything a particular way.
But even more, I don’t like “having a position” at all, mostly. I like having observations on how things work. You know how it is in software: for most questions, an experienced developer will answer majority of questions with “it depends”.
It is very often a question of a trade-off, not of “should we or should we not”.
In this case (stalebot), I think it is absolutely okay for maintainers to decide to use a stale bot. 100% their choice. Even if it were something as widely used as the Linux kernel - they are the maintainers, and they have to maintain the thing, so they get to pick what they want to do about stale issues.
People picking stalebots are probably not doing much triage. I’m thinking e.g. of Angular, where I frequently found open issues on my topic that had been untouched for a long time. Sucks; I have no clue what’s up with the issue, why it’s still open, whether it’s being worked on, or what. (To be honest, I haven’t visited the Angular GitHub repo in a while now.)
People leaving open tickets probably do a bit more triage, but the already mentioned Angular example shows it’s not always the case.
As an end user in such a project, I am stuck either way. Either the issue is stale-closed, or it’s open but untouched, and I have no clue what to expect. Well, with a stale-closed issue I can probably expect nothing, with the open-but-untouched issue, I probably have nothing to look forward to as well, but there’s some false hope that it may be tackled sometime.
So, for me as a user, what the maintainers decided is usually irrelevant.
Is it okay for Microsoft/GitHub to say, “no stalebots allowed”? Absolutely. Their company, their rules. We do have a choice (and I mostly do choose to go elsewhere). So now if maintainers want a stale bot, they have to go to GitLab or something.
Again, all perfectly within the bounds.
I don’t think my thoughts on any of that matter, though, they are just observations. Again, this is more a trade-off, then a final position - what is more valuable to me, at this time, vs. maintainer, vs Microsoft.
I don’t know if I answered your somewhat-open-ended question well enough, feel free to ask more.
I think you answered it fairly well and I’m not opposed to stale bots in general. In fact, I think it’s probably a good idea to have bots that tag things as “stale” for easier filtering for or against. It’s just the closing of issues that is the problem.
Likewise, I’m much less opposed to stale-botting with closure if the maintainers post clear notice so people know what they’re getting into before they invest their time, but I’ve said that before, so let’s not start that up again.
A house is a poor framing for open source…it lacks the collaborative element.
Or… maybe it’s apt after all, but only if we think of it like a community barn-raising. You need a barn, so all your neighbors help you raise it. In turn, you’re expected to help them out when their time comes.
And if one of your neighbors’ barns gets struck by lightning and burns down, you’d have to be a terrible person not to offer to let them store some of their stuff in your barn until a new one is built.
No, I didn’t mean for the house to be the analogy, I meant, it’s their project, not a community project.
I get that a random third-party can get interested and involved and challenge the rules, but it’s still those project’s rules.
Look at the extremes. If someone came, and raised a pull-request on your latest zig hobby project that rewrites everything in Rust, or PHP, or whatever else, you would probably reject it. You made the technology choice for your own reasons. Perhaps to learn zig, perhaps for the fancy comptime. Rejecting that PR is pretty straight-forward, right? They can offer reasons (“it’s better”, “it’s faster”, “it’s slower” or similar). They can open a discussion. But with or without the discussion or reasons, if you simply reject the pull request with just a note “We decided to do this in Zig”, it would not be a problem, right? It was your technological choice.
I see it the same with the stale-bot choice. It was your project-management methodology choice. You could accept discussions, or be willing to change your potentially-inefficient project management ways, but if you don’t it’s your choice, since it is your project.
I know the two decisions are not in the same area (tech choice vs project management methodology choice). But those are the chosen ones for that particular project anyway.
That is what I meant by “my house, my rules”.
Does my reasoning make sense to you? I mean, I get that you still may not agree, but can you at least accept my point of view on why we shouldn’t expect people to stop using stale-bots, even if it’s inferior?
I understand your framing, yes. I just don’t think it’s correct to ignore the community aspects. And a PR about rewriting in another language is a bit of a straw-man; it’s an exaggerated hypothetical scenario that isn’t the real problem that arises.
Consider it from the perspective of copyright instead. My contributions to another project are under my copyright. It’s standard now to force contributors to assign their legal rights away (via CLAs), but the copyright still remains mine. Without CLAs, every project would be forced to treat their contributors as full participants. CLAs distort that.
Regardless, a contribution of code and assignation of legal rights is a gift, and warrants social consideration, if not legal consideration. It can be mostly, but not solely, your project.
Yes, I’m aware that that was an extreme example, I wrote so. I’m just saying that it’s very clear that there is some boundary where it is okay for me to have my rules for my project, and regardless of the quality, size, value of your contribution, I do not have to accept that contribution. Now, if we can agree to that - that some things are under my control, I just think that most people will have different opinions on what things are mine alone.
I think it’s acceptable for maintainers to decide on the stale-bot rule. I may think that in some cases it’s wrong, in some others it’s the correct way, but in no case do I think that I have any say in their decision. I may provide input, or express my wishes or preferences. I may try to persuade them, or pay them or whatever, to take that decision one or another way.
But whatever they decide, I don’t think it should be considered rude, anti-social, not-in-spirit-of-open-source or any of the things this entire comment section is mentioning - it’s just a simple project management decision, in my eyes, and it is theirs to make.
Taking the time to write up and submit a good bug report (potentially including investigation of relevant code, surrounding behaviours, etc.) is not treating a project the same as a product you’re paying money for. Having such bug reports closed by a bot is pretty annoying. I don’t want to waste my time submitting bug reports if the maintainers aren’t interested in actually fixing bugs, but this has happened on any number of occasions (not always with stale bots involved; sometimes the bug just sits in the bugtracker forever) with various projects.
Sure, there are lousy users with excessive expectations and demands as well. That doesn’t justify ignoring quality bug reports. If you don’t want bug reports, don’t have a public bug tracker, or at least make it clear exactly what kind of bug reports you do and don’t want and what sort of expectations the submitter should have. As a maintainer you don’t have the right to waste people’s time either.
Let’s say that in this situation, the project doesn’t use a stale bot, but everything else remains the same. Your finely crafted high-quality issue goes unacknowledged and un-actioned indefinitely. The end result is the same: you feel rejected and disrespected.
Not using a stale bot is not going to make a maintainer action your issue faster.
Ah, but having a huge list of unanswered issues is a red flag! You would not contribute time and effort to such a project. And that’s true. So what you need to do now, before submitting an issue, is to check the list of Closed issues, and eyeball if they’re closed by a stale bot or not. This is a tiny bit of extra work but less than submitting a good bug report and then nothing happening.
No, if my bug goes perpetually unanswered, I assume many things (maybe the developer is overworked; maybe, like me, they have ADHD; etc.) but I don’t feel actively rejected and disrespected.
If they use a stale-bot, it feels actively user-hostile. It says “this person doesn’t even have the decency to leave my report to languish”.
This is pushing the responsibility onto the wrong person. It’s easy enough not to think to check through issues to see if they’re being dealt with appropriately; especially so if there’s a detailed issue template (for example) that makes it look like the project takes bug reports seriously and there are no obvious “website is only held together with sticky-tape” signs that hint a project isn’t really maintained. I don’t think to check for auto-closed bugs before opening a bug report (but thanks for the suggestion, it’s something I might try to do in future); the tendency for projects - even large, supposedly maintained projects - to simply ignore bug reports (or auto-close them) isn’t something I expected until bad experience taught me otherwise, with however much time wasted in the meantime.
The result: I tend to put less effort into bug reports now, if there’s any indication at all that there might not be any interest from maintainers. If the maintainer responds at all, I’m always happy to do more legwork, but speaking as a maintainer myself, this still isn’t optimal.
On the other hand it’s trivial for a maintainer to simply shut the issue tracker down, make it private, or at least stick a note in the issue template (or even the project README) that the issue tracker is for maintainer use only (or whatever), without risk of wasting anyone’s time at all and without generally bringing down the open-source experience. I would do this; I’m not asking anyone to do anything I wouldn’t do myself, and I think it’s a better outcome than either abandoning the issue tracker or auto-closing bugs in it. But if it has to be one of those, at least just abandoning the tracker makes the state of the project clear.
It’s later addressed in the post. Sure it’s “basic etiquette” once you know it. But you may not even be aware of it. The use of stale bots is not necessarily evil. These are just some unspoken rules we’ve come to agree on. And I think maintainers should be educated rather than shat on here.
Also lol @ the flags on this post. I searched for it before and didn’t see it already posted. And I have no idea how “no stalebots” could be on-topic but “yes stalebots” off-topic…
Users can flag stories and comments when there’s a serious problem that needs moderator attention; two flags are needed to add the story or comment to the moderation queue. Users must reach 50 karma to flag. To guide usage and head off distracting meta conversations (“Why was this flagged!?”, etc), flagging requires selecting from a preset list of reasons.
For stories, these are: “Off-topic” for stories that are not about computing; “Already Posted” for duplicate submissions and links that elaborate on or responses to a thread that’s less than a week old (see merging);
Several times, including as I read this comments section, I have considered writing a bot that automatically keeps issues open for me in the most aggressive projects. It could have rules about this - e.g. maybe it would refuse to work if the stale configuration only kicked in when a label (like “needs info”) had been applied. Stuff like that.
It feels disrespectful to maintainers’ choices, even if I disagree with that choice, which is why I probably won’t do it. But man do stale bots feel disrespectful of my time too, and they also feel frankly like the result of a shoddy engineering mentality.
I think you’re not the target audience. You’ll open an issue, include details, send info, talk to devs.
I’ve seen large open source projects having hundreds of issues, most of which are not issues but requests for help. The answers to which can be found in the docs, often.
The conflict here is that you think the issues serve you. And the developers think it’s for them, to plan their project.
Issues are often used both as “things to work on” as well as “bug reports”.
Separating these somehow would let the devs do the work, and the reporters report.
Void does use a stale bot. Unless something is assigned to a maintainer, or is actively being worked on, it inflates our counts of issues with weird one-offs that nobody, including the reporter, cares to diagnose. This hides active issues, or issues we consider important. The stale bot also serves as a reminder to maintainers to consider merges that were e.g. stalled waiting for feedback.
Our numbers being bad is demotivating. We don’t want to bulk-close issues, but “stale-ness” is a useful metric for what should dominate maintainers’ free time. Meanwhile it doesn’t delete the work, so it can be revived should it prove important (a quick search will pull up such issues/PRs).
I do not believe we “lock” issues or PRs after they are closed for being stale. We do not wish to silence people, simply to put things about which nobody cares out of mind.
The problem is you have to ignore it every time you review open issues. And every time you spend a bit of time on that. Even if it’s only a second, such issues pile up and the total time lost accumulates.
That shows a lack of communication within the team. You should have a unified vision of what you want your project to look like, which determines whether an issue is relevant.
Yes, you said that. I’m asking why you think it. Does it show a lack of communication, or a lack of time on the part of the maintainer?
I agree that getting an issue automatically closed off as stale sucks. Like getting a canned reply from a job application - couldn’t they at least tell you why they didn’t hire you?
The alternative discussed here is ignoring the user. Like, the issue stands there forever open. To me, it looks like nobody cares. To me, that is the lack of communication. Like applying to a job and never getting anything back. Neither of these are good to me, because they give me nothing personal. But in one case, I’m ignored, in another, I at least got some closure and know my thing is not important any more to anyone. I can either give up hope, or try to reopen or something.
But I understand that this is my own perspective. That is why I am asking, why do you prefer getting ignored over getting the closure in the sense of “we never got around to your bug report. give up hope or take initiative.”
Of course, ignoring the issue altogether is not ideal, but I’d argue it’s still better than having a bot close it – at least other users who have the same problem can see that it has been reported and may be fixed at some point when the developers have more time. But you could make a tag saying something like “don’t have time to fix right now” and put it on such issues. That would also help the “open issues are cluttering my issues list” problem because you could just filter out this tag. And of course, if you know that you’ll never have time for fixing the issue, just close it (you can also make a tag for that).
Hmm. That’s may partially explain your “lack of communication” point of view to me. You would expect that someone would just never look at all the stale issues if the bot closes them, whereas if the maintainer does it manually, it is assumed they considered it.
I can understand that. But just as you argue that “it’s still better than having a bot close it”, I can also argue that sometimes people have no time, or desire, or habit, to do this. Perhaps they do the triage, perhaps they don’t; some will simply find it better for them to have the bot “stale” the issue if it goes 3 months without activity.
But it’s not a perfect situation in any case. I just don’t think it’s a lack-of-communication problem, but rather lack-of-time or lack-of-resources problem. Or more often, “conscious choice on where to focus ones’ energy”. Not that I think it’s a better or worse way to do things, I just think it’s a valid choice.
To be clear, my comment on communication skills was just a response to you saying that when working in a team, you might not know whether an issue is relevant. It doesn’t apply in other situations.
No, what I meant was that you commented to someone else that “if nobody responds to an issue within 3 months, it gets autoclosed” shows a lack of communication within the team. That’s why I asked why you think that using a stale bot means there is a lack of communication.
Personally I don’t think one way or another, I think these two points are not related. Whether or not a team uses a stale bot, and whether or not there’s a lack of communication are separate points, they don’t seem to be co-dependent to me (unless of course there’s a deeper reasoning).
Go look at the organization. We have a stale bot, but how recently was the last “closed for stale” issue? The stale bot is for times when nobody actually wants to do the work and the work might be in scope, if someone anywhere was able to prioritize it.
Sometimes, sure. But if it is your default policy then it takes very little effort to ignore a new issue.
If you’re putting in the effort to properly triage the issue then, sure, it’s not much more effort to tag the issue up or leave a short comment, but triaging is work and leaving public comments is social risk: you may embarrass yourself by saying something wrong and/or may invite rude or tiring comments from people.
Related: I’m done with this thread. You haven’t been rude, but I am no longer interested in talking about it.
… if the project uses issues as a bug database. Perhaps the devs are using issues only for their own, internal project planning. Yes, they’re happy to share their work. They’re happy to sometimes pick up a few details, fix a bug, work on things that you report that align with their interest. Other times they will ignore the issue and let it get automatically closed.
I seriously think this is more a problem of people in this discussion assuming that issues are only used for bug reports. And that the developers want to cater to the community. In reality, it’s often not the case. The devs will take input in the form of bug reports, but not let those things guide their development roadmap.
This (and most of the comments so far) seems to neglect one very important detail:
No two readers are the same.
I can think of cases where someone would prefer either the left or the right:
Left:
You’re stepping through the code in a debugger
You’re looking for a tricky bug that’s hidden in the details
You prefer to think in low-level details and don’t want to be distracted by the high-level “systems” view
Right:
You’re trying to answer a question to a “business” person about what steps a process takes
You’re skimming through code, trying to figure out what various modules do
You prefer to think in high-level “systems” terms rather than get bogged down in low-level details
In any case, no matter how you write the code, someone will be unhappy with it. You can’t cater to everyone, yet everyone has legitimate reasons for preferring one or the other (depending on their personality OR their current task).
I can think of a few more important details neglected.
For one, how stable is this code? Is heatOven something that’s likely to be changed? Maybe to tweak the temperature, maybe the company frequently changes ovens, whatever. If that bit is gonna be fiddled with a lot, it probably makes sense to isolate it. If it’s actually stable, then meh, inline is probably fine.
That’s a good architectural reason to split the code though, it is not about readability.
But from that perspective, here’s another thing - how big is this code? It’s easy to read things inline like in this example when both of them together fit into the screen. But business-grade code is frequently going to be much thicker. And yes, baking pizza is simple in this example.
But what if he had to go fetch the flour first? And they’re out of whole wheat - can we use another type? Oh, no, wait, is the order gluten-free? Which flour then? Oh, no, the flour shelf is empty. I need to go fetch another one from long-term storage. Do I put the order aside, or is the storage far away and I’ll have to throw away the other ingredients?
And that’s just the first ingredient of that pizza. What about the toppings? What is a “proper heating temperature”?
In my eyes, it’d be much more readable to getFlour() inline, and deal with the fiddly stuff like business logic and infrastructural logic and retries and all the other fun stuff somewhere else.
This is where the first part (architectural stability) comes back into play. Of course I can make the whole pizza as a one long Pascal procedure. But am I going to be able to read the whole thing next summer when I’m called back from vacation because the CEO was showing off to his friends and wanted to have a goldfish pizza with crust plated in ivory and the thing didn’t work?
It’s funny that you say that because one of my references on this topic is this post by Martin Sústrik where he argues that inlining is useful precisely when context can change and you feel that the problem may become more convoluted (because it is hard to design abstractions that will remain valid in such cases).
I am the author of the blog post, and I think this is spot on. It probably just turns out I am the first kind of reader here. I tend to run software in production and be paged to fix bugs (quickly) in my or other people’s code. I wrote here that one of my core principles is:
Think about debuggability in production. There is nothing worse than having your software break and not being able to figure out why.
I do think in high-level systems too but I do not need every single detail to be extracted to a function for this, I can do it in my mind. And having a “shallow stack” (fewer well-chosen abstractions) makes that easier for me too.
Sure, but the common denominator is bigger than we give it credit for. Though it would be easy to argue that even code locality depends on the use case: debugging vs answering a business question is a very good one.
Time for some Guix advocacy, then?
As I’ll fight not to use SSPL / BUSL software if I have the choice, I’ll make sure to avoid GNU projects if I can. Many systems do need a smidge of non-free to be fully usable, and I prefer NixOS’ pragmatic stance (disabled by default, allowed via a documented config parameter) than Guix’s “we don’t talk about nonguix” illusion of purity. There’s interesting stuff in Guix, but the affiliation with the FSF if a no-go for me, so I’ll keep using Nix.
Using unfree software in Guix is as simple as adding a channel containing the unfree software you want. It’s actually simpler than NixOS because there’s no environment variable or unfree configuration setting - you just use channels as normal.
Indeed, the project whose readme starts with:
That’s exactly the illusion of purity I mentioned in my comment. The “and to avoid any unnecessary hostility” part is pretty telling on how some FSF zealots act against people who are not pure enough. I’m staying as far away as possible from these folks, and that means staying away from Guix.
The FSF’s first stated user freedom is “The freedom to run the program as you wish, for any purpose”. To me, that means prioritizing Open-Source software as much as possible, but pragmatically using some non-free software when required. Looks like the FSF does not agree with me exercising that freedom.
The “avoid any unnecessary hostility” is because the repo has constantly been asked about on official Guix channels and isn’t official or officially supported, and so isn’t involved with the Guix project. The maintainers got sick of getting non-Guix questions. You have an illusion that there’s an “illusion” of purity with the Guix project - Guix is uninvolved with any unfree software.
This is both a fundamental misunderstanding of what the four freedoms are (they apply to some piece of software), and a somewhat bizarre, yet unique (and wrong) perspective on the goals of the FSF.
Neither the FSF nor Guix is preventing you from exercising your right to run the software as you like, for any purpose, even if that purpose is running unfree software packages - they simply won’t support you with that.
Thanks for clarifying what I already knew, but you were conveniently omitting in your initial comment:
Using unfree software in NixOS is simpler than in Guix, because you get official documentation, and are able to discuss it in the project’s official communication channels. The NixOS configuration option is even displayed by the nix command when you try to install such a package. You don’t have to fish for an officially-unofficial-but-everyone-uses-it alternative channel.
I sort of came to the same conclusion while evaluating which of these to go with.
I think I (and a lot of other principled but realistic devs) really admire Guix and FSF from afar.
I also think Guix’s developer UI is far superior to the Nix CLI, and the fact that Guile is used for everything including even configuring the boot loader (!).
Sort of how I admire vegans and other people of strict principle.
OT but related: I have a 2.4 year old and I actually can’t wait for the day when he asks me “So, we eat… dead animals that were once alive?” Honestly, if he balks from that point forward, I may join him.
OT continued: I have the opposite problem: how to tell my kids “hey, we try not to use the shhhht proprietary stuff here”.
I have no trouble explaining to them why I don’t eat meat (nothing to do with “it was alive”; it’s more to help boost the non-meat diet for environmental etc. reasons. Kinda like why I separate trash.). But how to tell them “yeah, you can’t have Minecraft because back in the nineties the people who taught me computer stuff (not teachers btw) also taught me never to trust M$”? So, they play Minecraft and eat meat. I… well I would love to have time to not play Minecraft :)
I was there once. For at least 5-10 years, I thought Nix was far too complicated to be acceptable to me. And then I ran into a lot of problems with code management in a short timeframe that were… completely solved/impossible-to-even-have problems in Nix. Including things that people normally resort to Docker for.
The programming language is basically an analogue of JSON with syntax sugar and pure functions (which then return values, which then become part of the “JSON”).
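A made-up snippet (not from nixpkgs) to illustrate: attribute sets look like JSON with less punctuation, and functions are pure values that build more of them:

let
  # mkUser is a pure function: name -> attribute set
  mkUser = name: {
    inherit name;
    home = "/home/${name}";
    shell = "/bin/sh";
  };
in {
  users = [ (mkUser "alice") (mkUser "bob") ];
  services.nginx.enable = true;   # nested attribute path, NixOS-module style
}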
This is probably the best tour of the language I’ve seen available. It’s an interactive teaching tool for Nix. It actually runs a Nix interpreter in your browser that’s been compiled via Emscripten: https://nixcloud.io/tour/
I kind of agree with you that any functional language might have been a more usable replacement (see: Guix, which uses Guile which is a LISPlike), but Python wouldn’t have worked as it’s not purely functional. (And might be missing other language features that the Nix ecosystem/API expects, such as lazy evaluation.) I would love to configure it with Elixir, but Nix is actually 20 years old at this point (!) and predates a lot of the more recent functional languages.
As a guy “on the other side of the fence” now, I can definitely say that the benefits outweigh the disadvantages, especially once you figure out how to mount the learning curve.
The language takes some getting used to, that’s true. OTOH it’s lazy, which is amazing when you’re trying to do things like inspect metadata across the entire 80,000+ packages in nixpkgs. And it’s incredibly easy to compose, again, once you get used to it. Basically, it’s one of the hardest languages I have learned to write, but I find it’s super easy to read. That was a nice surprise.
Python is far too capable to be a good configuration language.
Well, most of the popular posts mainly complain about the problems that Nix strives to solve. Nix is not a perfect solution, but any other alternative is IMO worse. The reason for Nix’s success, however, is not in Nix alone, but in the huge repo that is nixpkgs, where thousands of contributors pool their knowledge.
Came here to say exactly that. And I’d add that Nix also makes it really hard (if not outright impossible) for shitty packages to trample all over the file system and make a total mess of things.
I absolutely agree that Nix is ideal in theory, but in practice Nix has been so very burdensome that I can’t in good faith recommend it to anyone until it makes dramatic usability improvements, especially around packaging software. I’m not anti-Nix; I really want to replace Docker and my other build tooling with it, but the problems Docker presents are a lot more manageable for me than those that Nix presents.
came here to say same.
although I have the curse of Nix now. It’s a much better curse though, because it’s deterministic and based purely on my understanding or lack thereof >..<
How is it better to run a service as a normal user outside a container than as root inside one. Root inside a container = insecure if there is a bug in docker. Normal user outside a container typically means totally unconfined.
No, root inside a container means it’s insecure if there’s a bug in Docker or the contents of the container. It’s not like breaking out of a VM, processes can interact with for example volumes at a root level. And normal user outside a container is really quite restricted, especially if it’s only interacting with the rest of the system as a service-specific user.
Is that really true with Docker on Linux? I thought it used UID namespaces and mapped the in-container root user to an unprivileged user. Containerd and Podman on FreeBSD use jails, which were explicitly designed to contain root users (the fact that root can escape from chroot was the starting point in designing jails). The kernel knows the difference between root and root in a jail. Volume mounts allow root in the jail to write files with any UID, but root can’t, for example, write files on a volume that’s mounted read-only (it’s a nullfs mount from outside the jail, so root in the container can’t modify the mount).
None of the popular container runtimes do this by default on Linux. “Rootless” mode is fairly new, and I think largely considered experimental right now: https://kubernetes.io/docs/tasks/administer-cluster/kubelet-in-userns/
https://github.com/containers/podman/blob/main/rootless.md
Broadly, no. There’s a mixture of outdated info and oversimplification going on in this thread. I tried figuring out where to try and course-correct but probably we need to be talking around a concept better defined than “insecure”
Sure, it can’t write to a read-only volume. But since read/write is the default, and since we’re anyway talking about lazy Docker packaging, would you expect the people packaging to not expect the volumes to be writeable?
But that’s like saying alock is insecure because it can be unlocked.
I don’t see how. With Docker it’s really difficult to do things properly.
A lock presumably has an extremely simple API. It’s more like saying OAuth2 is insecure because its API is gnarly AF.
This is orthogonal to using Nix I think.
Docker solves two problems: wrangling the mess of dependencies that is modern software and providing security isolation.
Nix only does the former, but using it doesn’t mean you don’t use something else to solve the latter. For example, you can run your code in VMs or you can even use Nix to build container images. I think it’s quite a lot better at that than Dockerfile in fact.
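For example, building an OCI/Docker image can be a single Nix expression; a rough sketch using nixpkgs’ dockerTools (attribute names from memory - check the nixpkgs manual before relying on them):

{ pkgs ? import <nixpkgs> { } }:

# Hypothetical image wrapping GNU hello; the contents and layers come out
# reproducibly from the Nix build rather than from a Dockerfile.
pkgs.dockerTools.buildImage {
  name = "hello-image";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}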
How is a normal user completely unconfined? Linux is a multi-user system. Sure, there are footguns like command lines being visible to all users, sometimes open default filesystem permissions, or the ability to use /tmp insecurely. But users have existed as an isolation mechanism since early UNIX. Service managers such as systemd also make it fairly easy to prevent these footguns and apply security hardening with a common template.
In practice neither regular users nor containers (Linux namespaces) are a strong isolation mechanism. With user namespaces there have been numerous bugs where some part of the kernel forgets to do a user mapping and thinks that root in a container is root on the host. IMHO both regular users and Linux namespaces are far too complex to rely on for strong security. But both provide theoretical security boundaries and are typically good enough for semi-trusted isolation (for example different applications owned by the same first party, not applications run by untrusted third parties).
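As an illustration of that kind of common hardening template (the directives are real systemd options; the service itself is hypothetical):

# /etc/systemd/system/example.service
[Service]
ExecStart=/usr/bin/example-daemon
DynamicUser=yes          # run as a transient, unprivileged user
NoNewPrivileges=yes      # block privilege escalation via setuid binaries
PrivateTmp=yes           # private /tmp, avoiding the shared-/tmp footgun
ProtectSystem=strict     # mount the file system hierarchy read-only for the service
ProtectHome=yes          # hide other users' home directories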
I like a lot of this advice, parts like “always return objects” and “strings for all identifiers” ring with experience. I’m puzzled that the only justification for plural names is convention when it’s otherwise not at all shy of overriding other conventions like 404s and .json URLs. It’s especially odd because my (unresearched) understanding is that all three have a common source in Rails API defaults.
The difficulty with 404 is that it expresses that an HTTP-level resource is not found, and that concept often doesn’t map precisely to application-level resources.
As a concrete example, GET /userz/123 should (correctly!) 404 because the route doesn’t exist - I typoed users. But if I do a GET /users/999 where 999 isn’t a valid user, and your API returns 404 there as well, how do I know that this means there’s no user 999, instead of that I maybe requested a bogus path?
From solely the status code, you don’t.
Fortunately, though, HTTP has a thing called the response body, which is allowed to supply additional context and information.
Of course, but I shouldn’t need to parse a response body to get this basic level of semantic information, right?
Yeah, you should, because if we require that there be enough fine-grained status codes to resolve all this stuff, we’re going to need way more status codes and they’re going to stop being useful. For example, suppose I hit the URL /users/123/posts/456 and get a “Route exists but referenced resource does not” response; how do I tell from that status code whether it was the user or the post that was missing? I guess now we need separate status codes for each of those cases. And on and on we go.
Or we can use a single “not found” status code and put extra context in the response body. There’s even work going on to standardize this.
Remember: REST is not about encoding absolutely every single piece of information into the verb and status code, it’s about leveraging the capabilities of HTTP as a protocol and hypertext as a medium.
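As a rough sketch of what that looks like in practice (a hypothetical route; the body fields follow the RFC 7807 “problem details” shape that the standardization work builds on):

from flask import Flask, jsonify, make_response

app = Flask(__name__)
USERS = {"123": {"id": "123", "name": "Alice"}}

@app.get("/users/<user_id>")
def get_user(user_id):
    user = USERS.get(user_id)
    if user is None:
        # Same 404 status code as an unknown route, but the body says
        # exactly which application-level resource is missing.
        problem = {
            "type": "https://example.com/problems/user-not-found",
            "title": "User not found",
            "status": 404,
            "detail": f"No user with id {user_id} exists.",
        }
        resp = make_response(jsonify(problem), 404)
        resp.headers["Content-Type"] = "application/problem+json"
        return resp
    return jsonify(user)

# GET /userz/123 -> the framework's default 404 (route does not exist)
# GET /users/999 -> 404 with a problem+json body explaining what was not found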
There’s yet another interpretation too, that’s particularly valid in the wild! I may send you a 404 because the path is valid, the user is valid, but you are not authorized to know this.
Purists will screech in such cases about using 403, but that opens to enumeration attacks and whatnot so the pragmatic thing to do is just 404.
Perhaps a “204 No Content”?
That doesn’t convey the message “yeah, you got the URL right, but the thing isn’t there”
I think it basically does.
Well, it says “OK, no content”. Not, “the thing is not here, no content”. To me these would be different messages.
The last time this came up, I said “status codes are for the clients you don’t control.” Using a 404 makes sense if you want to tell a spider to bug off, but if your API is behind auth anyway, it doesn’t really matter what you use.
https://lobste.rs/s/czlmyn/how_how_not_design_rest_apis#c_yltriz
You never control the clients to an HTTP API. That’s one of the major selling points! Someone can always curl.
Well before that happens, such systems will support and empower the worst kinds of political leader.
I’m expecting an onslaught of fully-automated, GPT-powered, X-amplified bullshit during the 2024 US elections that will make us nostalgic for the onslaught of merely human-generated bullshit in 2020.
Let’s be honest, we already have that. It’s called governments and large corporations. Through emergent complexity, these entities are as inscrutable as neural nets. These systems have developed a life of their own and are governed by rules that no human can understand or control, almost like an alien species that settled down on this planet. Thus, the fear is largely unfounded as we already find ourselves in this scenario and have been living like this for a long time.
I’m not so worried about that. Those political leaders can already employ teams of propaganda specialists to craft their messages. Things like ChatGPT are also going to empower the most toxic kind of social media ‘influencer’ by giving them tools that are almost as good.
The problem here is again that of scale. If you can outbullshit the other guy, people will never have a chance of seeing his truth, just more bullshit from everywhere.
I agree that the problem is scale, I just don’t see it being led by politicians. Established politicians already employ teams of psychologists to identify bullshit that will resonate with their target audience. The amount of bullshit they can generate is a function of money and they have a lot of money. At the moment, there are few politicians as a proportion of the general population. Things like ChatGPT make this kind of thing accessible to the general public. Any random conspiracy theorist can easily flood the world with bullshit that supports their world view.
TL;DR: the guy runs his dev env at home, then remotes into it with VSCode. My question is about this:
Is there a neovim plugin of some sort that could do this? Replicate the stuff locally and then do synchronisation under the hood?
Original vim ships with netrw, which enables stuff like :e sftp://example.com/file/path.c.
I doubt that works with LSP or the like, however.
Yes, I’m aware, but I thought that reads/writes directly on the net. What the author in the post is saying, VSCode will make a local copy of the file, then all your operations are fast, and it will silently take care of the synchronization for you. So like if you were to :w the file, it might have some noticable latency, while in VSCode you would not see the latency - you just save the local file, and go on working, while Code does the sync.
I don’t think that this is what the author is saying. They seem to be saying that with vim over ssh, your keystrokes are sent over the network, so every letter you type gets round trip latency; when you edit with vscode’s remote support, the keystrokes stay local, and only saving and formatting goes remote.
Yes, exactly, I was a bit imprecise but this is the essence of my question.
It should be clarified that VSCode doesn’t do “file synchronization”. It does much more than that: all of the language support (indexing, completion, etc.) and many of the extensions you install run remotely too. I’m saying this because I often see it compared to Emacs’ tramp, and I do not think tramp does any of this… or at least I haven’t gotten it to…
tramp does execute most, if not all, commands remotely for remote buffers, so things like grepping or LSP tend to work correctly via tramp if the tools are installed on the remote machine.
Does it? My most recent experience seemed to imply that things like the spell checker were running on my client machine, not the remote… And I’m not sure I ever saw it running rust-analyzer on the remote machine in the past. Is there any magic to configure?
Generally yes, see https://www.gnu.org/software/emacs/manual///html_node/tramp/Remote-processes.html for details
This has some downsides too. It means that your remote machine has to be capable of running all of the developer tools. This is great for the Azure use case: your remote machine is a big cloud server, your local machine is a cheap laptop. It’s far more annoying for the embedded case: your local machine is a dev workstation, your remote machine is tiny device with limited storage, or possibly a smallish server with a load of boards connected to it.
Agreed, I was trying to use it to connect to a production server, not for main development but for quick tweaks. It installed so much stuff on the remote server that it slowed it way down. Scared me, didn’t try again.
I use tramp in Emacs to do this; some brief searchengineering doesn’t find a vim version 🤷
I don’t use it, but vim-airsync has some stars and looks simplistic but perfectly plausible.
Does it need to be a text editor feature? I haven’t used it for codebases of any great size but SyncThing is up to the job as far as I know; someone gave a lightning talk at PGCon 2019 about their workflow keeping an entire homedir mapped between two computers.
Yes, there’s also use case for this. I was curious about neovim specifically in this case though.
I use the VSCode remote stuff all day every day, but previously I used vim over ssh all day every day, so whatever.
Also, I sometimes use vi between the US and Australia and it’s really not that bad. I’d rather use something like vim that’s just a fancy front-end to ed/ex. Trans-pacific latency’s got nothing on a teletype…
Mosh has helped me over a decade to deal with latency issues.
Yes, I know and I do that occasionally. But I don’t think it would work if I had to do it non-stop, as my primary activity. The latency is barely noticeable but it’s there. I remember that from my operations days.
/me looks at the code powering my main blog since 2004… still the same Blosxom script…
I agree, this is a huge sticking point in the whole story. I think this is kind of a forced thing. For static data, you only need an offline script to regenerate things. Even if you had a script from 2004 - if it still works, it can regenerate stuff when you add a new markdown file.
The problem here is, will Next.js 12 work in 19 years, like your Blosxom script? Maybe, maybe not. So you go and upgrade, just to be sure.
I did, and now I do have a website, but there’s no RSS feed and the entries are sorted randomly and not by date. At least I can deploy via git push, but I’m actually kind of missing a WYSIWYG text editor for quick notes. And to create a new page I need to mkdir article_name and add it to a list in a Python script, which kind of sucks really.
I am more and more convinced: for playing with tech, go build SSGs. For writing blogs, use Publii or something.
I agree, the biggest barrier for me when it comes to writing blog posts isn’t the site generation or deployment or formatting, it’s actually typing the words out. And Hugo/Jekyll/… don’t help with that problem.
I use most things like these as aliases. Usually you’d see no difference.
But if it’s a function, and you call it from somewhere else, like GNU screen or some other scripts, it exits the shell after it’s done. Which isn’t always what you want.
I think it’s important to distinguish between “not nice” and “cruel”. When I was on Twitter, I’d regularly see this:
I’ve lived parts of my life with people who used topics like accessibility as an opportunity to be cruel and vindictive. So while it’s important to not have an ego about it, and accept that people are frustrated for legitimate reasons, not everybody shaming you want you to Do Better. Some just want to be mean.
oh this is quite generous. in the general case (i can’t speak to accessibility shaming in particular but i doubt it’s an exception), i’d argue the overwhelming majority of morality police of any stripe, especially on twitter, simply enjoy seeing the faces of the pilloried covered in rotten tomato. a sadism ticket with a free righteousness stamp – intoxicating!
there are some, a small minority, who genuinely care, and because they want to be effective they are generally nice about things.
I don’t know what is it like to need assistive technologies like this. I don’t think the people who do need these owe anyone anything. And I certainly don’t think they aren’t frustrated by everything.
But I always believed that it’s better to be nice than mean. Not because other people deserve it or something, but simply because, for me, being negative brings more negativity to me; basically I’m making myself even less happy than I was.
I know it’s not the same for everybody, but what I have experienced, what I have observed, and what I’ve learned from psychology and other areas, all that tells me it’s gonna work for majority of people. Maybe not overwhelming majority, but a majority.
Now, maybe being cruel here isn’t about how the cruel guy feels. Maybe it’s just an act, to get someone to change something. Maybe something else entirely.
But I definitely think that being nice is better than being cruel most of the time. Especially when you want other people to change something that concerns you.
It’s hard for me not to be cynical about this—yet another bespoke browser written in Javascript, because, you know, the @#$@#$@ browser you’re using just isn’t good enough (I’m looking at you, Portland Pattern Repository Wiki). I just know people will fork this to ensure you will get the “enhanced experience” with their website (or more like, “no experience except with their special Javascript browser”).
(Hot take, I know)
Oh, but if you’d read the article, you can see that the main purpose of this isn’t to be the next big thing, the author just wanted to have fun with something of a silly scope that they tackled anyway.
Huh what other browser engines are written in JS?
There’s the one that powers the Portland Pattern Repository Wiki (aka the C2 wiki).
You’ll have to be more specific. What part of C2 is “powered by” a browser engine..?
To be clear, a “browser engine” is basically the whole web browser except for the UI. It’s Chromium’s Blink, Firefox’s Gecko, Safari’s WebKit. Its job is to parse HTML and CSS and JavaScript and render the page based on the HTML+CSS and expose the document to the JavaScript through the DOM. I don’t understand how such a thing could possibly “power” a wiki.
When you visit C2, the only HTML you get is
and a bunch of Javascript. This Javascript will then query the server for the page you requested, which is returned as JSON. It then parses the JSON to pull out the content, which is not in HTML but possibly Markdown? Its own markup language? Anyway, it then translates this text to HTML and uses that to populate the DOM, replacing the above text. At least, if the browser you are using has the latest and greatest Javascript. If not (or you have Javascript turned off), all you get is the above.
Okay, so maybe it’s a stretch to say that it’s rendering the HTML directly onto a browser canvas, but C2 isn’t exactly an application like GMail. But it’s a text only site that worked for decades without Javascript. The current version (which hasn’t been updated since 2015) now displays all the C2 links you click on an ever-expanding canvas that requires not only vertical scrolling, but horizontal too. I personally don’t think it’s better, and find it visually cluttered and probably plays hell with accessibility, hence I consider it a “bespoke web browser”. If C2 wasn’t such a good source of information, it wouldn’t be worthy of thought, but alas …
What you’re describing is simply a bunch of client-side JavaScript generating HTML or DOM nodes. That’s a standard single-page application style site. That’s not even remotely close to what a “browser engine” is. The only browser engine involved when browsing C2 (or using GMail or Google Docs for that matter) is still just the one in Firefox or whatever browser you’re using.
You can be annoyed at websites which rely on client side javascript, that’s fine, but don’t let that anger bleed over to something that’s just a completely unrelated cool project to build a new browser engine.
C2 wiki isn’t a web browser. It’s just a Javascript-dependent website capable of displaying multiple documents.
I’ve never used this or the C64, the following question is not meant to criticize or demean:
What makes the C64 so good? Why is this your dream computer?
I had a C128 which is not as much a cult icon as the C64. I completely get the nostalgia hit of pounding away on the first computer you learned programming on. I have some nostalgia for the VIC-20 which I never owned but was my first exposure to computing.
I also get a nostalgia hit from seeing the iconic screen displays of past computers I worked on, even more recent ones like Sunsparcs.
What I personally wouldn’t do is buy at great expense a board that half emulates a C64 or Spectrum 48K or something and looks like some random box or circuit board.
If I had the space and money for it, I would buy an actual C64 off ebay.
Yeah, I’m glad they’re doing what they want… but this doesn’t seem like my dream either. Maybe I’m just missing out, not having been a C64 user (though I think I’d feel the same about a similar Atari or Apple take), but this project has always seemed to me to go down a kind of weird rabbit hole of retro fetishism in some ways yet stay boringly bland modern tech in others.
Regardless of how sympathetic I’d like to be towards the project, I find myself very much agreeing with the Byte Attic’s older post where he elaborates on serious shortcomings of the X16 and how it just misses the point. 8BG does an amazing job when it comes to computing documentaries and I find his content entertaining, but with regard to this specific project he seems to have fallen victim to the sunk-cost fallacy.
Absolutely. I hadn’t seen this before, but it sums up almost all my objections/disagreements with the directions they’ve gone.
My impression is that it kind of started from an impossible vision and then just spiraled from there through “compromise” decisions, without much consideration of whether the mix made sense, especially since a lot of it is pushing up the price - which further limits the audience.
Yeah, that seems likely. I haven’t followed the project much, but I thought early on the whole reason behind starting a new design was to go for a low price. Didn’t 8BG talk about working with the Foenix people for a while previously, but leaving for his own thing when it became clear what their price point was going to look like? (I swear I remember seeing this, but can’t find it in his old videos now.) That seems not to have panned out at all, given that the all-in price for this, once you add the power supply, case and keyboard to get a complete-ish system, is about the same as the comparable base level of their machines.
Yes. When I want my hit of looking at a display from ye olde times, I fire up Vice, DosBox or some such and play a game. If I had the space and budget I might buy an actual ZX Spectrum, C64, C128 or BBC Micro, but as I get older I just hate the junk I’ve accumulated, so I probably won’t.
Just for my own 2¢ worth…
I own 2 ZX Spectrums. One is my original unit from 1986, bought ex-demo so a poor student could afford it. The second was a gift about 15Y ago.
I also own an Atari ST, Amiga, Archimedes, Sinclair QL, Amstrad PCW, a VAXstation, Macs with 68000, 68030, G3, G4 and G5 CPUs, and more. All were either free or very cheap, which is why I get these things: they were exotic expensive things I couldn’t afford when young, but a decade or two later were cheap or free. I am not a “collector” and I don’t buy and sell. I try to save what is worth saving.
I don’t and never did collect 1980s 8-bits: there were just too many of them, and most were frankly rubbish. A friend of mine did, amassed a more or less complete collection, and 20Y later realised he never ever played with them, sold the lot, and bought a used, first-model-year, Tesla Model S in cash.
Note for Americans: I’m European. We had lots of computers you’ve never heard of. Yours were very expensive here and thus poor value for money, so we didn’t buy many of them… and so were and are your cars and motorcycles. All of them are expensive, brash, loud, toys for the rich here.
(Comparably, I have 5 IBM Model M keyboards, but all were free: I saved them from being recycled or thrown away.)
Emulators have no feel. You don’t get to know what it was like to use a Spectrum by running a program on a different computer with a different keyboard, any more than you find out what it’s like to drive a Lamborghini by having one as your desktop wallpaper, or get the feeling of walking through a mountain rainforest by browsing a photo gallery, or of riding a racehorse by watching a lot of Tiktok videos.
These things are all still out there. They’re all still cheap if you look hard. Very few were rare or exclusive in their time, which is why I do not own a NeXTstation or a Lisa.
This is not an expensive hobby unless you’re a fool who actively looks for ways to get ripped off by strangers on the internet. Unfortunately, those are the loud folks on social media.
Obviously I can’t answer for the OP, but I can explain some of the nostalgia for the C64.
It was one of the first mainstream affordable home computers that wasn’t horribly compromised. For its time (1982) its spec of 64kB RAM was very generous, for its generation it had excellent sound and very good graphics. It was an unashamed games computer, with hardware sprites and colourful graphics, built-in joystick ports, a good keyboard, and a 40-column screen display. This was considerably better than its predecessor, the VIC-20, which had just 5kB of RAM and a 22 column display, and it was much cheaper than its late-1970s forerunner machines such as Commodore’s PET, the Apple II and the various models of TRS-80.
The C64 also supported floppy disk drives as standard.
What gets less coverage is that it was still a very compromised machine. It was expensive – $595 at launch – had an absolutely terrible BASIC inherited directly from the PET, which destroyed the language’s reputation, as I have blogged about to widespread criticism. The disk drives were expensive, tiny in capacity, and horribly horribly slow.
But it dominated the American market and I’ve found that many American home-computer owners are unaware that in other countries there were less-compromised machines for better prices, such as the Sinclair ZX Spectrum, which also sold in the millions of units, even excluding nearly 100 compatible clones, and the BBC Micro, also 6502-based, which also sold in huge numbers and whose successor model was the first affordable RISC-based computer, and spawned a line of CPUs that is the best-selling in history, outselling all x86 chips put together by 100x over, and whose original 1980s OS is still maintained and is now FOSS.
I’ve also seen the C64’s very weird 1985 successor machine, the C128 (twin processors, with a Z80 and – very slow – CP/M compatibility, thus totally failing to address the C64’s core market of video-gaming children) called “the last new 8-bit computer”, although in other countries there were new 8-bit machines being launched well into the 1990s.
Rival machines like the ZX Spectrum were usually cheaper, had better BASICs (not hard), more and faster storage, but worse keyboards and worse graphics and sound, to keep costs down.
All the same, the C64 was the first computer of many millions of people – the first model sold something like 6 million units – and so a lot of people are very nostalgic for it.
For those interested in the history, “Commodore: A Company on the Edge” is pretty good. Not the best writing, but fascinating factually. (I was a ZX Spectrum + Apple II kid, fwiw.)
IIRC, the reason for the old version of BASIC was that they had managed to negotiate a fixed-price rather than per-install license for the earlier version and didn’t want to pay more.
There are a couple of other reasons for the popularity of the C64:
The C64 disk drive was comically slow compared to the Disk II from Apple — some people consider the Disk II design the best exemplar of Woz’s genius, but it did have some interesting characteristics: eg. a bunch of C64s in a school computer lab could share a couple of drives.
Noted. More of a Sinclair and Acorn type myself but I will look out for it.
I did use a “network” of several PETs at school which shared a dual floppy drive. That was impressive stuff.
We had the same at my school. In Grade 9 (I think! it’s been a while ;) ) we got a Superpet.
Hey, don’t dish my speccy, that rubber keyboard with 4, 5 symbols per key was unparalleled at the time! :)
On a serious note, to add to the answer you provided: one of the big things about the c64, speccy and all these microcomputers was that they were approachable and learnable. I doubt anyone knew all the poke addresses by heart, but the concept was clear and you could program whatever you wanted with just maybe the reference manual, even as a kid. Try giving the reference manual for any modern computer to anyone today.
I know it’s a different thing, but for that era, when we played Space Invaders or whatever was popular at the arcades back then, it was magical to be able to do it yourself.
“Dis”/“diss”?
I had one too. I stuck it in a LMT 68 FX2 – in fact, specifically, this is my keyboard I think. I was not a fan of the rubber keys. ;-)
I do sometimes wonder if it’d be possible to construct a semi-modern computer of comparable complexity.
I moved on to an Acorn Archimedes myself. A maxed-out Archie was still quite understandable. 4MB of RAM, ST-506 HDD, flat framebuffer display.
Could we do a C21 version? Maybe based around RISC-V, and Oberon as the OS/language?
Right, I don’t know if I misspelled that or got corrected.
At the time I didn’t know better keyboards, and this one seemed awesome compared to the Commodores that my friends had. The thing that was important to me was that you had practically the entire basic right there in the open.
I think I saw some Kickstarter and similar efforts. But not just that, people are still looking for and finding working examples.
Like I mentioned above, the big thing for me was that all of basic was right there. If I didn’t know what the words meant, I could randomly play with it and stumble upon something interesting. Sometimes I even knew what it did :)
I thought about getting something similar for my kids. Just give it to them and see what they come up with. There are more suitable programming languages and environments for learning, but it’s not the same as knowing the poke addresses of the entire computer.
Since in hindsight I chose a bit of an inexpressive title, tl;dr:
breakelse is a new keyword that jumps directly to the else block of an if. With a bit of syntax sugar, this gives an alternative to conditional-access operators.

You could have said so right away, not make me give up after a bunch of misdirections and tangents. I even scrolled down looking for what it actually is, saw the example, but wasn’t sure what exactly it did since I didn’t read all the text preceding it.
I mean, I liked that you go in-depth with the reasoning, but it felt like the tangents on furries and the step-by-step dives deeper into the problem before you fully explain it are just there to keep me on the page longer.
But it’s maybe just my preferences.
Back on topic though.
Anyway it’s neat that you do Neat. The breakelse keyword itself is also…neat, but I am not sure I like the semantics of it. Again it’s probably just my preference for simple things without a lot of magic and suspense, but going deeper into an if branch and then still possibly breaking off in a different direction… It just looks like an overcomplicated algorithm to me. As @cpurdy says, I’d rather just use goto.
And another thing, which is probably not relevant, is the compiler backend. If it can’t trust the if to discard a whole false branch, it’s gonna be tricky to optimise this code. It looks like it would be trivially easy to put a tricky bit like this on a hot path.
I do very little lower-level programming these days and I’m probably out of the loop, but this seems like an interesting thought experiment; I just don’t see it going much beyond that.
Thanks for your feedback, I added a tl;dr section!
Yeah, ? is a very “high-level” tool: as a language feature, it’s basically “get rid of anything from that sumtype on the left that isn’t the actual value”. It gets rid of error types by returning them, too. But at least it’s explicit about it - every time a ? appears in the code you can mentally draw flow arrows going to the closing } of the if block and the function. It’s the “clean this up for me, do whatever you think is right” operator.

Performance-wise, note that conditional access has exactly the same problem but worse. With ?, you branch once - to the end of the if block. With ?. for conditional access, you have to keep branching every time you perform another chained operation on the value you’re accessing.

I’m not sure what you mean with the “false branch” bit? The tag field of the sum type is an ordinary value and subject to constant propagation like anything else. The .case(:else: breakelse) branch will be removed if the compiler can prove that the sumtype will never be in :else.
I agree with this list of reasons, though my take is that the core reason Git is hard is that it solves a different problem than people think it does, and all of these are simply effects of that. And the problem it does solve is gnarly, one which it doesn’t look to simplify or abstract away but simply give the tools to properly deal with. I recommend everyone who uses Git to read the Git book, which is very approachable and well made. Once you understand the why and how, Git actually becomes straightforward to work with.
The core problem Git gives the tools to solve is an offline-first, distributed consensus with divergence.
The reality is that Git is solving for a problem that most people don’t have, and especially newer developers aren’t familiar with. Git was designed to solve the problems that face large Open Source projects with many independent developers with little coordination and all styles of workflow. And at that, Git is incredibly effective. But if you are working on a small team on an isolated part of the project, you need none of those tools and something far simpler would suffice.
Once you appreciate the problem it’s trying to solve, and realize that the Git CLI is really just a set of tools to modify a merkle-tree, it becomes far less alien. Which is not to say that Git isn’t hard to learn – it is – but not because it has a lot of quirks or is really weird; it’s just solving a more complicated problem and requires up-front effort to properly understand what it’s doing and why.
Now, whether it’s good that one of the first tools many people have to learn to even get started with programming involves advanced, overwhelming topics is another question. I’m curious if there is another set of trade-offs which is better for this. Mercurial is supposedly a lot easier to learn, and I wonder if something like Pijul will also turn out to be easier to understand.
I don’t think these constraints explain git’s difficulty. For example, none of these really require supporting repos staying in detached HEAD state, or using jargon like “detached HEAD”.
Fair enough, I’m not here to defend Git’s… interesting CLI UX choices. Though I would argue that the whole detached HEAD state is to facilitate easy forking (sometimes the commit you want to fork on is not a named branch).
Git is a surprisingly thin layer on top of the merkle tree (barring compression/optimizations) and a lot of CLI weirdness is pretending that it’s not and that it’s a VCS.
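To make “thin layer on top of the merkle tree” a bit more concrete, here’s a minimal sketch (Python, written from memory, assuming the default SHA-1 object format) of how git names a blob; trees and commits are hashed the same way over payloads that reference their children’s IDs, which is what makes the whole store a Merkle tree.

    import hashlib

    def git_blob_id(content: bytes) -> str:
        # git's ID for a blob is just SHA-1 over a small header plus the file's bytes
        header = b"blob " + str(len(content)).encode() + b"\0"
        return hashlib.sha1(header + content).hexdigest()

    # should print the same ID as: echo 'hello' | git hash-object --stdin
    print(git_blob_id(b"hello\n"))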
I think you’re spot on in that it solves a problem few people or orgs have. I have used Subversion both professionally and privately for more than a decade. Now everyone uses git so I have to use that professionally, but privately I keep using svn. It solves the problems I actually have. Its metaphors and concepts map nicely to my actual work.
It is weird because much of my professional work would fit just fine in svn as well. Much of that work is strictly hierarchically organized. There is no decentralized development like there is in open source projects.
I know that git is cool. But if I compare the hours spent in frustration over not knowing how to do X in git vs svn the amount of hours wasted is ridiculous.
I know people will likely laugh at me for using subversion…
I used SVN at work. I hated it. The only feature I liked about SVN was the ability to check out a subdirectory, and I don’t even miss it that much with git for my own stuff. Everything else about SVN was a pain. Working from home and the VPN goes down? No more check-ins for a while. Made a bit too many changes? No interactive staging. And too many times I committed a file by mistake into SVN. Branching under SVN was too much overhead, and forget easy merges.

With that said, SVN did fit work much better—centralized, micromanaged, hierarchically organized (in fact, those are the very reasons SVN was originally picked, and at that time, using git should be a fireable offense, per the person responsible for choosing SVN). Switching to git was painful (was, because I left before it was finished), with management pushing the use of git submodules (which don’t work at all in my opinion).

I used CVS back in the day, and it made sense. I use Git today, and it also (mostly) makes sense. SVN, though, threw me for a loop, and I never got a handle on just how it works, and how it represents history and branches. Even using a GUI (SmartSVN) didn’t help.
In my early days of using git privately, I asked a senior at work about how to use branches in svn. They told me to just forget it.
svn branches were very easy to use until you started merging things. Nothing was kept track of, so you’d merge a branch, add to it, and then have to go check your notes to see what you had already merged. Total nightmare.
That’s why I was told to forget it, IIRC.
I started with svn early on, and I didn’t think much about it. Then I used git for some reason (needed it for work or some side project, I don’t remember any more). I don’t remember the reasons, but I remember that it felt so much better than svn. I think I even started using it for my svn work project for a while before I left.
One of the reasons I do remember is that it was offline-capable. Well, not offline, but local-first. I could do whatever locally, have a million little branches and experiments, and mix and match them in any way I wanted. And I couldn’t do most of these things with svn (or didn’t know how).
But I also think svn had some strong sides, and I don’t think people using it are laughable.
Agreed. I used SVN at work around ten years ago - bliss. Just worked. Always. Fitted so seamlessly into the workflow I hardly noticed it. Git is an obstacle. An endless series of obstacles.
My axe to grind on this: there was a course for teaching programming to 8-year-olds, and to keep it simple they taught it with Python, and Git.
Git is pretty easy to use if you are the only person working on a repo (as I imagine most people learning to program are). It’s only when you have to deal with branching and merging that it becomes painful.
Neither of those is a starting point without footguns. Poor kids.
I think your list misses an item, which should probably be the first: scale. Like, Git just skips basic table-stakes critical functionality in its sync because at the scale of the Linux kernel nobody would be able to afford to do it this way. Like, really, you cannot opt to sync the reflog? Its crash-consistency is written to be correct on a specific filesystem, because performance is that critical at the desired scale. That filesystem effectively no longer exists (ext3 slightly changed the behaviours in some edge cases), so Git is not hard-powerdown-safe anywhere anymore, by the way.
I know Git’s model, I know Monotone’s model (which has more moving parts overall but is outright better due to layering, unless you are optimising performance at the expense of both safety and clarity). No, knowing the model does not make Git easy, because I still need to work around its limitations and script around its annoying assumptions. And I still have to figure out its flags whenever I want to do anything I don’t do every day; knowing the model doesn’t help with that.
Funny enough, I feel like all of the recent issues with Git have been due to the fact that it doesn’t scale particularly well given modern workloads (hence the creation of Git-LFS which to this day I have mixed feelings about). But I agree that Git is heavily optimized for performance at a particular scale (ie. Linux kernel size). I have heard about Git’s lack of crash-consistency (I wish I could find it, but there’s an article out there which basically says only sqlite gets it right for local DBs [Git is really just a local database]).
One aspect I find particularly fun about VCS is the tradeoffs between performance & space – there is really no one-size-fits-all. Git uses snapshots as its primitive, which is space-inefficient if done naively (hence it uses packfiles, which are actually diffs under the hood, for compression), and Mercurial uses diffs as its primitive, which has poor performance (hence it uses snapshots under the hood every few diffs for performance). Then you have Darcs and Pijul, which both use patches (which as I understand it are a formalization of diffs), where Darcs has poor performance and Pijul has fixed those issues, and both, in theory, allow for greater scale.
In any case, yeah Scale is a big topic for VCS and largely the answer seems to be it depends. Algorithmically I’ve heard it said that Git has a worse big O but a better constant, and Mercurial has a better big O but worse constant. Performance curves intersect at some point.
Git-LFS is more about Git being source control, not really general version control. A different coordinate of scale, so to say.
Local bespoke VFS-es for Git are about source control, but they arise because people want kind of Git but beyond the scale where local copies are practical. Of course, Subversion has supported narrow clones since forever but these make no sense for Linux kernel build process, so no support in Git.
Yes, using SQLite is one of the many many many reasons I still use Monotone for my personal stuff. Fossil, naturally (being written by the author of SQLite), also uses SQLite. Pijul tries to implement its own copy-on-write database — available as a generic crate, I don’t think it got much external stress-testing yet. Funnily enough, libgit2 seems to allow making a git client with SQLite (or PostgreSQL if one so desires) storage for the repository, but I don’t know if anyone tried to push that for personal use. I have personally had git repositories busted beyond repair on a hard powerdown.
A pessimistic answer to scale is that Git has network-effect lock-in over the niches it doesn’t really care about serving. And I guess it’s uncomfortable to take the defeatist-sounding path even if it’s the right direction. This prevents an actual solution from arising. Shallow checkouts, narrow checkouts, true last-change tracking without keeping a pristine copy (for large files — still beats the reliability of FS-timestamp-based stuff like Unison, and with checksums and a few full clones you can recover if you need to), first-class «extract this subproject from a subtree into an independently clonable sub-repo»…
I completely agree about the network-effect lock-in. For better or worse, Git is here to stay. Society is much like evolution — it doesn’t necessarily optimize for best but just good enough.
I will say though in Git’s defense, very few companies out-scale Git. I’ve worked with a 50M line codebase and Git had no trouble with it. What I think is needed is tooling to make working with multi-repos easier, because I think with better tooling they solve a lot of Git’s issues (ie. narrow clones).
Working with multi-repos as it currently stands is pretty awful unfortunately.
Python’s slow for some stuff, fast for other stuff. I noticed that processing a very large JSON file I happen to have to deal with was significantly faster with Python than Rust+serde_json - even in a release build.
This surprises me because when I had to JSON encode a file with around 5,000 rows using Django + DRF, CPU was a significant bottleneck.
I would have thought that python would be losing massively to native languages on CPU-intensive tasks, like node is. What am I missing?
Python libraries don’t have to be written in Python – they can be Python wrappers around other languages like C. IIRC the standard-library json module is a wholesale adoption of the third-party simplejson package, which provided both Python and C implementations. And there are other specialized third-party JSON encoder/decoder packages out there like orjson which are built for speed and use native implementations.
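If you’re curious how much of that heavy lifting happens outside the interpreter, a quick-and-dirty timing comparison is easy to sketch (orjson is a separate install, and “big.json” here is just a placeholder path):

    import json
    import time

    import orjson  # third-party package, installed separately

    with open("big.json", "rb") as f:  # placeholder file name
        raw = f.read()

    t0 = time.perf_counter()
    json.loads(raw)    # stdlib decoder (with its C accelerator)
    t1 = time.perf_counter()
    orjson.loads(raw)  # third-party decoder built for speed
    t2 = time.perf_counter()

    print(f"stdlib json: {t1 - t0:.3f}s")
    print(f"orjson:      {t2 - t1:.3f}s")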
I am surprised. I’d expect Python JSON parsing to be reasonably fast, but I would have expected creating all the Python dictionaries and lists to just be mallocing in a tight loop and slow. Can I ask if you were parsing into a Rust struct or having Rust create some sort of equivalent nested map/list structure?
That quote from the first paragraph was fun to read.
I didn’t think you’d scale up to cloud services. I mean, if you do them by the various best practices, they’re pretty simple and straightforward. At least in my experience, both using and building them. Systems programming or even desktop apps seems much harder to me and I would need to “scale up” to do it right.
The article is interesting, but omg that linked article on one week of bugs got me rolling on the floor laughing.
As I read the article it occurred to me that none of the excuses given really let stale-bots off the hook. In every case, leaving issues unread and/or unhandled is IMO less rude than having a bot close them. It’s the difference between saying “I can’t get to this right now” and saying “Your contribution has no value to us now, and never will.”
“I haven’t had time to look at this” tells you “your contribution has no value”?
Sorry if I wasn’t clear. To clarify, leaving a PR unread says “I can’t get to this right now” but doesn’t make any judgement about the PR itself, and leaves open the possibility that the maintainer (or some future maintainer) will look at it. Having a bot close the PR says “this is worthless to us and we will never look at it.”
I assumed we’re talking about issues, not PRs, but I still disagree.
The bot closing the PR did not say “worthless”. It just said “stale”. The PR (or an issue) going “stale” is almost literally a function of the person “not having the time to do this right now”.
In certain cases, not responding to someone’s work can be worse. “Great, I opened a PR months ago and they’re ignoring me!” vs “I opened a PR, but since nobody had time for it, it got marked as stale.”.
I think the underlying issue is often people attributing things to events that are not necessarily there. Throughout the whole comment thread here on lobsters, people are expressing differing opinions. But look at the observable facts:
A thousand people will interpret it a thousand different ways. Should I report a bug if I find it? Am I allowed to ignore it? If I do report a bug, am I entitled an answer? Or at least a look? Or 5 minutes of that maintainer’s time? We just assume, since the project is public, that it is also a community project, and there is an obligation on the maintainer to maintain.
Personally I remember the “good” old times of BOFHs and every question you ask on mailing lists being answered with RTFM. I’m happy that it’s not like that any more - for most things that I need, I can probably ask a question, someone’ll answer me. But I’m still keeping that mindset of old ways of open source - if I have a problem, I am in charge of fixing it. And I’ll send the patch back to source, or perhaps a PR or a comment, but I don’t have expectations that anything will be done about it.
But I understand this isn’t how most people expect things to work.
Basic etiquette.
I think, with time, people will start to recognize that stale-botting is just putting a robotic coat of paint on the same boorish disrespect for others that is on full display in places like Twitter/X and YouTube comments.
If you don’t want to follow the implicit norms of the greater community, state so up-front. You don’t need to write a fancy legal document… just have the backbone and common decency to tell people what to expect of you, rather than ghosting them.
All of the perspectives raised carry an undercurrent of “I’m so special that I don’t have to engage in common decency”. Heck, now that I think about it, I think part of the reason stale-botting irks me so much is that, on some level, it feels like the decision to use a stale bot is passive-aggressive behaviour.
I dunno, I believe users who treat open source projects like they’re a product they’re paying money for, and who demand to be treated as paying customers, are the ones who are boorish and disrespectful.
Better to tell them that than to have a bot close their issue
Not everyone has the mental energy to do that, though.
Which is exactly the passive aggressive behavior the thread starter is referring to. I would say if you don’t have the mental energy, maybe let the issues sit open until you do.
Why is it passive aggressive? They have a project, they are happy to let others use it, but they aren’t (yet) expecting to follow everyone else’s needs. Let them deal with their project how they wish, without having to apologize to everyone for everything in advance.
Because human communication has norms. If I spit in your face, it doesn’t matter if I think it’s not a grave insult. What matters is what the cultural norms we exist in consider it to be.
Yes, that’s my point exactly. Why did the commenter above think that this behavior is passive-aggressive? Because I don’t, and they apparently do.
So yes, human communication has norms, but as far as I know, one of them is “my house, my rules”. Why should users expect that some project runs issues just the way they, the users like it? Why can’t they just abide by the rules of the house? The maintainers could very well be passive-aggressive, but maybe they’re not, maybe they just don’t have a lot of time or interest in chasing bugs.
That’s what I’m referring to - we cannot assume someone has a certain state of mind (passive aggression) based only on the fact that they employ a stalebot. We would need other sources of data. Stalebots are most certainly a norm that some projects use - but not all - so why can we say it’s wrong to have them?
I’m the person who said it originally and I said it feels like passive-aggressive behaviour and that’s why it bothers me so much.
Because you’re publishing your project into a public space with defined expectations. Psychologically, it’s closer to renting a space in a shopping mall than a private home, and people are upset at “how rude the staff are to window-shoppers”.
If you don’t want people wandering into your rented commercial space with expectations of your behaviour, then post proper signage. Just because the “mall” has effectively no size limits and offers space for free doesn’t change that.
Again, this is very situation-specific. I personally had had office space rented when I was freelancing a lot. It was a small space next to a cafe. I didn’t mind people smoking on the cafe’s terrace, but I did mind if they did on mine. (I really didn’t. But it works for the example.)
Now, I can expect that people will come into my commercial space because it’s next to that cafe. And I can also expect that some of them will be smoking a cigarette. Which is also fine. But despite all that, if I ask them to put the cigarette out, or if I stale-bot their stale issue, I don’t think I’m aggressive. I just think I have my rules, and in my (rented, commercial-space) back yard, I should not have to feel pressured to explain the reasoning behind those rules.
Once more: the bottom line is that the passive-aggressive feeling is very personalized. I may really like some behavior where you might not, and we’ll feel different kinds of pressure when witnessing said behavior. That’s why I think that the passive-aggressive comment can’t really stand, as it is overly generic.
The comment about it feeling passive-aggressive was secondary anyway. The primary point was that it was an active violation of the etiquette of the public space they set up shop in.
The funny thing about my mall example is that you’re responding to draft #2, where I now think draft #1 addressed your point better. Originally, I couched it in terms of setting up a library in a mall unit and then sending out a “shush bot” to patrol the segment of hallway in front of it rather than setting up proper sound insulation.
It’d just make people want to kick the bot and resent its owner.
Yeah, the passive-aggressive stuff is not really relevant. I still disagree with your point though. Why is my issue tracker, for my project, public space? Why do passers-by get to dictate my rules?
I don’t think we’ll agree on this. You seem to think that issues should not be automatically cleaned up by stale-bots. I disagree - not in that I think they should be automatically cleaned up, but in that the choice is on maintainers and maintainers alone, not on the users of the project.
But let’s agree to disagree here and move on.
Do you mind if I ask a question or two to try to get a sense of your position on that before we part ways?
I’d like to get a sense for what you believe is reasonable to allow the project maintainers to decide. For example, I doubt you’d argue that GitHub should be prohibited from kicking users off their platform for gross abuses (eg. using a bug tracker to coordinate treasonous or genocidal activities), so there has to be a line you draw and the question is where you draw it.
(eg. Which side of the line is it on for you if maintainers choose to set a “norm” where they verbally abuse people who report bugs?)
Oh, but those two aren’t exactly the same; deciding whether to use a stale bot vs. being abusive are pretty much different things we’re talking about.
To be upfront, I believe it’s absolutely reasonable for the platforms to deal with abuse. I also think it should be required, but it’s hard to set the limits of “what is enough”. But that topic is moderation, we’re talking stalebots here.
If you’re asking me if I would use a stale-bot, probably yes. I frequently have some inner-source or closed source projects, so it’s a bit different, but I would still use a stale-bot if I didn’t have someone who’s already triaging the backlog. That is not applicable here, though, since I don’t think I ever had a project with 300 open issues. Or if I did, they would be neatly “organized” into epics and stories and stuff, and out of my “current sprint” or whatever, and I would actually be paid and reserve the time to triage them.
For open source, I probably would like some automation if I didn’t have the manpower.
Do you mean, what’s my position in general? After some consideration, I believe on most of these things I am quite liberal.
Each person gets to decide for themselves, as long as they don’t impinge on other people’s freedoms or break public rules. It may be sucky, and we have to work on that, but as long as it’s “allowed” by the general rules, we can’t and should not force anyone to do anything a particular way.
But even more, I don’t like “having a position” at all, mostly. I like having observations on how things work. You know how it is in software: an experienced developer will answer the majority of questions with “it depends”.
It is very often a question of a trade-off, not of “should we or should we not”.
In this case (stalebot), I think it is absolutely okay for maintainers to decide to use a stale bot. 100% their choice. Even if it was something that is as widely used as linux kernel - they are the maintainers, and they have to maintain the thing, so they get to pick what they want to do about stale issues.
People picking stalebots are probably not doing that much triage. I’m thinking of e.g. Angular, where I frequently found open issues on my topic that had been untouched for a long time. Sucks; I have no clue what’s with the issue, why it is still open, whether it is being worked on or what. (To be honest, I haven’t visited the Angular GitHub repo in a while now.)
People leaving open tickets probably do a bit more triage, but the already mentioned Angular example shows it’s not always the case.
As an end user in such a project, I am stuck either way. Either the issue is stale-closed, or it’s open but untouched, and I have no clue what to expect. Well, with a stale-closed issue I can probably expect nothing, with the open-but-untouched issue, I probably have nothing to look forward to as well, but there’s some false hope that it may be tackled sometime.
So, for me as a user it is usually irrelevant about what the maintainers decided.
Is it okay for Microsoft/GitHub to say, “no stalebots allowed”? Absolutely. Their company, their rules. We do have a choice (and I mostly do choose to go elsewhere). So now if maintainers want a stale bot, they have to go to GitLab or something.
Again, all perfectly within the bounds.
I don’t think my thoughts on any of that matter, though, they are just observations. Again, this is more a trade-off than a final position - what is more valuable to me, at this time, vs. the maintainer, vs. Microsoft.
I don’t know if I answered your somewhat-open-ended question well enough, feel free to ask more.
I think you answered it fairly well and I’m not opposed to stale bots in general. In fact, I think it’s probably a good idea to have bots that tag things as “stale” for easier filtering for or against. It’s just the closing of issues that is the problem.
Likewise, I’m much less opposed to stale-botting with closure if the maintainers post clear notice so people know what they’re getting into before they invest their time, but I’ve said that before, so let’s not start that up again.
Thanks for the reply.
“my house, my rules”
A house is a poor framing for open source…it lacks the collaborative element.
Or… maybe it’s apt after all, but only if we think of it like a community barn-raising. You need a barn, so all your neighbors help you raise it. In turn, you’re expected to help them out when their time comes.
And if one of your neighbors’ barns gets struck by lightning and burns down, you’d have to be a terrible person not to offer to let them store some of their stuff in your barn until a new one is built.
No, I didn’t mean for the house to be the analogy, I meant, it’s their project, not a community project.
I get that a random third-party can get interested and involved and challenge the rules, but it’s still those project’s rules.
Look at the extremes. If someone came, and raised a pull-request on your latest zig hobby project that rewrites everything in Rust, or PHP, or whatever else, you would probably reject it. You made the technology choice for your own reasons. Perhaps to learn zig, perhaps for the fancy comptime. Rejecting that PR is pretty straight-forward, right? They can offer reasons (“it’s better”, “it’s faster”, “it’s slower” or similar). They can open a discussion. But with or without the discussion or reasons, if you simply reject the pull request with just a note “We decided to do this in Zig”, it would not be a problem, right? It was your technological choice.
I see it the same with the stale-bot choice. It was your project-management methodology choice. You could accept discussions, or be willing to change your potentially-inefficient project management ways, but if you don’t it’s your choice, since it is your project.
I know the two decisions are not in the same area (tech choice vs project management methodology choice). But those are the chosen ones for that particular project anyway.
That is what I meant by “my house, my rules”.
Does my reasoning make sense to you? I mean, I get that you still may not agree, but can you at least accept my point of view on why we shouldn’t expect people to stop using stale-bots, even if it’s inferior?
I understand your framing, yes. I just don’t think it’s correct to ignore the community aspects. And a PR about rewriting in another language is a bit of a straw-man; it’s an exaggerated hypothetical scenario that isn’t the real problem that arises.
Consider it from the perspective of copyright instead. My contributions to another project are under my copyright. It’s standard now to force contributors to assign their legal rights away (via CLAs), but the copyright still remains mine. Without CLAs, every project would be forced to treat their contributors as full participants. CLAs distort that.
Regardless, a contribution of code and assignation of legal rights is a gift, and warrants social consideration, if not legal consideration. It can be mostly, but not solely, your project.
Yes, I’m aware that that was an extreme example, I wrote so. I’m just saying that it’s very clear that there is some boundary where it is okay for me to have my rules for my project, and regardless of the quality, size, value of your contribution, I do not have to accept that contribution. Now, if we can agree to that - that some things are under my control, I just think that most people will have different opinions on what things are mine alone.
I think it’s acceptable for maintainers to decide on the stale-bot rule. I may think that in some cases it’s wrong, in some others it’s the correct way, but in no case do I think that I have any say in their decision. I may provide input, or express my wishes or preferences. I may try to persuade them, or pay them or whatever, to take that decision one or another way.
But whatever they decide, I don’t think it should be considered rude, anti-social, not-in-spirit-of-open-source or any of the things this entire comment section is mentioning - it’s just a simple project management decision, in my eyes, and it is theirs to make.
Taking the time to write up and submit a good bug report (potentially including investigation of relevant code, surrounding behaviours etc) is not treating a project the same as a product you’re paying money for. Having such bug reports closed by a bot is pretty annoying. I don’t want to waste my time submitting bug reports if the maintainers aren’t interested in actually fixing bugs, but this has happened on any number of occasions (not always with stale bots involved, sometimes the bug just sits in the bugtracker forever) with various projects.
Sure, there are lousy users with excessive expectations and demands as well. That doesn’t justify ignoring quality bug reports. If you don’t want bug reports, don’t have a public bug tracker, or at least make it clear exactly what kind of bug reports you do and don’t want and what sort of expectations the submitter should have. As a maintainer you don’t have the right to waste people’s time either.
Let’s say that in this situation, the project doesn’t use a stale bot, but everything else remains the same. Your finely crafted high-quality issue goes unacknowledged and un-actioned indefinitely. The end result is the same: you feel rejected and disrespected.
Not using a stale bot is not going to make a maintainer action your issue faster.
Ah, but having a huge list of unanswered issues is a red flag! You would not contribute time and effort to such a project. And that’s true. So what you need to do now, before submitting an issue, is to check the list of Closed issues, and eyeball if they’re closed by a stale bot or not. This is a tiny bit of extra work but less than submitting a good bug report and then nothing happening.
No, if my bug goes perpetually un-answered, I assume many things (maybe the developer is overworked. Maybe, like me, they have ADHD. etc.) but I don’t feel actively rejected and disrespected.
If they use a stale-bot, it feels actively user-hostile. It says “this person doesn’t even have the decency to leave my report to languish”.
This is pushing the responsibility onto the wrong person. It’s easy enough not to think to check through issues to see if they’re being dealt with appropriately; especially so if there’s a detailed issue template (for example) that makes it look like the project takes bug reports seriously and there are no obvious “website is only held together with sticky-tape” signs that hint a project isn’t really maintained. I don’t think to check for auto-closed bugs before opening a bug report (but thanks for the suggestion, it’s something I might try to do in future); the tendency for projects - even large, supposedly maintained projects - to simply ignore bug reports (or auto-close them) isn’t something I expected until bad experience taught me otherwise, with however much time wasted in the meantime.
The result: I tend to put less effort into bug reports now, if there’s any indication at all that there might not be any interest from maintainers. If the maintainer responds at all, I’m always happy to do more legwork, but speaking as a maintainer myself, this still isn’t optimal.
On the other hand it’s trivial for a maintainer to simply shut the issue tracker down, make it private, or at least stick a note in the issue template (or even the project README) that the issue tracker is for maintainer use only (or whatever), without risk of wasting anyone’s time at all and without generally bringing down the open-source experience. I would do this; I’m not asking anyone to do anything I wouldn’t do myself, and I think it’s a better outcome than either abandoning the issue tracker or auto-closing bugs in it. But if it has to be one of those, at least just abandoning the tracker makes the state of the project clear.
It’s later addressed in the post. Sure it’s “basic etiquette” once you know it. But you may not even be aware of it. The use of stale bots is not necessarily evil. These are just some unspoken rules we’ve come to agree on. And I think maintainers should be educated rather than shat on here.
Also lol @ the flags on this post. I searched for it before and didn’t see it already posted. And I have no idea how “no stalebots” could be on-topic but “yes stalebots” off-topic…
https://lobste.rs/about says
Several times, including as I read this comments section, I have considered writing a bot that automatically keeps issues open for me in the most aggressive projects. It could have rules about this - e.g. maybe it would refuse to work if the stale configuration only kicked in when a label (like “needs info”) had been applied. Stuff like that.
It feels disrespectful to maintainers’ choices, even if I disagree with that choice, which is why I probably won’t do it. But man do stale bots feel disrespectful of my time too, and they also feel frankly like the result of a shoddy engineering mentality.
I think you’re not the target audience. You’ll open an issue, include details, send info, talk to devs. I’ve seen large open source projects having hundreds of issues, most of which are not issues but requests for help. The answers to which can be found in the docs, often.
The conflict here is that you think the issues serve you. And the developers think it’s for them, to plan their project.
Issues are often used both as “things to work on” as well as “bug reports”.
Separating these somehow would let the devs plan the work, and reporters report.
Void does use a stale bot. Unless something is assigned to a maintainer, or is actively being worked on, it inflates our counts of issues with weird one-offs that nobody, including the reporter, cares to diagnose. This hides active issues, or issues we consider important. The stale bot also serves as a reminder to maintainers to consider merges that were e.g. stalled waiting for feedback.
Our numbers being bad is demotivating. We don’t want to bulk-close issues, but “stale-ness” is a useful metric for what should dominate maintainer’s free time. Meanwhile it doesn’t delete the work, so it can be revived should it prove important (a quick search will pull up such issues/PRs).
I do not believe we “lock” issues or PRs after they are closed for being stale. We do not wish to silence people, simply to put things about which nobody cares out of mind.
Why don’t you close these “weird one-offs” manually?
Again, maintainer attention. Free time is limited.
Because clicking one button takes so much time?
Don’t be obtuse. It’s reading the issues and coming to a (possibly consensus) decision that takes time, attention, and motivation.
Ignoring the issue is also a decision.
The problem is you have to ignore it every time you review open issues. And every time you spend a bit of time on that. Even if it’s only a second, such issues pile up and the total time lost accumulates.
If you want to solve it, then it’s good to be reminded about it. If you don’t want to solve it, then close it.
What if I don’t want to solve it but I’m on a team. Maybe someone else will solve it. Or, if nobody shows any interest in 3 months, it falls off
That shows a lack of communication within the team. You should have a unified vision of what you want your project to look like, which determines whether an issue is relevant.
Why do you think that using a stale bot means there is a lack of communication?
Not knowing whether a bug report is relevant for your project shows a lack of communication.
Yes, you said that. I’m asking why do you think it. Does it show a lack of communication, or lack of time on behalf of the maintainer?
I agree that getting an issue automatically closed off as stale sucks. Like getting a canned reply from a job application - couldn’t they at least tell you why they didn’t hire you?
The alternative discussed here is ignoring the user. Like, the issue stands there forever open. To me, it looks like nobody cares. To me, that is the lack of communication. Like applying to a job and never getting anything back. Neither of these are good to me, because they give me nothing personal. But in one case, I’m ignored, in another, I at least got some closure and know my thing is not important any more to anyone. I can either give up hope, or try to reopen or something.
But I understand that this is my own perspective. That is why I am asking, why do you prefer getting ignored over getting the closure in the sense of “we never got around to your bug report. give up hope or take initiative.”
Of course, ignoring the issue altogether is not ideal, but I’d argue it’s still better than having a bot close it – at least other users who have the same problem can see that it has been reported and may be fixed at some point when the developers have more time. But you could make a tag saying something like “don’t have time to fix right now” and put it on such issues. That would also help the “open issues are cluttering my issues list” problem because you could just filter out this tag. And of course, if you know that you’ll never have time for fixing the issue, just close it (you can also make a tag for that).
Hmm. That may partially explain your “lack of communication” point of view to me. You would expect that someone would just never look at all the stale issues if the bot closes them, whereas if the maintainer does it manually, it is assumed they considered it.
I can understand that. But just as you argue that “it’s still better than having a bot close it”, I can also argue that sometimes people have no time, or desire, or habit, to do this. Perhaps they do the triage, perhaps they don’t; some will simply find it better for them to have the bot “stale” the issue if it goes 3 months without activity.
But it’s not a perfect situation in any case. I just don’t think it’s a lack-of-communication problem, but rather a lack-of-time or lack-of-resources problem. Or more often, a “conscious choice on where to focus one’s energy”. Not that I think it’s a better or worse way to do things, I just think it’s a valid choice.
To be clear, my comment on communication skills was just a response to you saying that when working in a team, you might not know whether an issue is relevant. It doesn’t apply in other situations.
No, I meant that you commented to someone else that if nobody responds to an issue within 3 months and it gets autoclosed, that shows a lack of communication within the team. That’s why I asked why you think that using a stale bot means there is a lack of communication.
Personally I don’t think one way or another, I think these two points are not related. Whether or not a team uses a stale bot, and whether or not there’s a lack of communication are separate points, they don’t seem to be co-dependent to me (unless of course there’s a deeper reasoning).
Communication takes work.
Go look at the organization. We have a stale bot, but how recently was the last “closed for stale” issue? The stale bot is for times when nobody actually wants to do the work and the work might be in scope, if someone anywhere was able to prioritize it.
A ports tree/package repository is quite a bit different from general projects.
We have a lot of PRs from a lot of people for thousands of “sub-projects”.
Sometimes, sure. But if it is your default policy then it takes very little effort to ignore a new issue.
If you’re putting in the effort to properly triage the issue then, sure, it’s not much more effort to tag the issue up or leave a short comment, but triaging is work and leaving public comments is social risk: you may embarrass yourself by saying something wrong and/or may invite rude or tiring comments from people.
Related: I’m done with this thread. You haven’t been rude, but I am no longer interested in talking about it.
I couldn’t agree more! Stale bots are just the ostrich approach of sticking your head in the sand.
… if the project uses issues as a bug database. Perhaps the devs are using issues only for their own, internal project planning. Yes, they’re happy to share their work. They’re happy to sometimes pick up a few details, fix a bug, work on things that you report that align with their interests. Other times they will ignore the issue and let it get automatically closed.
I seriously think this is more a problem of people in this discussion assuming that issues are only used for bug reports, and that the developers want to cater to the community. In reality, it’s often not the case. The devs will take input in the form of bug reports, but not let those things guide their development roadmap.
This (and most of the comments so far) seems to neglect one very important detail:
No two readers are the same.
I can think of cases where someone would prefer either the left or the right:
Left:
Right:
In any case, no matter how you write the code, someone will be unhappy with it. You can’t cater to everyone, yet everyone has legitimate reasons for preferring one or the other (depending on their personality OR their current task).
I can think of a few more important details neglected.
For one, how stable is this code? Is heatOven something that’s likely to be changed? Maybe to tweak the temperature, maybe the company frequently changes ovens, whatever. If that bit is gonna be fiddled with a lot, it probably makes sense to isolate it. If it’s actually stable, then meh, inline is probably fine.

That’s a good architectural reason to split the code though, it is not about readability.
But from that perspective, here’s another thing - how big is this code? It’s easy to read things inline like in this example when both of them together fit into the screen. But business-grade code is frequently going to be much thicker. And yes, baking pizza is simple in this example.
But what if he had to go fetch the flour first? And they’re out of whole wheat, can we use another type? Oh, no, wait, is the order gluten-free, which flour then? Oh, no, the flour shelf empty. I need to go fetch another one from the long-term storage. Do I put the order aside, or is the storage far away and I’ll have to throw away other ingredients?
And that’s just the first ingredient of that pizza. What about the toppings? What is a “proper heating temperature”?
In my eyes, it’d be much more readable to getFlour() inline, and deal with the fiddly stuff like business logic and infrastructural logic and retries and all the other fun stuff somewhere else.

This is where the first part (architectural stability) comes back into play. Of course I can make the whole pizza as one long Pascal procedure. But am I going to be able to read the whole thing next summer when I’m called back from vacation because the CEO was showing off to his friends and wanted to have a goldfish pizza with crust plated in ivory and the thing didn’t work?
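To make that concrete, a toy sketch (Python, every name in it invented for the example): the happy path stays inline and readable, while the fiddly fallback logic hides behind one well-chosen function.

    # Made-up stand-ins for the example.
    PANTRY = {"whole-wheat": 1}
    LONG_TERM_STORAGE = {"whole-wheat": 10, "gluten-free": 5}

    def get_flour(kind):
        # The fiddly part: shelf empty? try long-term storage; out entirely? fail loudly.
        for shelf in (PANTRY, LONG_TERM_STORAGE):
            if shelf.get(kind, 0) > 0:
                shelf[kind] -= 1
                return kind
        raise RuntimeError(f"out of {kind} flour")

    def make_pizza(order):
        # The happy path reads top to bottom; stable steps (heating the oven) just stay inline.
        flour = get_flour("gluten-free" if order.get("gluten_free") else "whole-wheat")
        oven_temperature = 250  # the "proper heating temperature": inline, because it rarely changes
        return {"flour": flour, "oven": oven_temperature, "toppings": order.get("toppings", [])}

    print(make_pizza({"toppings": ["mushrooms"]}))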
It’s funny that you say that because one of my references on this topic is this post by Martin Sústrik where he argues that inlining is useful precisely when context can change and you feel that the problem may become more convoluted (because it is hard to design abstractions that will remain valid in such cases).
I am the author of the blog post, and I think this is spot on. It probably just turns out I am the first guy here. I tend to run software in production and be paged to fix bugs (quickly) in my or other people’s code. I wrote here that one of my core principles is:
I do think in high-level systems too but I do not need every single detail to be extracted to a function for this, I can do it in my mind. And having a “shallow stack” (fewer well-chosen abstractions) makes that easier for me too.
Sure, but the common denominator is bigger than we give it credit for. Though it would be easy to argue that even code locality depends on the use case: debugging vs answering a business question is a very good one.