I think the link is broken? This one worked for me: https://github.com/npmaile/blog/blob/main/posts/1.%20flashbang.md
Thank you. I changed the file name shortly after posting and forgot that that would break the link
I guess I’ll clean it up later
always exciting to hear about pijul! i think the biggest blocker to me using it more widely is that it isn’t supported as a way to version control Nix flakes, which requires explicit support in Nix. once I have a bit more free time and motivation, I hope to contribute a fetchpijul primitive to Nix to start building that.
This would be nice indeed. There’s a planned feature aiming to make a hybrid system between patches and snapshots: you would have the ease of use of patches (commutativity etc.) plus the history navigation of snapshots.
This is relatively easy to do, all the formats are ready for it, but I haven’t found the time to do it yet (nor have I found the money to justify my time on this).
Different Bash implementations have subtle differences that make it hard to eliminate inconsistencies and edge cases—and it’s hard to discover those in the first place because Bash is all but untestable.
Am I being gaslit here, or what are the other bash impls besides https://www.gnu.org/software/bash/ ?
I would guess that part is talking about disparate platform and version number combinations of Bash, unless I am also uninformed on some other indie Bash impl
I agree that it’s speaking a bit loosely about platform/version differences, plus utility/env differences.
For example, here’s a dumb edge-case we hit in the official installer around a bug in the bash that ships with macOS: https://github.com/NixOS/nix/pull/5951
Another recent example was that the installer was using rsync for ~idempotently copying the store seed into the nix store. Debian, iirc, lacked rsync, so someone changed it to a cp command. But the flags weren’t supporting an idempotent copy, so a lot of people started getting hard errors during partial reinstalls that would’ve otherwise worked.
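A rough sketch of the difference, with illustrative paths and flags rather than the installer’s actual invocation:

    # rsync -a can be re-run safely: it only transfers what's missing or changed
    rsync -a ./store-seed/ /nix/store/
    # a bare cp has no such guarantee; depending on flags, re-copying over an
    # existing (read-only) store can error out partway through a reinstall
    cp -r ./store-seed/* /nix/store/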
We’ve also run into trouble recently because the platforms we were supporting were all using GNU diffutils. I took advantage of some of its flags for formatting less-programmer-centric diffs for some state-curing actions, and then macOS Ventura promptly dropped gnu diffutils for their own homegrown version without these flags.
Just different versions. macOS ships with Bash 3.2, which is 15+ years old and has subtle bugs around empty arrays and other areas.
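For a concrete example of the empty-array problem (this particular behavior was fixed in Bash 4.4):

    set -u            # common "strict mode" setting
    arr=()
    echo "${arr[@]}"  # Bash 3.2 (macOS): "arr[@]: unbound variable"; Bash 4.4+ prints an empty line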
I have become one of those boring people whose first thought is “why not just use Nix” as a response to half of the technical blog posts I see. The existence of all those other projects (package managers, runtime managers, container tooling, tools for sharable, reproducible development environments, much of Docker, and much more), taken together, points to the need for Nix (and the need for Nix to reach a critical point of ease of adoption).
The Nix community has been aware of the DX pitfalls that prevented developers from being happy with the tooling.
I’ve made https://devenv.sh to address these and make it easy for newcomers, let me know if you hit any issues.
+1 for devenv. It’s boss. The only thing I think it’s truly “missing” at the moment is package versioning (correct me if I’m wrong).
it doesn’t appear to support using different versions of runtimes—which is the entire point of asdf/rtx in the first place. I’m not sure why I would use devenv over homebrew if I didn’t care about versions.
I think the idea is a devenv per-project, not globally, like a .tool-versions file; as you say, it’d be a bit of a non sequitur otherwise
Primarily the bad taste the lacking UX and documentation leaves in people’s mouths. Python development is especially crap with Nix, even if you’re using dream2nix or mach-nix or poetry2nix or whatever2nix. Technically, Nix is awesome and this is the kind of thing the Nix package manager excels at.
I’ve found mach-nix [1] very usable! I’m not primarily working with Python though.
because the documentation is horrible, the UX is bad, and it doesn’t like it when you try to do something outside of its bounds. It also solves different problems from containers (well, there’s some overlap, but a networking model is not part of Nix).
I’ll adopt Nix the moment that the cure hurts less than the disease. If someone gave Nix the same UX as Rtx or Asdf, people would flock to it. Instead it has the UX of a tire fire (but with more copy-paste from people’s blogs) and a street team that mostly alienates 3/4 of the nerds who encounter it.
Curious did you try https://denvenv.sh yet?
https://devenv.sh for those clicking…
No, thanks for the link! This looks like a real usability improvement. I don’t know if I am in the target audience, but I could see this being very useful for reproducing env in QA.
It’s like using kubernetes. Apparently it’s great if you can figure out how to use it.
I’ve given up twice trying to use nix personally. I think it’s just for people smarter than me.
Heh, that’s a good counterpoint. I would say, unlike with k8s I get very immediate benefits from even superficial nix use. (I do use k8s too, but only because I work with people who know it very well.) I assure you (honest) I’m not very smart. I just focus on using nix in the simplest way possible that gives me daily value, and add a little something every few months or so. I still have a long way to go!
The “How it works” section of the rtx README sounds very much like nix + direnv! (And of course, I’m not saying there’s no place for tools like rtx, looks like a great project!)
Nix is another solution that treats the symptoms but not the disease. I used asdf (and now rtx) mainly for Python because somehow Python devs find it acceptable to break backwards compatibility between minor versions. Therefore, some libraries define min and max supported interpreter version.
Still, I’d rather use rtx than nix. Better documentation and UX than anything the Nix community has created since 2003.
Sure. It’s good that a better alternative for asdf exists, although it would be better that such a program wasn’t needed at all.
Isn’t it somewhat difficult to pin collections of different versions of software for different directories with Nix?
Yes it is difficult. Nix is great at “give me Rust” but not as great at “give me Rust 1.64.0”. That said, for Rust itself there are third-party repos that provide such capability.
I think you are pointing out that nixpkgs tends to only ship a single version of the Rust compiler. While nixpkgs is a big component of the Nix ecosystem, Nix itself has no limitation preventing you from using it to install multiple versions of Rust.
Obviously nix itself has no limitation; as I mentioned, there are other projects to enable this capability. While you are correct that I was referring to nixpkgs, for all intents and purposes nixpkgs is part of the nix ecosystem. Without nixpkgs, very few people would be using or talking about nix.
I thought that was the point of Nix, that different packages could use their own versions of dependencies. Was I misunderstanding?
What Adam means here is that depending on what revision of Nixpkgs you pull in, you will only be able to choose one version of rustc. (We only package one version of rustc, the latest stable, at any given time.)
Of course, that doesn’t stop you from mixing and matching packages from different Nixpkgs versions, they’re just… not the easiest thing to find if you want to be given a specific package version.
(Though for Rust specifically, as Adam mentioned, there are two projects that are able to do this easier: rust-overlay and Fenix.)
This is a great tool to find a revision of Nixpkgs that has a specific version of some package that you need: https://lazamar.co.uk/nix-versions/
That said, it’s too hard, and flakes provides much nicer DX.
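Once that tool gives you a revision, you can pin it ad hoc, e.g. (with <rev> standing in for whatever commit hash the tool found for you):

    nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz -p rustc
    rustc --version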
for Rust specifically, […] there are two projects that are able to do this easier: rust-overlay and Fenix
The original https://github.com/mozilla/nixpkgs-mozilla still works too, as far as I know. I use it, including to have multiple versions of Rust.
No I wouldn’t say so, especially using flakes. (It gets trickier if you want to use nix to pin all the libs used by a project. It’s not complicated in theory, but there are different and often multiple solutions per language.)
nix-shell -p nodejs-18_x jq
node --version  # nixpkgs installs the binary as "node", not "nodejs"
jq --version
Docs https://nixos.org/guides/declarative-and-reproducible-developer-environments.html or use https://devenv.sh/
These are some quick ways to get started:
Without flakes: https://nix.dev/tutorials/ad-hoc-developer-environments#ad-hoc-envs
With flakes: https://zero-to-nix.com/start/nix-develop
And add direnv to get automatic activation of environment-per-directory: https://determinate.systems/posts/nix-direnv (minimal .envrc sketch below)
Or try devenv: https://devenv.sh/
(Pros: much easier to get started. Cons: very new, doesn’t yet allow you to pick all old versions of a language, for example.)
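For the direnv option above, a minimal .envrc might look like this (use flake comes from nix-direnv; plain use nix is built into direnv and picks up a shell.nix):

    # .envrc in the project root; run `direnv allow` once to activate
    use flake
    # use nix   # non-flakes alternative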
Hopefully making a little headway in an experiment writing a text editor in Rust using Bevy (inspired by @healeycodes’ blog post).
Curious if anyone knows of any good literature on the subject of designing and implementing text editors (or IDEs, I suppose). I found a few books and papers from the 90s but had trouble finding anything more recent, other than blogs.
That blog post (re)inspired me too! Over the last year or so I have collected many bookmarks of blog posts detailing the implementation of text editors, but I haven’t found much beyond that. So I’m also interested in any information others have to share!
I’m all for telemetry, but it should be opt-in. Even if the toolchain asks you for permission the first time you use it, this shouldn’t be a silent feature that is left as opt-out.
What if it just warns you the first time? It would be annoying if the first time you try to use the Go tool in Docker you have to pass a --agree flag to keep it from breaking.
No, the breakage is prompting for Y/N. It’s why every stupid apt command in a Dockerfile has to include a pointless -y flag.
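The canonical instance, for anyone who hasn’t hit it (illustrative Dockerfile line):

    # without -y, apt-get stops at a Y/N prompt and the non-interactive build dies
    RUN apt-get update && apt-get install -y curl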
I once worked on a project which hit a nice middle ground. Everything got a descriptive name, which was then collapsed into a three letter acronym. So you ended up using the acronyms which picked up a meaning of their own, but they were kind of mnemonic back to the original description if you got stuck.
And it was easier to change the meaning of an acronym as subsystem responsibilities changed.
Can you give some (sanitized/made up, if necessary) examples? I like your description, but I can’t exactly picture how it might work in practice… “Load Balancing Service” -> LBS? Or were your descriptive “input” names less generic than that?
I’m not the person you asked, but in a previous job at an online fashion retailer my team wrote the [Product] “Listing And Details” API, in a service more widely known by its acronym LAD. Later we tacked “Search And” to the front of the service name, and LAD became SALAD. Nobody used the services’ full names. (Other than perhaps when talking to the execs.)
Pretty much what @stig said in their reply. For example the Haskell Language Server is generally called HLS.
And LBS would be a perfect example too.
Everything got a descriptive name, which was then collapsed into a three letter acronym.
Hell, even that might eventually offend someone. Just off the top of my head: KGB, LSD, PMS, NSB might be offensive to certain groups of people.
IMO it’s not wrong to have a cutesy name, as long as you’re not intentionally being perverse or insulting with the name.
Looks a bit like a smaller Rust, maybe?
Stage Manager is designed to work well with Spaces, but when you’re first getting started I recommend that you don’t combine them, at least until you’re fluent with Stage Manager alone.
My experience was that Stage Manager only worked in the first Space, so I gave up and turned it off after five seconds. Maybe I should try again.
One odd note is that when you only have one window in a space, the “stage” seems to go away entirely until another window is open to occupy it. This is good in one sense because it saves space, but it also gives the appearance sometimes that one space is in Stage Manager mode and another is not. Unsure if that’s what you were hitting, but I’d second that you should give it another try.
I like Stage Manager a lot, but I desperately wish it were more keyboard-accessible. It’s so many clicks and drags to do so many of the workflows described in this post, many of which feel like they could be reduced to a single key combination. It also seems to have some vague issues with application focus, where certain apps don’t get full focus back when you put them on the “Stage” from the “Cast.”
This is my impression as well, which is why I just can’t start using it. I’m also being put off with all the unnecessary animations that distract me all the time. I know I can turn on Reduce Motion but then I lose all the other UI animations elsewhere in the system that I have come to enjoy quite a bit over time (if I have to stare at a screen for 10 hours a day, I might as well enjoy a nice UI with nice, smooth animations, otherwise I can switch to Ubuntu or something).
Reimplementing some type theory and PLT papers in Ocaml and Reason to help get my interest in writing my own programming language built back up. Working on adding a “digital garden” style wiki to my Obsidian vault at the same time, which will hopefully allow me to better preserve some of my learnings.
Here’s some other microfeatures I like:
- f"{var=}" meaning f"var={var}"
- {key} being equivalent to {key: key} … it rewards consistent naming
- list[1..^1], which is equivalent to Python’s list[1:-1]. Using negative numbers to indicate counting from the end is cute but can lead to errors.
- obj?.prop meaning something like obj && obj.prop … but it’s definitely a microfeature and I guess is useful.
- x = new X {prop1 = val, prop2 = val}, which is kind of like x = new X(); x.prop1 = val; x.prop2 = val. There’s lots of different constructor patterns like this… I’m not sure which I like best, though I do find simplistic implementations like Python and JavaScript to be tedious.
Doesn’t that C# slice syntax work exactly the same as the Python syntax with negative numbers? Or is that what you were saying?
edit: I think I see what was being said now after reading some comments on this article from another site: C#’s method of using ^1 instead of -1 requires the explicit choice of the programmer, whereas Python syntax could allow a negative index to slip in as an arithmetic or logic bug.
Yes, exactly that… I know I’ve definitely encountered difficult bugs where a negative number slipped in and that changed the basic semantics of slices, but instead of getting an error or the empty list (what you get with, say, a_list[1:0]) you get data that is totally plausible and also wrong.
I have a Remarkable 2, and while I can imagine the Scribe beats it in terms of refresh rate, the latest update to the RM2 gives it some very nice quality of life features for the notebook functionality, like infinite paper mode and text editing.
I believe with the latest update, RM2 supports every single feature OP mentioned.
I must chime in here as another happy user of an RM2. I’ve heard negative responses but it was the most amazing tool for my graduate research work. They’re still adding new features, too – once I thought “wouldn’t it be nice to be able to add blank pages to book PDFs for notes”, and soon enough that became a thing.
The only thing is that I would say the display/pen is not 100% precise (its precision can be influenced by magnets), and there the Kindle Scribe might beat it out.
(Also, RM marks up the device and related items quite a bit. It was still a good investment as far as I’m concerned, but…)
(Also, RM marks up the device and related items quite a bit. It was still a good investment as far as I’m concerned, but…)
I’ve been wondering how expensive it actually is to produce the $1-each nibs. They’re made of felt, apparently, and they’re less than 1g each so it’s probably not the materials cost.
One big feature the Scribe has is a 10.2” 300 DPI screen. I use a PineNote which I think has the same screen as the Remarkable 2 - 10.3” 226 DPI. Looking forward to 300 DPI being a commodity panel in a few years; maybe I’ll be able to swap out the panel in the PineNote directly. It should be noted that 226 DPI is already better than the smallest commercially-available 4k LED monitor (the 24” LG 24UD58-B at ~185 DPI: √(3840² + 2160²) ≈ 4406 px diagonal over a 23.8” panel) so it’s not like it’s lacking, really. You do hold e-readers a bit closer than monitors, though.
How open/hackable is the RM2? Not asking from a purity/religious perspective, but it’s a fair bit of money when converted to dolaridoos and I imagine I’ll never be happy until I can work on my own UI experiments/features without being relegated to second-class-citizen status like in iOS, or just “nope, GTFO” like most other platforms.
While I recently reverted to the stock software in order to prepare for aforementioned software update, the RM2 runs Linux, and ships with an (admittedly slightly old) SSH server that you can access with a password tucked away in one of the “Compliance / Licensing” pages in the settings menu. You are a bit limited in what you can do on the Linux system out of the box, but I previously had installed Toltec which gave me access to a large repository of RM2 specific homebrew software, as well as the entware repository for more general *nix software on embedded systems. From there, you even have the ability to install entirely new “shells” or “desktops” (or however you want to think of it) if you choose.
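If anyone wants to poke at theirs, it’s about as simple as it gets: with the device plugged in over USB it presents a network interface, commonly at 10.11.99.1 (treat that address as an assumption and check your own setup):

    ssh root@10.11.99.1   # password is on the Compliance/Licensing settings page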
You are absolutely right, RM2 has the best experience for drawing overall, and in fact I was about to buy an RM2 until the very last moment, but I realized that the display is prone to some issues; I’ve read in reviews that it can crack easily, so I backed off
FWIW I’ve carried my RM2 around in a bag with only the fabric “folio” case on it since I got it shortly after release and had zero problems with the screen.
Over the last ~10 years, I’ve owned and heavily used a handful of Kindles, two e-ink Android devices, and the Remarkable 2. My only screen failure was on the original “big” Kindle (the DX) and even that happened well after the device was abandoned by Amazon.
lchat (line chat) is a line oriented front end for ii-like chat programs
This means there literally is nothing to screenshot - it’s just lines of text in a terminal. Anything visually noticeable would come from the shell/terminal.
That still can be shown. I have no idea what to expect from this so I won’t bother.
People working in Open Source need to learn a little bit of technical writing and marketing stuff.
I’d argue that people need to learn not to tell other people what to do in their free time. ;)
I also wouldn’t be surprised if at least some suckless devs try to avoid marketing on purpose.
On the technical writing bit: I wish more commercial tools came with proper man pages rather than huge pages of buzzwords or a list of which companies “use” it.
As mat commented below, the author posted his work here, so obviously he wants to show it and acquire potential users. It’s a waste of time creating something for others and not showing its capabilities with a clear, concise explanation. When I open source something for the benefit of the public, sometimes I spend MORE time properly documenting it, making it easier to use, and showcasing what it can do and what value it provides for users.
No need for buzzwords; just describe what the thing does in a couple of sentences/screenshots.
“Marketing” has real value, not just bullshit, though it can obviously be abused. That’s not what I suggested here.
As for the comment on free time: I’m so tired of the idea that if you’re doing something in your free time, you’re not allowed to do it properly or provide high-quality work. I don’t understand that reasoning; it’s just nonsense.
younix posted their work here, so I feel like it’s probably safe to assume they were looking for feedback.
do you have any examples of tools that have a buzzword-laden marketing page but no man pages? this is not a phenomenon with which I am familiar
I’d argue that people need to learn not to tell other people what to do in their free time. ;)
You’ve just dismissed any critique of… literally anything completed outside of wage labor. This is not a useful contribution to any kind of discussion. ;)
How so? You didn’t criticize, you imperatively told people what to do. I did that in the same style as your sentence.
Also despite not quoting it I gave you a reason for why it might not be the way you want it to be.
Just for you to respond that my criticism of your statement is “not useful”.
and marketing stuff.
I’ve seen that claim - poor marketing - leveled at a few outfits including suckless and sourcehut.
I think it’s wrong, and in fact their marketing is excellent. Consider the niche they’ve gone after, and won.
I thought about that, too. By default it’s just “tail -f out” with a “>” input prompt.
The UI of lchat is outsourced to the .filter program/script and the other .title and .prompt files.
If you have something fancy there, then it’s worth a screenshot.
I experimented with the filter/indent.c program and some awk and sed scripts, but none of them is ready to use yet.
If someone creates a cool filter for lchat, I would publish it on the suckless website.
Yubikeys use a master key that is not settable by the user. I don’t trust any key that I don’t generate myself.
If I can’t produce the entropy myself, I don’t trust it.
https://developers.yubico.com/U2F/Protocol_details/Key_generation.html
“During credential registration, a new key pair is randomly generated by the YubiKey […] This master key is unique per YubiKey, generated by the device itself […] “
I’ve found U2F to work reasonably well on my solo2 keys, but since the PIV application is not done yet, those won’t work with age/passage.
getting a certificate error (cert doesn’t cover the subdomain), clicking through the warning works though
Your webserver gives a different cert on IPv6 than on IPv4. The cert on an IPv4 connection is valid for git.icyphox.sh; the cert on IPv6 is only valid for h.icyphox.sh.
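One way to check this from the outside, assuming an openssl new enough to have the -4/-6 flags (1.1.0+):

    # compare the cert served over IPv4 vs IPv6
    openssl s_client -4 -connect git.icyphox.sh:443 -servername git.icyphox.sh </dev/null 2>/dev/null | openssl x509 -noout -subject
    openssl s_client -6 -connect git.icyphox.sh:443 -servername git.icyphox.sh </dev/null 2>/dev/null | openssl x509 -noout -subject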
Ah okay, I see the problem. I had a stray <ipv6_addr>.crt file lying around in my /etc/ssl/ directory, and relayd was picking it up. Should be solved, I think.
I’ve been vaguely following the Gitea news, but I guess I didn’t realize there was any impetus to fork or move away from the existing codebase. I suppose I could see why Codeberg, with its own identity and concerns and policies, wouldn’t want to be subject to some Gitea company, but it’s harder for me to see why Forgejo would be any more valuable to me, as a random operator, than Gitea.
e: Would any of the other commenters planning on moving mind sharing some more of their reasoning, for someone like myself still unsure why I would want to switch to Forgejo?
One reason you might want to switch to Forgejo is that all the Gitea federation developers have joined the Forgejo project, so Forgejo will be getting federation support much sooner than Gitea.
I want Haskell where the dependency tooling isn’t a PITA. Cabal has issues, so does Stack, and haskell.nix makes you download 5GB of data to use a project.
Whereas PureScript has issues with compatibility once you use a non-JavaScript back-end, there will inevitably be a lot of unforeseen issues with the JavaScript back-end, but ultimately this work is good to see.
I mostly use guix these days, but haven’t really had any issue with cabal. Wonder what issues you’ve run into?
Cabal (and Stack too maybe? unsure) has improved a lot in the last couple of years. Sure it “has issues,” but I think most language specific package managers bring some form of baggage or another.
I know, vaguely, that modeling and formal methods have been explained at length numerous times on here, but could anyone point me to their personal favorite introduction to designing and writing models? I found this article compelling, but I’m not sure what choices make for good models, nor how to answer some of the questions raised in the article (eg the verification gap).
Here’s one of my favorite intros: modeling the browser same-origin policy in Alloy. To get a model of the same-origin policy, you have to model browsers, servers, URLs, and HTTP requests. Real-world implementations of these are obviously gigantic, but the model is small and digestible. As @ahelwer mentioned, Specifying Systems is like the holy grail of modeling books. Big +1 there.
I’ll also add that modeling itself doesn’t have to be tied to formal methods. I’ve modeled several calculations at work in spreadsheets, and they come in handy in a few different ways.
I’ve also created different models in some previous posts:
Property-Based Testing Against a Model of a Web Application uses an executable model to express the domain logic of a web app and use that in a test against the implementation - that’s one solution to the verification gap.
In Misspecification I built a model of a business-logic-esque feature and checked a property about it.
I’m biased, but I think these are good intros because they focus on “typical” business applications.
I’m a great fan of TLA+ so I recommend learning that. @hwayne on here wrote a textbook about it and has the website https://learntla.com/; personally I learned from the textbook Specifying Systems by Leslie Lamport.
Everyone new to formal specification worries about the verification gap. It is a question asked at every single intro presentation. It’s difficult to convince people but it is literally Not A Problem. If the core logic of your app changes enough that the spec becomes outdated, then of course you’d want to update the spec for the same reasons justifying writing the spec in the first place! If you’re firing off updates to the core logic of your system without such validation, well… what happens happens. Specs are usually high-level enough that it isn’t as though you need to be changing them with every commit, or even every ten commits.
It’s worth noting that most tools for formal models are tailored to a specific kind of problem (implicitly, if not by design) and so the right tool for the job depends on the problem that you’re trying to solve. For example, we use Sail for ISA modelling, but I wouldn’t recommend it for anything else (though, in theory, it should be able to model most forms of computation over things that can be mapped to bit vectors, to basically anything that you can compute on a real machine).
Whether you get to verification or not, simply having a formal model gives you a proof that a correct implementation is possible. This is not always true, and I’ve seen people build quite complex systems before realising that their approach is unsound and having to throw it away and start again.
If you are proving things about your model then that does add quite a bit to the maintenance burden: redoing proofs when some of the early steps are no longer valid is time consuming. On the other hand, it should also give you more confidence that the approach is correct and reduce the need for changes. I generally like the approach of prototype, then formal model, then real implementation, then test the implementation against the model (tandem verification: do recorded execution traces through the implementation match permitted traces through the model?), then prove properties of the model. If your problem is small enough, then prove correspondence between the model and the implementation.
To put that concretely, for my current project we have a formal model of the ISA that we’ve used with an SMT solver to find some counterexamples to properties that we now have, and earlier only thought we had. Some friends in Oxford are proving that the Verilog implementation corresponds to the ISA spec. Some friends in Cambridge have previously proven some security properties of an older version of the ISA and we hope to rerun their proofs soon. The core of our software TCB is around 300 instructions. We can combine these with the ISA model to export something into a theorem prover and try to prove things about it (hoping to start that next year), but the first step of that is unambiguously articulating the properties that we require there, and that step alone is valuable for the security audit.
You’re probably looking for a small company that does only DNS, that has no loss-leader products, and whose CEO is the kind of person you might find on Lobste.rs. EasyDNS is that. Surely not the only such company.
luadns.com is great as well, I can’t recall whether the ceo is floating around here…
That is exactly what I want (less caring about the proclivities of the CEO though), and it took me to Namesilo a while ago. They’ve since gone down the “push add-on services unless you’re a domain hoarder with hundreds of ‘investments’” route.
EasyDNS also mentions web hosting and email hosting - so it seems like they too are not only DNS/registrar.
Yes. No loss-leading or cross-subsidy, though, and they’ve never asked me whether I might perchance want to purchase their other products.
I don’t care whether a CEO posts here, but I do care that the CEO speaks and understands the language spoken here, and shares our values.