Another one in the long list of JavaScript tools that ditched JavaScript as their implementation language for performance reasons. Hopefully this is more easily usable by the time I have to work on a NodeJS project again, because the performance improvement numbers look incredibly promising.
This raises the question: why write JavaScript on the server when Go is right there?
Watching this industry choose JS in a lot of places it doesn’t have to (i.e. anywhere but the browser) has been strange to see.
Single language stacks are awesome to work in. That’s why I write my frontends in Rust, but I understand TS devs going the other way.
It’s such a bad language and ecosystem. TypeScript barely improves anything there.
Ok, question asked. Why write JS on the server when I could pick Java/PHP/Elixir/Go/Rust/Python/Ruby/C#/Zig/OCaml/Crystal/Nim/Perl/Kotlin/Scala/Lua/Haskell/Clojure?
I think some people are aiming for a single language as a stack. Because JS doesn’t seem to be going away anytime soon and there are so many backend languages, people were/are pushing for JS on the server. There are many backend choices but only one frontend choice. Therefore, to get one language, and end-to-end types, JS on the server. Yes, I understand JS avoidance and all the arguments against. Yes, I rolled my eyes when the server was discovered again.
If I question why I have two languages in my app, people move the goalposts and reduce app features: “I can just concat HTML text to the client using app.pl in /cgi-bin”. Sure, you’ve always had that option; that’s not what I mean. I mean for a certain size/complexity of application. I mean, just as one benefit in the trade-off: if I have Go types, they don’t reach my client, so I have to (or want to) add some contract layer to sync the two. I end up with two languages and some contract between them. In theory, you don’t have that with tRPC/typedjson/TanStack/etc., because your types are full-stack.
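The “end-to-end types” point is easiest to see in a sketch. This is a minimal illustration, not tRPC itself; the `User` type and function names are hypothetical. The idea is that one shared type checks both the server handler and the client rendering at compile time:

```typescript
// A hypothetical shared module, imported by both server and client,
// so the contract lives in one place instead of a separate schema.
interface User {
  id: number;
  name: string;
}

// Server side: the handler's return type is the shared type.
function getUserHandler(id: number): User {
  return { id, name: "example" };
}

// Client side: the same type checks the response shape at compile time.
// Renaming a field in User breaks both sides in one compiler run.
function renderUser(user: User): string {
  return `${user.id}: ${user.name}`;
}

console.log(renderUser(getUserHandler(7)));
```

With two languages, the same guarantee requires a schema (OpenAPI, protobuf, etc.) plus code generation on both sides, which is exactly the contract layer described above.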
So when people talk about Go replacing TypeScript, this is still TypeScript dev. It’s a tool written in Go to write/check/build TypeScript. If you wanted to avoid NodeJS, you would have to look at things like Deno or Bun.
There are many backend choices but only one frontend choice.
With so many languages taking on WASM targets, I think that’s becoming less true. And even before that, there are quite a number of compilers targeting JS. Of course there are drawbacks, and these approaches aren’t always practical for every web front-end project, but I do think “only one frontend choice” is overstating the case.
I can see WASM for tight loops and canvas work. I don’t see it for general stuff that also requires CSS, but I guess we’ll see. If it’s easy to do and aligns with developers’ backend roles and interests, lots of sites should get made that way, and they should look good and work well for users.
I think the reality is that most people are asking that same question and coming to that same conclusion. The theory of “frontend devs can own the API layer” hasn’t really played out as well as people had hoped, and I know plenty of JS developers who are just as happy to write Go if it comes to it anyway.
Why write Go if C is there?
Why write C if hand-optimized assembly is there?
Mostly going through my backlog of client reported issues at work, and hopefully a few more chats with the new manager who came in last week.
On the personal front I will be reading through this series and following along to implement ActivityPub for my blog.
10 years feels like an absurdly short period for a consumer device that doesn’t see much wear and tear; Google really dropped the ball there. At least there’s a way to invoke a debug menu and bypass the auth step as a temporary solution.
Yeah, disabling the certificate check through Activity Manager worked for me, but this is pretty embarrassing for Google.
This should be merged with https://lobste.rs/s/d8ydvt/command_conquer_red_alert_source_code
Oh sorry, didn’t see that :/ You are right
Is there a version of this that I can run myself? I’d like to host it on my own website.
The tool used to generate the pages seems to be open source, not too sure about the styling and such.
Ooh, they aren’t using mandoc!
This manner.php seems to only support -man macros so I wonder how they render -mdoc pages. (I don’t know of any tools to convert -mdoc to -man other than mandoc.)
Very cool stuff! I used to follow along with the development of mold when it was approaching its v1.0.0 release and found the choice of a rather straightforward (even if non-portable) bash-based integration test suite pretty interesting. The test runner for Wild is much more complex but seems well set up for extensibility as and when the scope of things under test starts expanding.
Considered doing something similar, with the same use case of a public git repo for infra, but using age instead of git-crypt.
Even knowing that the files are encrypted, it just feels wrong to post them publicly on GitHub.
Do other people feel the same? Are more people doing this? Any good reasons for or against?
I personally don’t feel comfortable publishing secrets publicly in encrypted form. I don’t have strong arguments against it to offer you; my reasoning mostly comes down to not trusting encryption as infallible, and to defense in depth.
Same. This is somewhat normalized in the NixOS community and drives me a bit nuts.
Encrypted secrets are only secure to the degree that cracking them is prohibitively expensive. Presently, even RSA 1024 keys are still too hard to brute-force. Unless quantum computing has a big breakthrough, or more likely there’s an algorithmic weakness in the current tooling, I doubt we’ll see currently encrypted secrets broken in 15-20 years.
I use sops-based secrets for my NixOS machines and have the secret files in a public repo. My level of comfort with that is mostly rooted in how quickly I can rotate secrets in the situation that I do end up making a mistake, which is extremely easy with age, so it does not worry me too much.
I also have an older repository that uses git-crypt, which I haven’t migrated mostly out of laziness, and which I keep private because of the aforementioned key rotation problem; rotation with PGP is a process I personally find complicated.
It’s not any different to e.g. typing passwords/bank details into a website protected by TLS, where ciphertexts are not considered secret due to the possibility of eavesdropping.
Yes it is. In the TLS use case the attacker has to be in a privileged position in the network, at the right time. If you publish your git-crypt files, the attacker can be anyone, globally, and they can at any time decide to make a backup of your repo on a whim, just in case the crypto gets broken in 25 years or so.
… Just in case your key is leaked later somehow (e.g. an exploit that can read RAM, or file access plus a keylogger).
Moreover, say you publish an encrypted secret at some point; then you can no longer effectively rotate any decryption key you think may have been compromised, across the entire history of your project and its membership (new junior dev X did what with their private key file?!). Someone might have a copy of the file encrypted with a now-compromised key.
Offline attacks 25 years later don’t matter if the secrets in question are being rotated every “24 hours”, and the outer envelope key is also rotated regularly.
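A minimal sketch of that envelope scheme, assuming AES-256-GCM via Node’s crypto module (the `seal`/`open` helper names are made up here): rotating the outer envelope key only means re-wrapping the small data key, while the bulk ciphertext stays untouched.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Seal a plaintext with AES-256-GCM; output is iv || auth tag || ciphertext.
function seal(key: Buffer, plaintext: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), body]);
}

// Reverse of seal: split out iv and tag, then authenticate and decrypt.
function open(key: Buffer, sealed: Buffer): Buffer {
  const iv = sealed.subarray(0, 12);
  const tag = sealed.subarray(12, 28);
  const body = sealed.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(body), decipher.final()]);
}

const envelopeKey = randomBytes(32); // long-lived, rotated regularly
const dataKey = randomBytes(32);     // per-secret, rotated "every 24 hours"
const wrappedKey = seal(envelopeKey, dataKey);
const secret = seal(dataKey, Buffer.from("db-password"));

// Rotating the envelope key re-wraps only the small data key,
// not the (potentially large) secret ciphertext itself.
const newEnvelopeKey = randomBytes(32);
const rewrapped = seal(newEnvelopeKey, open(envelopeKey, wrappedKey));

console.log(open(open(newEnvelopeKey, rewrapped), secret).toString());
```

An offline copy of `secret` taken before rotation is only useful to someone who also captured the data key that was current at the time, which is exactly why frequent rotation shrinks the window.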
The TLS example is a pretty good analogue in the sense that every session has a different key. If we can get closer to that here, we’re doing pretty well.
Where the TLS analogue breaks down is that the plaintext of the captured sessions might still be useful 25 years later. Encrypted secrets used to gain access to systems that hold data are much less likely to be useful 25 years later.
In my case I was storing secrets in git that could not be easily rotated. If you want easy rotation then git is not a good choice :-) I saw it as a stop-gap in the absence of a platform that could provide something like Hashicorp Vault.
A significant concern for me was whether the decryption keys might leak, the risks of staff turnover, things like that. WRT my other comment on this thread, with proper separation of secrets and metadata it might even be possible that devs never need the decryption keys, which could reduce these worries a lot. But I never managed to get that far, and in most cases I would recommend spending the effort on a proper credential store instead.
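The “devs never need the decryption keys” separation can be sketched like this (hypothetical names throughout, with environment variables standing in for a real credential store): the repo holds only plain metadata plus secret references, and actual values are resolved at deploy time.

```typescript
// Config values that are secret are stored as references only; the repo
// contains no ciphertext at all, so there is nothing to attack offline.
type ConfigValue = string | { secretRef: string };

const config: Record<string, ConfigValue> = {
  dbHost: "db.internal.example",            // plain metadata, fine in git
  dbPassword: { secretRef: "DB_PASSWORD" }, // reference only
};

// Resolve references against the deploy environment (a stand-in here for
// a real credential store such as Vault).
function resolve(cfg: Record<string, ConfigValue>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(cfg)) {
    if (typeof value === "string") {
      out[key] = value;
    } else {
      const secret = process.env[value.secretRef];
      if (secret === undefined) throw new Error(`missing secret ${value.secretRef}`);
      out[key] = secret;
    }
  }
  return out;
}

process.env.DB_PASSWORD = "hunter2"; // injected by the deploy environment
console.log(resolve(config).dbHost);
```

The trade-off is that you now depend on the credential store being available at deploy time, but key rotation and staff turnover stop being repo-history problems.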
wait, you can make a keylogger with a kernel feature? Quick, rewrite it all! Kernels really only have one job anyway - running containers - so there should be no problem quickly migrating everybody over to my new Kayland system which is immune by design to this kind of crippling security flaw.
I’m not sure I get your point, but there is no security flaw being exploited here.
It’s a weak meme about Wayland being considered more secure than X11 as a reason for its increasing adoption. I also don’t know why they chose to post that.
hahaha yeah I see
I confess I’m mostly just goofing around, but there are some seeds of reason under here: eBPF was originally intended for (as I understand it anyway) pretty specialized network tasks, but has since expanded and has become pretty general purpose with a lot of access points. You wrote some Rust that compiles to it, reads keys, and piggybacks them onto DNS requests. Pretty sure this wasn’t the original intent of the feature, but it didn’t even take that much code to do! (I think your code is clever and pretty easy to follow even though I don’t know much about this area. These things amuse me, which is why I clicked the link.)
I’ve seen people worry that eBPF is too general now and that can have security implications (pretty sure it used to be usable by non-root users too, and they decided to lock it down a bit so your code has to sudo), and I’m sure there are some people who would legitimately like to rip it out, but these things are just too useful. Even if it was ripped out, there’d be pressure to put it (or something like it) back anyway! For every misuse, there’s plenty of good uses. It isn’t really a security flaw or a useless feature!
I like that eBPF has evolved to cover more areas of the kernel; that makes it easy to add new functionality. And as you said, “For every misuse, there’s plenty of good uses” sums it up perfectly.
If you were making a case for a microkernel, you’d be correct. Microkernels are superior in both security and robustness.
Mostly just going through my blog reading list, figuring out how to sell my old PC, and scavenging parts for a new homelab.
re: age - the author calls out in the post that there are two implementations (age and rage), which is cool. But for me, the most important aspect is that age actually has a specification: https://age-encryption.org/v1 - so technically anyone can implement age in any language in which the crypto primitives are available. And once you are done, you can validate your implementation with https://github.com/C2SP/CCTV/tree/main/age
This is definitely a huge plus. I helped write a Kotlin/JVM implementation for use in Android Password Store, and we got great value out of the standardized test suite, which paired nicely with JUnit’s dynamic tests feature to give us robust coverage for very little code.
I am aware that you have retired from the Android Password Store project. (Thank you for your work on it. I use it every day.) But did age support make it into the app before that?
Unfortunately not
This line caught me off guard 😂
I’m a long-term K-9 Mail user on Android. I installed the beta and didn’t really notice any difference. Wondering now if this means I’ll have to swap across.
They’ve gone to great lengths to ensure the same codebase can continue producing K-9 Mail and Thunderbird builds so I don’t think there’s an immediate need to switch. If the time comes when K-9 Mail gets sunset, they already have proper support for migrating your configuration between the two so it should be relatively painless.
they already have proper support for migrating your configuration between the two so it should be relatively painless
I switched from K-9 to the Thunderbird beta a month ago on Android. For the easiest config export/import at the time, you had to be running the latest K-9 beta, which I side-loaded. I switched yesterday from the Thunderbird beta to the release, and it imported the config directly from the app. I had to go through OAuth with Google; app tokens from other email providers came across. OpenPGP config didn’t, but I can’t remember the last time I sent an encrypted or signed email from my phone, rather than just verifying a signature, so it may have been broken before that.
Hi, I’m the guy in the picture! I’ll keep an eye out for any questions here. Short PSA attached below.
Is the app still safe to use?
For the most part. The last stable release (v1.13.5) uses OpenKeychain for PGP and SSHJ 0.31.0 for SSH. Usage of the app can be considered secure until a vulnerability in either component is discovered. The nightly build uses SSHJ 0.39.0, with BouncyCastle 1.78.1 and PGPainless 1.6.7 for PGP; similar guidance applies there.
Depending on your threat model, installing GnuPG and the pass CLI in a Termux environment might be more appealing.
Is there a fork coming?
So far I’ve only seen one person fork the repository, and their first step was to remove a bunch of things including Autofill, so I don’t think a general-purpose fork is being cooked up just yet. If someone feels inclined to get started on one, feel free to contact me and I’ll do my best to help you along.
Oh no! What is going to be the replacement in this space, as this was a good, often-used app on F-Droid? I mean, it feels like it’s mostly complete and there aren’t many more features to add, but how long can I expect it to stay compatible with Android versions?
The last stable release I tagged ~3 years ago already isn’t compatible with Android 14 and above, due to requiring broad access to device storage.
The latest nightly build does work on current Android releases by dropping support for Syncthing-like workflows, but it uses a new PGP backend built on PGPainless that is missing support for smartcards.
Interesting. Seems my usage would be labeled as “basic” so I hadn’t noticed.
I announced the decision to archive Android Password Store over the weekend, so mostly dealing with the work of communicating that to as many users as I can and then cleaning up the bits and pieces scattered around the web such as Crowdin and OpenCollective.
This is the official advisory for the security issue in ACF that WordPress.org deemed severe enough to stage a hostile takeover of the extension: https://wordpress.org/news/2024/10/secure-custom-fields/
Five days after ACF’s own update! Extraordinary.
Sadly, I agree with what Graydon said recently in this thread: [redacted]
Edit: didn’t realize that the toot is private :/
That link does not load for me.
His account is private; that thread contains a link to https://www.internethistorypodcast.com/. I don’t want to paraphrase anything he said incorrectly, so I’ll let someone else seek permission to reproduce his comments here.
About the inability to switch in a perlless system, is the system.switch.enableNg option not capable of it yet? I use this in my machines but haven’t attempted to use the perlless module yet.
This post was written back in March, when the perlless switcher wasn’t released yet. I haven’t tested the new switcher yet (currently working on other parts of the project), but I’m hoping it’ll work well once I get to it!