I’m astonished that they can store and search all the messages on all Discord instances in just 72 nodes with 9TB storage each. I’m on a few and it seems like some people post thousands of messages a day! And they appear to stay forever.
(I work at Discord on the Persistence Infra team)
We also run many Elasticsearch clusters to handle searching through all of the messages.
I would love to hear your thoughts on managing such a big ES cluster. The few times I’ve bumped into ES I felt like it was a very capable database but always a total bear from the operations side, and I imagine your cluster is 10x bigger than anything I’ve dealt with.
Is that also why you don’t allow for exact search (which can be extremely annoying), being a different kind of index?
I’m quite surprised it’s that big. Text is small. Wikipedia is 86 GiB for all of the text of the current English version. You could easily index and search that in RAM on a single moderately powerful server. I hadn’t realised how much more people type in Discord.
I would definitely expect Discord to have more text than Wikipedia. Chat is append-only, whereas Wikipedia articles are edited so don’t necessarily grow with the number of edits. People also socialize more than they write encyclopedia articles.
IIRC Wikipedia stores all edit history and lets you diff the versions on the website? But yeah that wouldn’t count against the downloadable dump size.
Discord has over 100 million active users, and you also have overhead per message. That adds up fast with that many people.
There’s Njalla, good for your privacy: https://njal.la/
Which, to be clear, isn’t technically a registrar but rather a privacy provider sitting in between you and a registrar. I highly recommend it for personal websites at least though. It completely isolates you from ICANN nonsense regarding personal info, there’s nowhere to put your physical home address.
Helix with this config. Coming through Kakoune rather than vim directly, I have added all the things to Helix’s normal mode to avoid ever needing select mode.
"{" = "goto_prev_paragraph"
; "}" = "goto_next_paragraph"
is a habit all the way from the vim days though :)
Are you aware of specific parts of std that are bloated due to generics?
I’m not an authority on this, but here’s what I know. The base overhead that std adds to each binary is primarily due to the backtrace functionality. Statically linking the code to walk the stack, read debug info, etc. adds considerable overhead. This code can be removed, but only using the unstable build-std feature of cargo and the unstable panic_immediate_abort feature of std, as documented in min-sized-rust.
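For reference, the build-std route can be sketched as a cargo config (nightly toolchain only; the feature names below follow what min-sized-rust documents, so treat this as a sketch rather than a stable recipe):

```toml
# .cargo/config.toml — nightly only (sketch)
[unstable]
# Rebuild std from source so the optimizer can see and prune it
build-std = ["std", "panic_abort"]
# Replace the panic machinery (formatting, backtrace) with an immediate abort
build-std-features = ["panic_immediate_abort"]
```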
I think most people complaining about the size would like to see some way to include only the parts of std which your program actually uses. Which, IIRC, is something we won’t see happening any time soon.
It’s already there… With LTO, only the part of std used is included. Two parts of std are problematic for this: one is backtrace, as already mentioned. Another is formatting: it uses dynamic dispatch so unused code may be included due to analysis conservatism. Otherwise, all unused parts of std should be removed by LTO. If not, please file a bug.
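Concretely, the settings involved live in the release profile of Cargo.toml, roughly like this (a sketch; per the Cargo docs, `lto = true` and `lto = "fat"` mean the same thing, while `"thin"` trades some optimization for build speed):

```toml
# Cargo.toml (sketch)
[profile.release]
lto = "fat"        # same as `lto = true`: whole-program LTO across all crates
codegen-units = 1  # a single codegen unit gives LTO the most room to prune
```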
Oh interesting to know, didn’t realize LTO does that already. What exactly do I have to set for that? Because I’ve seen various values for lto, including true and “fat”, and multiple answers on whether it makes a difference.
I do wish there were more convenient selection-mode shortcuts for “beginning of line” and “end of line” than home and end, something closer to the home row. Might have to conjure forth my own?
Check out gs, gl, etc. Lots of useful stuff under g for this!
Big challenge going from vim is that single-command jumps (e.g. |, $) are missing - I do think the discoverability and consistency of “g” wins out (ge - go to end of buffer (last line), gd - go to definition, gh - go to line head (start), etc.). (Discoverable, as g pops up a menu of g-what.)
yeah, I hit $ a lot when switching from Vim to Kakoune, although thankfully the transition to Helix was easier. I actually appreciate having to hit fewer modifier keys in these situations. The same number of keys in a sequence instead of a chord is just nicer for my hands.
I did try it first - but concluded it just got in the way of using Helix properly. On the surface it might seem a bit like “a different vi” - but in reality it’s quite different. And I think, probably, better (too soon to tell).
The thing that really put me off Helix is that I can’t find an ergonomic way to go from initial to initial among different words.
In vim I’d just do w, in Helix it’s something like wl? Or wb? I’m not sure. Either way it seems like a strange thing not to provide a single-key command for.
That and the lack of an ecosystem. At this point, Neovim with its lua plugins is starting to look more and more like an IDE, and Helix feels too barebones in comparison.
You can customize the keymap, but I’m not sure what the use case for “initial to initial” would be here? With a selection-first system, you do just use w to jump around words, keeping the last word selected and the cursor on the space, and then you can e.g. cut that word with d or whatever.
Not needing to have an ecosystem to manage has been a blessing. It’s just a simple tool with a simple short declarative config, without extensibility to manage, and yet it has all the features I ever needed and more. That said, some wasm based plugin support is being prototyped AFAIK.
idk, say for example I want to make a function public in Go, and want to capitalize it, so from foo to Foo. In vim I’d just go to the beginning of the word (maybe by pressing w a couple times) and change one letter; in Helix I’d have to press wb.
Just seems counter-intuitive to me. I guess I could customize the keybinding, but then if I start doing so, what’s the selling point of Helix?
Indeed “wc” to replace/change word via insert, or “wi” to insert before etc. (and “b” for previous word).
I’d be curious to hear what vim usage of w-do_something is missing? We all use such different subsets of vim…
I use kakoune, which is a closer cousin to helix than vim. w indeed takes you to the beginning of the next word, with a-w (I don’t love the alt keybindings, but the consistency of extend on W is hard to pass up) taking you to the beginning of the next WORD. When I came from vim, I might have been confused by having my cursor be after the word instead of at the beginning, but since different actions act on the beginning of the selection and end, it’s like you always have two cursors. So, wi is insert at beginning of next word, same as vim, but you also get wbiFOO<esc>aBAR<esc> to surround your word with FOO and BAR. Admittedly, it’s a little weird that I have to do wb, but vim would be about the same: wiFOO<esc>eiBAR<esc>. If you’re in the middle of a word, you can do <a-i>wc, which changes the word from anywhere in the word, similar to ciw, except that I can use that selection after doing my edit, where in vim I would have to repeat the object. I’ll note that chorded commands in kakoune aren’t my favorite, although I don’t have a solution. (Note that <a-i> is a chord also.)
Kakoune isn’t a better model and I’ve never used neovim, although I have a fair amount of vim7 and vim8 experience. I’m happier both when using kakoune without extensions and when adding new plugins. I spend less time configuring my text editor than I did with vim, and when I am configuring my editor, it fits my mental model much better. One of the things that helped me switch is that I found kakoune much more responsive / faster than vim8, even without adding many plugins, especially for mass edits. I did want to make the point that I don’t think either editor is substantively worse on their core navigation, except for buffer selects and changes. I much prefer doing buffer selects and changes to the macro composition / %s that I used to write. When that comes up, it is a big productivity / ease gap.
That and the lack of an ecosystem.
Great example of the fact that different people want different things, for me not needing an ecosystem is one of Helix’s strengths. XD But I also am pretty happy dealing with plain text without IDE-ish features, most of the time. Different type of workflow!
I appreciate that these custom-built keyboard posts are now adopting modern ergonomic designs. Seen too many minimalist flat slabs with retro colours and the loudest keyswitches anybody’s ever heard of. If I wanted to destroy my wrists like that it would be less work to just hit them with a hammer.
Well, mine does have the loudest switches too :) but yeah, for a flat slab typically you don’t need to design a custom PCB because there are so many existing ones already. A weirder layout like mine is a good reason to get into PCB design.
I do love the feel and the sound of blue switches.
I recently started using a red switch keyboard (for gaming at a vacation condo) and I really liked the feel of that, too. Very quiet.
Interestingly, when my son uses my blue switch keyboards, it sounds like barbarians invading, and everyone complains, but when I use them, no one notices the sound. I’m pretty sure that the “attack” that one has with the keys changes the sound quite dramatically. I learned to type on one of those IBM Selectric (?) typewriters with the ball head and the unmistakable audio that comes with it, and all my early keyboards for computers were the clicky IBM style keyboards, so the blue keys are my touchstone back to those fond memories.
Amazing DIY effort!
Check out keymouse (dot com) for the original split with a trackball. It used to be wireless (that’s the one I run), but they gave up on that and went back to wired.
Thanks! Ha, keymouse seems… interesting. I’m not sure I would actually use two trackballs, heh. I’d probably get used to only using one with my right hand.
It took more than a fair bit to get used to, but an eye tracker combined with a 3D mouse - scale it down and combine them in a way like this (https://hackaday.com/2017/07/27/unholy-mashup-of-spacemouse-and-sculpt-keyboard-is-rather-well-done/ ) and it might be something. Twisting left/right is a nice scroll up/down, lifting/pushing works as zoom in/out, and tilting pans. The eye tracker gets you the coarse initial warp-to point while the 3D mouse adds the missing precision.
I have horizontal scroll and vertical scroll on two different layers, so one thumb press (not even sure which one, since it’s all muscle memory now) and I’m scrolling away.
Are you saying that you actually use an eye tracker for this? And it works well? Holy cow that is freaking amazing!
I have both the original Alpha model and the Track. Both are good, but I far prefer the track. (I have had serious RSI issues over the years.)
I do mostly use the track ball in the right hand, but I do a bit of both. It’s pretty neat … worth trying if you can get your hands on one. And Heber (the founder) is a tech guy who has a passion for it … definitely not a get-rich-quick scheme 🤣
How do you like the thumb clusters? With the distance between the thumb keys and the trackball, it seems like you have to stretch the thumb quite a bit to move from one to the other?
(I thought I had pretty much seen it all after following /r/ergomechboards for a while, but somehow hadn’t seen keymouse yet…)
I have the original and not the current. Thumb clusters are good. Personally, I’d drop the furthest thumb reach (the down and away button) and add two proper keys at the top of the existing cluster. Also, I don’t use the outside pinky column at all (I have a completely custom layout). Any extra reach is really hard on RSI, so I do 99% on 3 rows (I rarely use the numbers row) plus the thumb keys, with only one-key horizontal stretch on either the forefinger (easy) or pinky (less easy). I generally write 50-100kloc per year, plus lots of non-code stuff. No RSI in years now.
Great to hear! I have switched to a Kinesis Advantage a couple months ago and my wrist pains have disappeared. But I am still very much interested in designs that push the state of the art forward.
Kinesis are great keyboards. I have 2 of the Advantage Pros (with foot pedals) 🤣 and that’s all I used for many years.
It’s been a while since I’ve used them, but the foot pedals are mainly for modifier keys or changing layers (e.g. accessing Kinesis macros). The Kinesis models that I have are a bit old now, so they don’t have the amazing level of programmability that you’d expect today from new keyboards with built in ARM chips or whatever. But I did use them to write a few software products (i.e. I personally typed in many hundreds of thousands of lines of code) without inflaming my horrible RSI, so I have a great deal of love and appreciation for Kinesis 😊 … so if you’re in doubt, always give Kinesis the benefit of the doubt by default.
On my Keymouse setup, I’ve added a dedicated layer for each hand to put all of the modifiers (ctrl, shift, alt, cmd) on home row. So one layer turns the left hand into a dedicated modifier set (and leaves the right hand unchanged), and another layer turns the right hand into a dedicated modifier set (and leaves the left hand unchanged). Then I have a layer for num pad (left hand is all modifiers, right hand is num pad), and a layer for function keys (left hand is all modifiers, right hand is all function keys). Here’s my layouts as of a year ago: https://1drv.ms/w/s!Al7tOqyQS2IveWlYnwO2D9msNHE
(Edit: I should explain a bit about the layers. I often have to type crazy combos like shift-cmd-8 or alt-command-f7 or whatever. This is the IDE keystroke hell that programmers have to deal with sometimes to avoid the mouse, e.g. in the amazing IntelliJ IDEA debugger.)
Coincidentally, this week I had hand surgery (unrelated to RSI) and I now have a literal club hand wrapped with an inch of protective stuff with a few fingers semi-sticking out, so for the first time in years, I’m using a normal keyboard 🤣
That’s a pretty nice New Year’s gift! And my phone was included on day one, unlike with 19 when it took months! Though this seems to be because Android 13 isn’t such a big change.
nothing else fills the niche for ‘C/C++ but strongly safe’.
Swift and Nim both have pretty good safety properties, compile to native code, and interop pretty well with C APIs. But neither offers as clearly delineated a boundary between safe and unsafe code as Rust does. (A few years ago I argued for adding something like Rust’s unsafe keyword to Nim, but most of the community opposed it.)
Go has good safety, but the NIH attitude of its runtime (aka “let’s pretend we’re running on Plan 9”) makes it unpleasant to interoperate with non-Go code.
We need a replacement C because in general we can’t write safe C/C++ at scale.
Agreed, although I believe C++ with modern idioms and tools is a much, much safer language than C. By which I mean avoiding new and delete, using RAII, enabling and paying attention to most compiler warnings, and using the UB and address sanitizers during development.
I like Go, but it lacks the concurrency safety that Rust has, and also the runtime minimalism common to Rust and C (partly because Go has both garbage collection and concurrency support). There are a lot of projects that either need or want C style minimalism in resource usage and runtime requirements; they’re not going to adopt Go. Go is also hard to partially adopt due to its runtime requirements among other things, while Rust offers a migration path of gradual and partial adoption in a codebase.
(I’m the author of the linked-to entry.)
I think concurrency in Go has been somewhat overblown, and over time it’s become clearer that while goroutines are very cool, they’re a very blunt instrument, useful for a smaller set of problems than originally thought.
In particular, Bryan Mills has a very good talk on this: https://drive.google.com/file/d/1nPdvhB0PutEJzdCq5ms6UI58dp50fcAN/view (I would note a number of people in the YouTube comments say the examples are hard to understand… IMHO this isn’t Bryan’s fault but rather an indicator that the underlying problem of concurrency being difficult to understand has been there the whole time; writing this stuff safely in a simple manner in Go is quite hard).
My view is you either need goroutines at a very low-level (working directly with I/O) or a very high level (spawning multiple listeners of something like an HTTP server). You’ll generally find a library that does these things better than you can, and most people most of the time shouldn’t be reaching for goroutines at all.
The change that has really modified the state of the concurrency debate is container orchestration like Kubernetes or managed orchestrators like Google Cloud Run. Scaling horizontally is almost certainly cheaper in the long run than the wasted engineering hours debugging some concurrency failure (Go’s TSAN does help with this a lot but it’s non-deterministic so you do have to hope for the best there).
TL;DR don’t reach for Go because of goroutines, reach for Go because it is good at solving some other kind of problem you have.
Scaling horizontally, aka “throwing more servers at the problem”, isn’t applicable in a lot of domains … like client-side or embedded software, or anything else not involving a server.
Go has good safety, but the NIH attitude of its runtime (aka “let’s pretend we’re running on Plan 9”)
It’s not NIH if they are reimplementing the standard APIs from an existing operating system (one that was literally invented in the same place, by the same people, as UNIX). Use your words instead of throwing initialisms around.
I said NIH because Not-Invented-At-Bell-Labs takes longer to type.
It’s not the APIs I’m referring to, rather the ABI that makes Go kind of an alien in the OS. It makes normal debugging/profiling tools kind of useless, since they can’t even get a stack trace, and it creates overhead and complexity when calling between Go and anything else.
Swift has a big standard library, and runtime code for managing object ref-counts. But so does Rust (viz. the Rc and Arc types).
I’m not denying Rust is currently better positioned for working in embedded environments. But I don’t think the difference is as stark as you say.
Ah, TK1: that weird generation with both a 32-bit and a 64-bit variant :)
Hopefully this gets to the level of Linux on Pixel C which even has nouveau working..
Interesting. The link you mentioned seems to have an old Linux kernel version (over 4 years ago). Does it work from mainline?
I’d assume bcrypt and scrypt, most implementations of which set good cost parameters by default or as a lower bound (and higher depending on CPU speed). Both bcrypt and scrypt have memory requirements in addition to CPU requirements, making it more costly to attack with specialized hardware such as ASICs and GPUs.
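To make the cost parameters concrete, here’s a minimal sketch using Python’s stdlib scrypt binding (the parameter values are illustrative, not a recommendation):

```python
import hashlib
import os

# scrypt cost parameters: n is the CPU/memory cost factor, r the block
# size, p the parallelization factor. Memory use is roughly
# 128 * n * r bytes — about 16 MiB here — which is what makes
# ASIC/GPU attacks expensive compared to a pure-CPU hash.
salt = os.urandom(16)
key = hashlib.scrypt(b"correct horse battery staple",
                     salt=salt, n=2**14, r=8, p=1, dklen=32)
# `key` is a 32-byte derived key (dklen=32), suitable as input to a cipher.
```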
No, bcrypt/scrypt/etc are still fundamentally solving a different problem, and would essentially just be a PBKDF if used as I think you’re suggesting. Obviously using either of these options would be superior to not doing so, but the actual secure solution here is policy-gating via HSM.
the only problem is that the HSM is something you can physically lose, and a passphrase is in your brain forever (modulo amnesia…)
with how Apple/Google sync FIDO2 passkeys between devices, it is a multi-device system that gets the same keys decryptable by multiple HSMs, but such a system either is completely non-recoverable if you lose all devices simultaneously, or is doing “normally” (non-HSM) encrypted “cloud” backup (I’m not sure which option they picked, tbh - probably the first one?)
the only problem is that the HSM is something you can physically lose, and a passphrase is in your brain forever (modulo amnesia…)
If you are a company providing a service like LastPass, you should not be in a position to lose the HSM.
with how Apple/Google sync FIDO2 passkeys between devices, it is a multi-device system that gets the same keys decryptable by multiple HSMs
I can’t speak for how google’s passkey syncing works, but I would assume/hope the same as what I’m about to say. Apple’s works over the synchronized keychain mechanism, which is fully end-to-end encrypted with actual random keys, not HSM based (we’ll circle back in a bit). When you add a new device to your apple account, that device has to be approved by one of your other existing devices, and it is that approval that results in your existing device wrapping the account key material to the new device’s keys and sending those wrapped keys to the new device. Once the new device gets that packet it can decrypt the remainder of the keychain material. Each device keeps its own private keys and the account key material protected by the local secure environment.
Note that even the old non-e2e encrypted iCloud backups did not backup keychain material, so compromising the backup infrastructure would not provide access to passwords, passkeys, etc. The concern of course is that for many governments/organisations trawling your back ups is pretty much all that’s wanted, as it just means they have to wait for a backup to happen rather than being able to decrypt in real time. Happily e2e for everything is now an option for apple’s cloud services.
Historically, losing your account password (and so needing an account password reset) would as a byproduct mean losing your synced keychain, so if you didn’t have it locally, the data was gone. There is a last-ditch backup called something like “iCloud Key Vault” or some such, which is the marketing name for the large-scale and robust HSM setup required given the data being protected. These are policy-gated HSMs that devices can back up some core key material to (Ivan Krstic has a Black Hat talk from a few years ago that goes over them, but essentially you take a bunch of HSMs, get them to all synchronize with each other, then blend the admin cards and have them all roll their internal keys so there is no way to install new software, rely on previously recorded key material, or install compromised hardware into an existing vault).
a company providing a service like last pass, you should not be in a position to lose the HSM
Oh… you weren’t talking about having the HSM local to the user?? Server side HSM doesn’t seem to make sense to me for a password manager where decryption MUST happen on the client?
There are two levels:
Recovery path - this is an HSM + policy system where the user key material is protected by HSM policy. This is dependent on the HSMs being configured to ensure that the HSM owner does not have access to the HSM’s key material. This is why we talk about an HSM’s security model having to include physical access to the HSM.
Protecting user data: PBKDFs are weak due to generally terrible user provided entropy, so what you do is you receive the user’s data encrypted by the user’s relatively poor entropy. Rather than just storing that, you ask your HSMs to encrypt it with an actual key gated by policy on something like the user’s account password.
The recovery path is obviously optional, but the latter is needed to defend against “hackers downloaded all our user data and that data is protected only by relatively weak entropy”.
The ideal case is a user having multiple devices, and then having new devices receive decryption keys from the existing ones. That means the data that gets uploaded to the servers for syncing are always encrypted with a true random key, and the concept of a “master key” ceases to be relevant.
I’m not suggesting anything. I merely pointed out what I think the person responding probably referred to.
The correct thing to do is to use the password + HSM to policy-gate access to the encryption keys. This is how modern devices protect your data.
Your device - gated by a passcode (phone) or password (decent computer/hardcore phone :D) - includes an HSM that Google calls a hardware-backed keystore and Apple calls a Secure Enclave (there’s also the similarly named “Secure Element”, but that is actually another coprocessor that runs a cut-down JVM for payments :D).
Anyway, in all implementations the HSMs use internal [generally to the cpu itself] keys. These keys are then used to encrypt all data being stored via the HSM. Retrieving the data is done by providing credentials (your password, etc) to the HSM, the HSM then policy gates access, for example the HSM itself counts attempts and enforces time outs. Because the HSM is performing this gating itself, it doesn’t matter how much cpu power the attacker has: there’s no precomputation, hashing, etc they can do, and having access to the HSM-encrypted data is not brute forceable because the HSM is encrypting with a true random key, not something derived from some kind of guessable password.
If LastPass folk had done this, then downloading the data would have been useless, and even a fully local compromise would still not have been able to get raw data, as the attacker would still be forced to ask the HSM for data by providing username+password combos, and so be subject to the same attempt-count and timeout restrictions as a non-local attacker.
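To illustrate the policy-gating idea, here’s a toy model (all names are invented for illustration; a real HSM does this in tamper-resistant hardware with proper key wrapping, not a Python dict):

```python
import os
import secrets

class ToyHSM:
    """Toy model of HSM policy gating (illustration only, not real crypto).

    The data key is true-random and never leaves the "HSM"; access is
    gated by a credential check with an attempt counter that the HSM
    itself enforces, so an attacker can't brute-force offline.
    """
    MAX_ATTEMPTS = 5

    def __init__(self):
        self._store = {}  # user -> [credential, data_key, failed_attempts]

    def enroll(self, user, credential):
        # True random key: nothing about it is derivable from the
        # (guessable, low-entropy) user password.
        data_key = os.urandom(32)
        self._store[user] = [credential, data_key, 0]
        return data_key  # the client uses this to encrypt its vault

    def unwrap(self, user, credential):
        entry = self._store[user]
        if entry[2] >= self.MAX_ATTEMPTS:
            raise PermissionError("locked out: too many attempts")
        if not secrets.compare_digest(entry[0], credential):
            entry[2] += 1  # the HSM, not the client, counts failures
            raise PermissionError("bad credential")
        entry[2] = 0
        return entry[1]

hsm = ToyHSM()
key = hsm.enroll("alice", b"hunter2")
assert hsm.unwrap("alice", b"hunter2") == key
```

The point of the sketch: stealing the stored blobs gains nothing, because the encryption key is random rather than password-derived, and guessing passwords has to go through the gatekeeper’s attempt counter.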
Any open source or cheap ham out there?
You really want to avoid cheap ham, as it may have parasites :D (Sorry, I recognize the suffering of autocorrect vs. “hsm” :D)
There are two aspects to a commercial HSM (vs say a yubikey):
The first is the software. For this, what you want is a very small, very simple OS, since an HSM is a case where the trade-off between entirely verifiable software and extra features tilts all the way to the former (you don’t want any software on an HSM that isn’t directly tied to the functions the HSM provides).
Next there’s the hardware. This is where things get hard, as an HSM is expected to be secure against a person with physical access, so you have both the electronic design to be aware of, as well as the physical design. Even if someone does have an open source design, the actual manufacture is expensive - many HSM chips are hardened at the silicon level, with layout and patterning done such that even decapping the chip and then using an electron microscope does not expose the on-die data. Suffice to say this means you can’t use an FPGA or generic ASIC manufacturing, which ramps up the price.
The HSMs are then generally wrapped in many layers of plate steel, etc., adhered to various parts of the board so that removing the plates also breaks things (for example cracking various dies).
While writing this I discovered that Yubico has started making an “affordable” HSM product at only $650 ($950 with FIPS certification), which looks like it covers the core cryptographic primitives; you’d only have to manage the storage of the secured data.
the approach of having your parser written in a DSL inside a string is so jarringly modern to see used in C.
But then what’s jarring in the other direction is how the lexer uses some behind-the-scenes global state instead of an explicit instance: clexRegisterKind("return", RETURN) :/
good: it’s extremely clean, it’s an actual proper language designed top-down for the problem of describing builds, it contains tons of support for all the common things “unixy” projects need, it’s just a joy to use 99% of the time
frustrating: occasionally the inflexibility hits you, specifically issue #2320 is the one I’m familiar with
What is good about Meson?
What is frustrating about Meson?
Adding to the others mentioned so far, good: the --enable-this / --enable-that (autoconf) style interfaces.
Frustrating:
Overall, though, it is by far the least bad build system out there in my opinion. It has its issues, but doesn’t have any deeply-baked “philosophical” issues that I see as particularly problematic, so I feel pretty secure in investing in it.
Because of the specification language, or other aspects? How about GN? Do you have an example of a build system with little “cognitive load”?
There’s a bunch of “magic” and especially in build systems I dislike magic. I tend to do weird enough stuff that I end up having to care what’s actually going on. I like make because it’s just a way of stringing together rules for building inputs into outputs. I like ninja because it’s like make but stupider. I don’t mind GN because it generates ninja so you can work out what it’s actually doing, though in $DAYJOB our GN is too complex.
Interesting. So then it’s not actually because of the language, but more because there are too many features to fully comprehend and control. A major difference from make is that you have to specify the details of the process, whereas in Bazel, GN and the like you rather specify the goal, at a higher level. Ninja is essentially a subset of make and not intended to be manually written. It’s interesting that you consider Ninja easier to understand than GN. I came to the conclusion that systems like Bazel or GN are difficult to understand because they are fully dynamic languages where you actually have to run the build to understand what happens; that’s why I implemented a statically typed language in BUSY, which is amenable to static analysis - no surprises that you only see at runtime.
It’s probably largely having used bazel with large, existing, complex, slightly buggy bazel rule sets, while having used simpler GN build systems and contributed to their complexity and bugs myself.
Not in any serious way. For me anyway, mainly releasing C and C++ source code intended to be built by a diverse range of people on an impossibly diverse set of platforms, any weird/esoteric/unknown build system is a huge liability in many ways.
That said, specifically to Bazel: I don’t do Java, and investing in anything internal from Google is a fool’s errand. gyp, then gn, then bazel… probably even more I haven’t heard of. I know internal engineer wanking when I see it. To be fair, they seem to be treating bazel a bit more seriously and it’s not egregiously half-baked like its predecessors, but still.
I know nothing of Pants other than that it’s clearly not for me.
Good (compared to CMake):
Inflexible (compared to CMake):
For better or for worse, the language itself is not extensible like CMake’s. Of course, it has custom targets, so it’s not that there’s anything you can’t do at build time. But the language itself is not Turing-complete. You have to add your missing feature to Meson itself. Maybe great in the long run if you can upstream it; not so much if it’s just for internal use. It also means that Meson accumulates features fast - good luck rewriting it in Rust.
WebAssembly in more places.
AI fears.
I hope zig starts beating rust as a systems language this year.
I don’t understand under what circumstances I would choose zig if I had already learned Rust, so I see them as competitors.
Zig’s comptime metaprogramming is very competitive with Rust’s const eval and macros, but feels simpler to write to me. I think once Zig hits 1.0 (sometime in 2025?) and there is a larger ecosystem for it, it will be more compelling for people starting new projects to use it. I know I’d be using it more if there was a good selection of math / statistics packages available. @andrewrk has had a few ideas for memory safety that have yet to be implemented / tried out, and I think if there are real safety options it will give Zig a huge opportunity to grab up market share.
This is probably adding on the competition bit: I know Rust and I am looking at Zig. I doubt my ability to write good, fast code in a language that’s as huge as Rust. I also feel that “knowing” Rust isn’t something you do passively, it’s basically a part-time job. It’s not one that I find particularly rewarding, as language design is neither a hobby of mine, nor something I’m professionally interested in, and it takes up a lot of time that I would much rather spend writing useful programs.
On the other hand I doubt my ability to write correct, non-leaky code in a language as hands-off as Zig… For something small and simple, sure, but anything with decent complexity is bound to end in memory management mistakes I think
Oh, Zig is fantastic at telling you if you leak memory. The equivalent of valgrind is baked into the tooling.
I also feel that “knowing” Rust isn’t something you do passively, it’s basically a part-time job.
Funny enough I’ve had this same feeling about C++. Conversely, keeping up with C#, Java, and even C doesn’t feel so mentally taxing.
Oh, yeah, I wanted to say “just like C++” but I thought that was going to be a little too inflammatory, and I have C++ PTSD from my last big C++ project. It’s driving me nuts. You would expect a language that has so much more expressive power than C, and can encode much safer idioms, to have less churn than C, not more.
IMHO this is mostly a failure of the C++ committee though. The complexity of the language and standard library, and the way it was (mis)managed, has spawned a huge, self-feeding machine of evangelists, consultants, language enthusiasts and experts, and a very unpleasant language feature hustle culture. I’ve seen a lot of good, clean, smart, super efficient C++ code, and most of it appears to have been made possible by a) a thorough knowledge of compiler idioms and b) ignoring this machine. Unfortunately, the latter is hard to do unless it’s organizationally enforced.
Nah, Zig needs to at least do 1.0 (well, as an alternative, Rust can do 2.0) to start to dream about outcompeting Rust :P
WebAssembly in more places.
Came here to post exactly this!
I think it won’t quite break through to mainstream mainstream, but I think the reasons and needs will slowly start to become more apparent. In a world where Google Chrome is the dominant operating system, more stuff is running “at the edge”, and the unit of computation is becoming smaller and simpler on the surface, I think WASM and WASI have a strong competitive head start on solving some of these problems.
It won’t quite be the “write once run anywhere” of the JVM era, but I reckon it has a pretty fair shot at getting close enough to be useful!
I’ve definitely had similar situations with C++, where MSVC and GCC were as useless as human brains while clang instantly made it obvious what the error was.
What you actually want, if you really want a “C style” linked list, esp. in an embedded setting, is intrusive collections.
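For illustration, here’s a minimal Rust sketch of the intrusive idea: the link field lives inside the element itself, C-style, rather than in a separately allocated list node. This is a toy with raw pointers (the `Task`/`TaskList` names are made up for the example), not the API of any particular crate, and it leaves lifetime discipline to the caller the way C does.

```rust
// Intrusive singly linked list sketch: each Task embeds its own link.
struct Task {
    id: u32,
    next: *mut Task, // the intrusive link, embedded in the element
}

struct TaskList {
    head: *mut Task,
}

impl TaskList {
    fn new() -> Self {
        TaskList { head: std::ptr::null_mut() }
    }

    // Safety: caller must keep `task` alive (and un-moved) while listed.
    unsafe fn push_front(&mut self, task: *mut Task) {
        (*task).next = self.head;
        self.head = task;
    }

    // Safety: all listed tasks must still be alive.
    unsafe fn ids(&self) -> Vec<u32> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while !cur.is_null() {
            out.push((*cur).id);
            cur = (*cur).next;
        }
        out
    }
}

fn main() {
    let mut a = Task { id: 1, next: std::ptr::null_mut() };
    let mut b = Task { id: 2, next: std::ptr::null_mut() };
    let mut list = TaskList::new();
    unsafe {
        list.push_front(&mut a);
        list.push_front(&mut b);
        println!("{:?}", list.ids()); // → [2, 1]
    }
}
```

The appeal in embedded settings is that listing an element requires no allocation at all; the cost is exactly the unsafety the parent comments are worried about, which is what safe intrusive-collection libraries wrap up.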
Can one claim that the three finger salute holds up better against XScreensaver and this bug? Does the fact that Windows is natively graphical play to its advantage?
I don’t know if it holds up better, but the screensaver is on a separate desktop from the default desktop, which is itself separate from the login screen’s desktop. These desktops are different from X11-style virtual desktops, and act as a sort of security boundary. The screensaver simply crashing will not lead to the desktop being switched, IIUC, and calls to SwitchDesktop are guarded against from those secured desktops.
This seems similar to having the lock screen be on a different VT, say directly in the login manager – I think gdm should work like that these days…
I think it probably would have been fine to implement an API where the service simply requested a random password and then stored it in the existing browser password sync.
Question to lobsters: Would this have been fine? It sounds pretty fine.
No, it does not work. WebAuthn etc. isn’t complicated simply for the hell of it.
First, we already have that: every browser supports random password generation; the biggest problem is sites blocking those passwords because of absurd rules. Essentially you’re asking for something that browsers already do, except when the site actively breaks secure password use. From a lock-in perspective it’s essentially the same as webauthn: your logins are tied essentially to one password manager.
Further, a simple random key breaks if you ever have a MITM or XSS hole: because the password is tied to the site and is unchanging, either of these attacks leaks the secret, and the secret can subsequently be reused.
The complexity of webauthn is a baseline requirement for an actual security credential system. The challenge-response handshake ties both ends of the connection together directly, preventing forwarding/proxy attacks (which already happen against existing 2fa systems). Both ends of the handshake verify the domain involved in the handshake, so an error on either end (xss, mitm, phishing, …) can be blocked by at least one party.
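A toy model of that origin binding (not real WebAuthn, and with a stdlib hash standing in for a real asymmetric signature): because the client signs the server’s fresh challenge together with the origin it actually sees, a response proxied through a phishing domain fails verification at the real server. All names and values here are made up for the sketch.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy "signature" over (secret, challenge, origin). A real authenticator
// signs with a per-site private key; this only models WHAT is bound.
fn sign(secret: u64, challenge: u64, origin: &str) -> u64 {
    let mut h = DefaultHasher::new();
    secret.hash(&mut h);
    challenge.hash(&mut h);
    origin.hash(&mut h);
    h.finish()
}

fn main() {
    let secret = 0xdead_beef; // stands in for the credential's private key
    let challenge = 42; // fresh random challenge issued by the server

    // Legitimate flow: the client binds the origin it is actually on.
    let good = sign(secret, challenge, "https://example.com");

    // Phishing proxy: the user is on the attacker's origin, so that origin
    // gets baked into the response instead.
    let phished = sign(secret, challenge, "https://examp1e.com");

    // The server always verifies against its own origin.
    let expected = sign(secret, challenge, "https://example.com");
    assert_eq!(good, expected);
    assert_ne!(phished, expected);
    println!("origin binding rejects the proxied response");
}
```

A static password has no step like this: whatever the proxy captures works verbatim against the real site, which is the point the parent comment is making.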
This blog post is basically uninformed nonsense that demonstrates a failure to actually understand what it’s complaining about, instead taking an “anything I don’t understand must be wrong” approach to security.
Now, the whole webauthn/fido/passkey system kind of brought some of this nonsense on itself by pushing “biometric” security so hard. Honestly, the only thing that requiring user authentication defends against is a person who already has physical access to your unlocked device, and even then it doesn’t have to be biometric.
The post gets way better towards the end, it just has some provocative things in the beginning :)
your logins are tied essentially to one password manager
Thing is that those are way more cross-platform than current (Apple and Google) passkey providers. But if password managers will be allowed by the platforms to handle passkeys, that will be solved.
But if password managers will be allowed by the platforms to handle passkeys, that will be solved.
I was about to say that that would be an obvious step [and was what I intended to imply], but then I remembered capitalism :-/
But the more general issue is that if you have an actual HSM you absolutely do not want any way to extract the private key material from it; what you want is to ask the HSM to give you a handle for a given private key, then ask the HSM to decrypt or sign or what have you, providing the handle and the data to operate over. So any easy migration path runs into that: you can only support easy migration if you’re willing to take a significant security reduction vs what you could theoretically accomplish.
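A rough sketch of that handle-based interface (the type names and the toy “signature” are made up for illustration; a real HSM speaks something like PKCS#11): callers only ever hold opaque handles, and no method on the device returns key material.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct KeyHandle(u64);

// Toy HSM: key material lives only inside this struct, behind handles.
struct Hsm {
    keys: HashMap<KeyHandle, u64>, // private material, never exposed
    next: u64,
}

impl Hsm {
    fn new() -> Self {
        Hsm { keys: HashMap::new(), next: 0 }
    }

    // Generate a key inside the device; only an opaque handle escapes.
    fn generate_key(&mut self) -> KeyHandle {
        let h = KeyHandle(self.next);
        self.keys.insert(h, 0x5eed ^ self.next);
        self.next += 1;
        h
    }

    // "Sign" data with the key the handle refers to (toy xor stand-in).
    fn sign(&self, h: KeyHandle, data: u64) -> Option<u64> {
        self.keys.get(&h).map(|k| k ^ data)
    }

    // Deliberately: no method that returns the key bytes themselves.
}

fn main() {
    let mut hsm = Hsm::new();
    let h = hsm.generate_key();
    let sig = hsm.sign(h, 1234).unwrap();
    println!("signature: {:x}", sig);
}
```

Migration under this model can only mean asking the old device to re-wrap material for a new device (as described for Apple’s sync below), never handing the raw key to software.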
See my other comment — the thing with passkeys is they’re already that security reduction. They’re already “cloud” synced — but only within Apple or Google which is what feels unfair to users.
It’s not entirely unreasonable to argue that allowing third party apps to handle that instead is a further potential security reduction, however it’s also fair to argue for user choice over security in this specifically.
the thing with passkeys is they’re already that security reduction.
The way syncing of secrets on Apple hardware works, at least, does not extract the raw material out of the HSM: the material to be synced is encrypted by the HSMs to keys from the already-approved devices.
I thought they might do something like that. But then, does that mean there’s no recovery from losing all devices at once? That’s not great :/
The general security model for Apple’s end-to-end encrypted services is that loss of all devices is very close to loss of all e2e encrypted data.
Now there is the “cloud key vault” (Matt Green has a good write up: https://blog.cryptographyengineering.com/2016/08/13/is-apples-cloud-key-vault-crypto/) which can recover enough info to recover normal encrypted iCloud data, but I’m not sure whether it contains info that can be used to recover synced keychain items (even just basic passwords).
It is quite easy to export passwords from one password manager and import them into another. There’s a CSV format that’s generally recognized across the industry.
Personally I think the ability to do that is a baseline requirement of any sort of auth system. If your auth system precludes that, so much the worse for it.
Yup, and it’s also quite easy to leak passwords because of that.
Importantly, however, “protect user secrets at all costs” isn’t a requirement of webauthn, so there’s nothing stopping an implementation from providing such export functionality, assuming it believes the usability gain warrants the reduction in security.
If it can be MITM’d once, then you’re screwed. With private keys, that wouldn’t work.
Also, for what it’s worth, the current JS API for webauthn evolved from a proposal for just the API being talked about. The possibility for it is still there, but AFAIK every browser has moved on to only doing webauthn through it.