This HN comment explains what’s wrong with the proof. Having read the paper, I agree with the analysis:
Unsurprisingly, there’s nothing here. Most of the paper describes a brute force search across all possible variable assignments in the form of a graph (with some pointless polynomial improvements like making it a trie), where you build a path of vertices representing each set of truth values that satisfies a given expression. This clearly has exponential size, which the author alludes to in the “improvements” section by noting it does “redundant work”. This is addressed by collapsing the exponential graph down to have only one vertex for each variable*expression pair (if you ignore the trie) and adding exponentially many labels for the different paths to reach a given vertex. (incidentally, to the extent that it’s described clearly, it seems like the improved layered graph would basically be the same as the original non-layered graph)
The final complexity discussion uses the graph size constraint gained by the “improvement” but doesn’t consider how to handle the extra labeling meaningfully. Basically, the pre- and post-improvement algorithms put the exponential work in different spots, and the sloppiness of the algorithm description (I mean, really, why tell us you’re using a stack for BFS and then have “determine the subset of satisfied constraints” as a step) makes it easy to ignore.
I’m also being a little generous with the algorithm itself. As described, some of the trie optimizations seem to make certain combinations of satisfied expressions impossible to notice, but I think it’s not a big deal to make this part work. The properties of the trie structure (and of sorting the variables by occurrence, for that matter) don’t seem to be used.
What is this hex0 program that they are talking about? I don’t understand how that is the starting point, could someone expand?
The program is here: https://github.com/oriansj/bootstrap-seeds/blob/master/POSIX/x86/hex0_x86.hex0
It’s a program that reads ASCII hex bytes from one file and outputs their binary form to the second file.
Yeah, I think this is pretty confusing unless you’re already very guix-savvy; it claims to be fully bootstrapped from source, but then in the middle of the article it says:
There are still some daunting tasks ahead. For example, what about the Linux kernel?
So what is it that was bootstrapped if it doesn’t include Linux? Is this a feature that only works for like … Hurd users or something?
They bootstrapped the userspace only, and with the caveat that the bootstrap is driven by Guix itself, which requires a Guile binary much larger than the bootstrap seeds, and there are still many escape hatches used for stuff like GHC.
Reading the hex0 thing, it looks like this means that if you are on a Linux system, then you could build all of your packages with this bootstrapped thing, and you … basically just need to show up with an assembler for this hex0 file?
One thing about this is that hex0 calls out to a syscall to open() a file. Ultimately in a bootstrappable system you still likely have some sort of spec around file reading/writing that needs to be conformed to, and likely drivers to do it. There’s no magic to cross the gap of system drivers IMO.
Hex0 is a language specification (like brainf#ck but more useful)
No, you don’t even need an assembler.
hex0.hex0 is an example of a self-hosting hex0 implementation.
hex0 can be approximated with: sed 's/[;#].*$//g' "$input_file" | xxd -r -p > "$output_file"
There are versions written in C, assembly and various shells, and as it is only 255 bytes it is something that can be hand-toggled into memory or created directly in several text editors or even via BootOS.
It exists for POSIX, UEFI, DOS, BIOS and bare metal.
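For anyone who wants the behaviour in a runnable form, here is a rough, hypothetical Go sketch of what the description above says hex0 does (read hex digit pairs, skip ;/# comments to end of line, write the raw bytes to a second file). The real seed is of course hand-written machine code, and details such as exact comment handling may differ.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"os"
)

// isHexDigit reports whether b is an ASCII hex digit.
func isHexDigit(b byte) bool {
	return (b >= '0' && b <= '9') || (b >= 'a' && b <= 'f') || (b >= 'A' && b <= 'F')
}

func main() {
	if len(os.Args) != 3 {
		fmt.Fprintln(os.Stderr, "usage: hex0 <input> <output>")
		os.Exit(1)
	}
	src, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	// Collect hex digits, ignoring ';' and '#' comments through end of line.
	var digits []byte
	inComment := false
	for _, b := range src {
		switch {
		case b == ';' || b == '#':
			inComment = true
		case b == '\n':
			inComment = false
		case !inComment && isHexDigit(b):
			digits = append(digits, b)
		}
	}
	// Decode pairs of digits into raw bytes and write them out.
	out := make([]byte, len(digits)/2)
	if _, err := hex.Decode(out, digits[:len(out)*2]); err != nil {
		panic(err)
	}
	if err := os.WriteFile(os.Args[2], out, 0o644); err != nil {
		panic(err)
	}
}
```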
I have no existing insight, but it looks like https://bootstrapping.miraheze.org/wiki/Stage0 at least tries to shed some light on this :)
I had a look around to see how Tree Borrows relates to Stacked Borrows. Looks like there are some problems with Stacked Borrows that Tree Borrows aims to fix. Tree Borrows comes from Ralf Jung’s team; Ralf developed Stacked Borrows.
I haven’t found a concise outline of the problems with Stacked Borrows and how Tree Borrows addresses them.
The main sales pitch, at least for me, is the copy_nonoverlapping example. This fixes a particularly nasty problem with stacked borrows (the best example I know is this: https://github.com/rust-lang/rust/issues/60847#issuecomment-492558787). I personally hit that more or less every time I write non-trivial unsafe code.
Having read the stacked borrows paper, my main beef with them is that the rules for raw pointers are hard to think about. Said paper pointed out that a tree-based model would probably work better here, and I am happy to see this research being done.
As raggi points out on the orange website:
It could be worse, the user has the old rsa host key present alongside newer ed / ecdsa keys, they may never rotate out the rsa one. A future mitm simply only advertises the rsa key, and mitm passes.
Users will need to actively remove the old rsa key in order to be safe.
Okay, I tested this and on a new enough OpenSSH client, the RSA key gets replaced using the mechanism described here: https://lwn.net/Articles/637156/ (if you connect using a key other than RSA).
But this is also a very bad thing in the other direction. If you are MITM’d first, they can “update” the other keys if you connect with RSA first, right?
I mean, it doesn’t really make any difference? The only situation where this makes any difference is if the user takes no action to update the keys manually, and in that case the MITM will continue as long as the man is in the middle, whether this mechanism exists or not. And then once you connect to actual GitHub, the fact that you got MITMed will be more noticeable.
I would hesitate to call the implementation of PhantomData pristine – it’s a lang item, which means it gets special-cased in the compiler, very much unlike std::mem::drop.
I was curious, so I looked up where the special case happens:
Really enjoying this article for its historical analysis of the earlier research languages that inspired Rust’s ownership & borrowing rules. Also there are some great quotes from Rust users, like
“Learning Rust Ownership is like navigating a maze where the walls are made of asbestos and frustration, and the maze has no exit, and every time you hit a dead end you get an aneurysm and die.”
and
“I can teach the three rules [of Ownership] in a single lecture to a room of undergrads. But the vagaries of the borrow checker still trip me up every time I use Rust!”
But the really interesting quote, to me, is
They randomly assigned students to two groups, one having to complete the assignment using the Rust standard library data types, and one using a garbage-collected wrapper type (called “Bronze”) which enabled a number of additional aliasing patterns to pass the borrow-checker, thus removing the need for more complex aliasing patterns and datatypes.
They found a significant difference in the rate of completion and the self-reported time to completing the assignment. The students who used Bronze on average took only a third as much time as the control group, and were approximately 2.44 times more likely to complete the assignment.
This reflects my own opinion that I would prefer some lifetime “violations” to be addressed by using ref-counting or GC, rather than turning them into errors. An example of this is the way Go will allocate structs on the stack when possible but promote them to GC objects when escape analysis shows that would cause use-after-return bugs. Or the way Lobster/Swift/ObjC/Nim use ref-counting but are able to eliminate many of the retain/release operations based on static analysis. Essentially this turns a lifetime-checker into a performance optimization tool, not a gatekeeper that decides whether you can build at all.
Lifetimes aren’t just about memory management. Iterator invalidation is also a common category of bugs that the borrow checker prevents. Pretty sure the thread safety guarantees also make use of it, to make sure you don’t have any lingering references to data owned by a mutex once you release the lock.
Performance isn’t the only part of Rust’s value proposition, and it’s the easiest one to catch up with.
I presume that in the context of this experiment they would write some iterator implementations which don’t become invalid. In a single threaded context this should almost always be possible with some slowdown.
E.g. for a Vec analogue, your iterator keeps an Rc pointer to the Vec, and a current index, and it re-checks that the index is still in range on every call to get the next item.
E.g. for a btree analogue, the iterator stores the last retrieved key, and the call to get the next item asks the underlying btree for the first item > the last retrieved key. This makes iterating a btree O(n*log(n)) instead of O(n), which is a little bit dramatic.
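Sketching the Vec analogue in Go rather than Rust (my own illustration; Go's garbage collector stands in for the Rc/Bronze pointer described above), the shape would be roughly:

```go
package main

import "fmt"

// SafeIter is a hypothetical sketch of the "shared pointer + index, re-check
// bounds on every step" iterator described above. The GC keeps the backing
// slice alive as long as the iterator does, so mutation during iteration
// can't invalidate it; the price is a bounds re-check on every call to Next.
type SafeIter[T any] struct {
	data *[]T // shared, possibly mutated between calls
	i    int
}

func (it *SafeIter[T]) Next() (T, bool) {
	var zero T
	if it.i >= len(*it.data) { // re-check the index is still in range
		return zero, false
	}
	v := (*it.data)[it.i]
	it.i++
	return v, true
}

func main() {
	xs := []int{1, 2, 3}
	it := SafeIter[int]{data: &xs}
	for v, ok := it.Next(); ok; v, ok = it.Next() {
		fmt.Println(v)
		xs = xs[:1] // shrink mid-iteration; the iterator just stops early
	}
}
```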
This is kind-of what Objective-C fast enumeration does and it isn’t great for usability. Code will now throw an exception on mutation of the underlying collection and you can’t statically verify that it won’t. Checking that the index is in bounds isn’t sufficient; this can lead to TOCTOU bugs if you check a property of the current element, insert something before it, and then operate on a different object by mistake.
I’ve wondered what an “easy mode” rust would look like.
I don’t feel like cloning and the borrow rules were my main barriers though. If you don’t care about allocations you can approximate a GC-ish approach by cloning all over the place.
The problem with making everything mutable and cloning everywhere all the time is that you lose guarantees. Part of the feedback that I like about Rust (after having used it for a while) is that my type signatures for functions and structs help inform me about the impacts of my design decisions.
Right — I said recently in some other thread that once you start using Rust’s escape hatches like Rc you lose the compile-time checking, which is such an important part of the value proposition.
But if we want GC or RC everywhere, we already have that and don’t need Rust? Rust is for when you want the control to decide for yourself when that is OK and when it is not.
You’d still have a choice — the compiler would only use RC if it couldn’t determine the object’s lifetime statically. There could be an optional compiler flag that raises a warning (or error) when this happens, exactly like those verbose borrow-checker warnings, if you don’t want that to occur.
Or alternatively, the warning/error is on by default but there’s a notation you put in the source code at that spot to say “it’s OK to promote this to RC here if necessary.” (A simple notation that doesn’t require you to change the object’s type or calling convention everywhere!)
This is hand-wavey and maybe impossible, but it’s Sunday morning and a boy can dream.
I am pretty sure that it is not merely possible, but already done 25..30 years ago in Jeffrey Mark Siskind’s Stalin - IIRC, his fallback was Boehm-Weiser, not RC. (EDIT: His early impls were in the very early 90s as a concept fork from the Dylan effort; “Sta-lin” sort of referenced “static language” or “brutally optimizing” or etc.)
For the annotation version, I guess “just” making Arc<T> more compatible with &mut T basically is this?
I don’t see why one would ever need to get an encrypted message that they are unable to decrypt, while being assured that it was encrypted correctly. The article desperately needs a motivation section.
I agree that it’s difficult to think about concrete use cases, but we do have them! One small correction to your message: you do need to be able to decrypt the message, but you don’t want to do that unless it’s an emergency. For example, you don’t want to decrypt it since doing so can be a potential security problem. At the same time you want to be sure that the encryption was correctly done so that when you need to access the message you know it’s there waiting for you.
We will share them in the future; we need to get approval from one customer.
Does the verifier get the prover’s DH share? If not, that then becomes the new key - what happens to that? Otherwise, the verifier is just choosing not to perform the decryption at the moment, and I don’t see how that gives you any security.
Then what guarantees does anyone have that if you decrypted the message, it was indeed an emergency? I don’t see the point.
The important part is to make sure that the message is there when you need it. Opening it if it’s not an emergency isn’t an issue, but you don’t gain anything doing that. What you want is to verify that the message was encrypted so that you can open it when needed.
As with most cryptographic primitives, they don’t have many use cases at the beginning until somebody starts using it and shows why they are useful. We already have two use cases where this will be used in a product. One for a client and another for a product we’re building.
I’d assume bcrypt and scrypt, with most implementations setting good input costs by default or as a lower bound (and higher depending on CPU speed). Both bcrypt and scrypt have memory requirements in addition to CPU requirements, making it more costly to use certain hardware such as ASICs and GPUs.
No, bcrypt/scrypt/etc are still fundamentally solving a different problem, and would essentially just be a PBKDF if used as I think you’re suggesting. Obviously using either of these options would be superior to not doing so, but the actual secure solution here is policy-gating via HSM.
the only problem is that the HSM is something you can physically lose, and a passphrase is in your brain forever (modulo amnesia…)
with how Apple/Google sync FIDO2 passkeys between devices, it is a multi-device system that gets the same keys decryptable by multiple HSMs, but (I’m not sure which option they picked tbh, probably the first one?) such a system either is completely non-recoverable if you lose all devices simultaneously, or is doing “normally” (non-HSM) encrypted “cloud” backup.
the only problem is that the HSM is something you can physically lose, and a passphrase is in your brain forever (modulo amnesia…)
If you are a company providing a service like LastPass, you should not be in a position to lose the HSM.
with how Apple/Google sync FIDO2 passkeys between devices, it is a multi-device system that gets the same keys decryptable by multiple HSMs
I can’t speak for how google’s passkey syncing works, but I would assume/hope the same as what I’m about to say. Apple’s works over the synchronized keychain mechanism, which is fully end-to-end encrypted with actual random keys, not HSM based (we’ll circle back in a bit). When you add a new device to your apple account, that device has to be approved by one of your other existing devices, and it is that approval that results in your existing device wrapping the account key material to the new device’s keys and sending those wrapped keys to the new device. Once the new device gets that packet it can decrypt the remainder of the keychain material. Each device keeps its own private keys and the account key material protected by the local secure environment.
Note that even the old non-e2e encrypted iCloud backups did not back up keychain material, so compromising the backup infrastructure would not provide access to passwords, passkeys, etc. The concern of course is that for many governments/organisations trawling your backups is pretty much all that’s wanted, as it just means they have to wait for a backup to happen rather than being able to decrypt in real time. Happily e2e for everything is now an option for Apple’s cloud services.
Historically, losing your account password (so that resetting the account password is required) would as a byproduct mean losing your synced keychain, so if you didn’t have it locally the data was gone. There is a last-ditch backup called something like “iCloud Key Vault” or some such, which is the marketing name for the large-scale and robust HSM setups required given the data being protected. These are policy-gated HSMs that devices can back up some core key material to (Ivan Krstic has a Black Hat talk from a few years ago that goes over them, but essentially you take a bunch of HSMs, get them all to synchronize with each other, then blend the admin cards and have them all roll their internal keys so there is no way to install new software, rely on previously recorded key material, or install compromised hardware into an existing vault).
a company providing a service like LastPass, you should not be in a position to lose the HSM
Oh… you weren’t talking about having the HSM local to the user?? Server side HSM doesn’t seem to make sense to me for a password manager where decryption MUST happen on the client?
There are two levels:
Recovery path - this is an HSM + policy system where the user key material is protected by HSM policy. This is dependent on the HSMs being configured to ensure that the HSM owner does not have access to the HSM’s key material. This is why we talk about an HSM’s security model having to include physical access to the HSM.
Protecting user data: PBKDFs are weak due to generally terrible user provided entropy, so what you do is you receive the user’s data encrypted by the user’s relatively poor entropy. Rather than just storing that, you ask your HSMs to encrypt it with an actual key gated by policy on something like the user’s account password.
The recovery path is obviously optional, but the latter is needed to defend against “hackers downloaded all our user data and that data is protected only by relatively weak entropy”.
The ideal case is a user having multiple devices, and then having new devices receive decryption keys from the existing ones. That means the data that gets uploaded to the servers for syncing are always encrypted with a true random key, and the concept of a “master key” ceases to be relevant.
I’m not suggesting anything. I merely pointed out what I think the person responding probably referred to.
The correct thing to do is to use the password + HSM to policy-gate access to the encryption keys. This is how modern devices protect your data.
Your device, whether protected by a passcode (phone) or a password (decent computer/hardcore phone :D), includes an HSM that Google calls a hardware-backed keystore and Apple calls a Secure Enclave (there’s also the similarly named “Secure Element”, but this is actually another coprocessor that runs a cut-down JVM for payments :D).
Anyway, in all implementations the HSMs use internal keys (generally internal to the CPU itself). These keys are then used to encrypt all data being stored via the HSM. Retrieving the data is done by providing credentials (your password, etc.) to the HSM; the HSM then policy-gates access, for example by counting attempts and enforcing timeouts itself. Because the HSM is performing this gating itself, it doesn’t matter how much CPU power the attacker has: there’s no precomputation, hashing, etc. they can do, and having access to the HSM-encrypted data is not brute-forceable because the HSM is encrypting with a true random key, not something derived from some kind of guessable password.
If LastPass folk had done this, then downloading the data would have been useless, and a fully local compromise would still not have been able to get raw data, as the attacker would still be forced to ask the HSM for data by providing username+password combos, and so be subject to the same attempt-count and timeout restrictions as a non-local attacker.
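As a toy illustration of that policy-gating idea (entirely hypothetical; the type names, credential check, and lockout schedule are invented, and a real HSM enforces this in hardware rather than in application code), the control flow looks something like this:

```go
package main

import (
	"crypto/subtle"
	"errors"
	"fmt"
	"time"
)

// Enclave is a toy stand-in for an HSM / secure element: the true random data
// key never leaves it, and it enforces its own attempt counting and lockouts,
// so an attacker who steals the wrapped data has nothing to brute-force offline.
type Enclave struct {
	dataKey     []byte // true random key generated inside the device
	credential  []byte // what the device checks the caller's passcode against
	failures    int
	lockedUntil time.Time
}

var errDenied = errors.New("policy: wrong credential or locked out")

// Unwrap releases the data key only after the credential check passes,
// with escalating delays enforced by the device itself, not the caller.
func (e *Enclave) Unwrap(credential []byte) ([]byte, error) {
	if time.Now().Before(e.lockedUntil) {
		return nil, errDenied
	}
	if subtle.ConstantTimeCompare(credential, e.credential) != 1 {
		e.failures++
		e.lockedUntil = time.Now().Add(time.Duration(e.failures) * time.Minute)
		return nil, errDenied
	}
	e.failures = 0
	return e.dataKey, nil
}

func main() {
	hsm := Enclave{dataKey: []byte("0123456789abcdef"), credential: []byte("hunter2")}
	if key, err := hsm.Unwrap([]byte("hunter2")); err == nil {
		fmt.Printf("owner gets the real key back: %q\n", key)
	}
	if _, err := hsm.Unwrap([]byte("guess1")); err != nil {
		fmt.Println("attacker:", err) // and now the device is locked for a while
	}
}
```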
Any open source or cheap ham out there?
You really want to avoid cheap ham as It may have parasites :D (Sorry, I recognize the suffering of autocorrect vs. “hsm” :D)
There are two aspects to a commercial HSM (vs say a yubikey):
The first is the software. For this, what you want is a very small, very simple OS, as an HSM is a case where the trade-off between entirely verifiable software and extra features falls firmly on the side of verifiability (you don’t want any software on an HSM that isn’t directly tied to the functions the HSM provides).
Next there’s the hardware. Now this is where things get hard, as an HSM is expected to be secure against a person with physical access, so you have both the electronic design to be aware of, as well as the physical design. Even if someone does have an open source design, the actual manufacture is expensive: many HSM chips are hardened at a silicon level, with layout and patterning such that even decapping the chip and then using an electron microscope does not expose the on-die data. Suffice to say this means you can’t use an FPGA or some generic ASIC manufacturing, which ramps up the price.
The HSMs are then generally wrapped in many layers of plate steel, etc that can be adhered to various parts of the board so that removing the plates also breaks things (for example cracking various dies, etc).
While writing this I discovered that Yubico has started making an “affordable” HSM product at only $650, or $950 with FIPS certification, which looks like it covers the core cryptographic primitives; you’d only have to manage the storage of secured data.
The mention of Intel ME including a JVM got me curious, and following the links seems to lead to some kind of an incomplete archived copy of the relevant website – only the first two slides can be seen.
Still to this day, my favorite pattern for UUIDs is to serialize a small amount of relevant metadata (enrollment year, name, student id, in this case) and encrypt / pad that byte string.
It looks like any ol’ UUID at first glance, you can pass it around safely, and you can retrieve that data quickly without DB round trips or regex. Great for request validation too!
It still suffers from the ‘embedded logic’ argument, but I feel like changing uuids is a no-no anyway.
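For illustration, here is a rough Go sketch of that pattern. The field layout, key handling, and output format are my own assumptions; note the result is not a spec-valid UUID (no version/variant bits are set), and, as pointed out below, there is little room left for integrity protection.

```go
package main

import (
	"crypto/aes"
	"encoding/binary"
	"fmt"
)

// opaqueID packs a little metadata into one 16-byte block, encrypts it with a
// single AES block operation, and formats the result to look like a UUID.
// Decrypting the same block with the same key recovers the metadata without
// any database round trip.
func opaqueID(key []byte, year uint16, studentID uint32) (string, error) {
	var block [16]byte
	binary.BigEndian.PutUint16(block[0:2], year)
	binary.BigEndian.PutUint32(block[2:6], studentID)
	// The remaining bytes stay zero as padding.

	c, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes long
	if err != nil {
		return "", err
	}
	var out [16]byte
	c.Encrypt(out[:], block[:])
	return fmt.Sprintf("%x-%x-%x-%x-%x",
		out[0:4], out[4:6], out[6:8], out[8:10], out[10:16]), nil
}

func main() {
	key := []byte("0123456789abcdef") // demo key only; manage real keys properly
	id, err := opaqueID(key, 2022, 123456)
	if err != nil {
		panic(err)
	}
	fmt.Println(id)
}
```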
If any of that changes for any reason, your ID has to change if you rely on it anywhere.
I can see fat-fingered entry of all of those fields causing you heartache.
Be careful about trusting data like year read from this, because encrypted data can still be manipulated. In a UUID you most likely won’t have enough bits left to add a proper HMAC.
Ah, yes, printf(3), the well-known OCaml function.
The quality of this explanation is stunning. It is the first time I’ve seen FFT explained such that the complex roots of unity are an obvious solution, and not some convoluted math with no clear origin or purpose.
I know video submissions are not as attractive as articles, but I highly encourage you to check this one out.
I haven’t watched the video yet, but I’ve saved it in my Watch Later list.
Another video that I liked regarding FFT - https://www.youtube.com/watch?v=spUNpyF58BY
Ah, I’ve seen this one. Always good stuff from 3blue1brown. Though that’s about the Fourier transform in general, from a mathematical perspective.
I feel pretty lucky that from very early on we were pretty far from this at $WORK. But we do still have “operational stuff that gets pretty dangerous”, and it’s all kinda scary (like most toil honestly).
One recent strategy has been to build runscripts based off “do-nothing scripts” (see this post).
It establishes all the steps, allows for admonitions, and sets up ways to provide good automation.
One thing that has been pretty nice is hooking this up with our cloud provider CLI tooling to allow for more targeted workflows (much more assurance that you are actually in prod or in staging etc), and then turning our “do-nothing script” into a proper runbook that just does the thing that needs to happen.
One trick here for people who have dog-slow CI or the like, is to have this in a repo that is more amenable to fast git merges. That + having a staging environment that is allowed to be broken temporarily gets rid of a lot of excuses. Along with giving people the space to actually do this kind of work!
If you have anything outside your own project that relies on the staging environment, you will also need some other environment that is not allowed to break.
Working as a front end developer against a staging environment that is always breaking is really bad for morale.
This sshd got started inside the “doubly niced” environment
As for why “the processes didn’t notice and then undo the nice/ionice values”, think about it. Everyone assumes they’re going to get started at the usual baseline/default values. Nobody ever expects that they might get started down in the gutter and have to ratchet themselves back out of it. Why would they even think about that?
These days, this should stand out as a red flag – all these little scripts should be idempotent.
You shouldn’t write scripts where if you Ctrl-C them, and then re-run it, you’ll get these “doubling” effects.
Otherwise if the machine goes down in the middle, or you Ctrl-C, you are left with something that’s very expensive to clean up correctly. Writing idempotent scripts avoids that – and that’s something that’s possible with shell but not necessarily easy.
As far as I can tell, idempotence captures all the benefits of being “declarative”. The script should specify the final state, not just a bunch of steps that start from some presumed state – which may or may not be the one you’re in!
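As a contrived illustration of the difference (my own example, not from the original post), compare a step that mutates state with one that declares the final state:

```go
package main

import "fmt"

// The "step" version assumes a known starting state, so re-running it after a
// Ctrl-C compounds the effect; the "final state" version can be run any number
// of times and always converges to the same result.
var niceValue = 0

func bumpNice(delta int) { niceValue += delta } // step: not idempotent
func setNice(value int)  { niceValue = value }  // final state: idempotent

func main() {
	bumpNice(10)
	bumpNice(10) // "re-run after interruption": now 20, not 10
	fmt.Println("after two bumps:", niceValue)

	setNice(10)
	setNice(10) // re-running is harmless
	fmt.Println("after two sets:", niceValue)
}
```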
I guess there is not a lot of good documentation about this, but here is one resource I found: https://arslan.io/2019/07/03/how-to-write-idempotent-bash-scripts/
Here’s another one: https://github.com/metaist/idempotent-bash
I believe the “doubly niced” refers to “both ionice and nice”. There wasn’t any single thing being done twice by accident. The issue is with processes inheriting the settings due to Unix semantics.
The problem is the API - it increments the nice value rather than setting it. From the man page:
The nice() function shall add the value of incr to the nice value of the calling process.
So the nice value did end up bigger than desired.
That is an interesting quirk of nice()/renice, but in this case I believe they explicitly stated they originally set the nice value to 19, which is the maximum.
Ah yeah you could be right …
But still the second quote talks about something related to idempotence. It talks about assuming you’re in a certain state, and then running a script, but you weren’t actually in that state. Idempotence addresses that problem. It basically means you will be in the same finishing state no matter what the starting state is. The state will be “fixed” rather than “mutated”.
Hmm, I still don’t think this is the case. The state being dealt with is entirely implicit, and the script in question doesn’t do anything with nice values at all, and yet still should be concerned about them.
I find the complaints about Go sort of tedious. What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors? For some reason, the author finds the first unacceptable but the second laudable, but practically speaking, why would I care? I write Go programs by stubbing stuff to the point that I can write a test, and then the tests automatically invoke both the compiler and go vet. Whether an error is caught by the one or the other is of theoretical interest only.
Also, the premise of the article is that the compiler rejecting programs is good, but then the author complains that the compiler rejects programs that confuse uint64 with int.
In general, the article is good and informative, but the anti-Go commentary is pretty tedious. The author is actually fairly kind to JavaScript (which is good!), but doesn’t have the same sense of “these design decisions make sense for a particular niche” when it comes to Go.
What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors?
A big part of our recommendation of Rust over modern C++ for security boiled down to one simple thing: it is incredibly easy to persuade developers to not commit (or, failing that, to quickly revert) code that does not compile. It is much harder to persuade them to not commit code that static analysis tooling tells them is wrong. It’s easy for a programmer to say ‘this is a false positive, I’m just going to silence the warning’; it’s very difficult to patch the compiler to accept code that doesn’t type check.
What is the difference between using go vet to statically catch errors and using the compiler to statically catch errors?
One is optional, the other one is in your face. It’s similar to the C situation. You have asan, ubsan, valgrind, fuzzers, libcheck, pvs and many other things which raise the quality of C code significantly when used on every compilation or even commit. Yet, if I choose a C project at random, I’d bet none of those are used. We’ll be lucky if there are any tests as well.
Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.
(According to the docs only a subset of the vet suite is used when running “go test”, not all of them - “high-confidence subset”)
When go vet automatically runs on go test, it’s hard to call it optional. I don’t even know how to turn it off unless I dig into the documentation, and I’ve been doing Go for 12+ years now. Technically gofmt is optional too, yet it’s as pervasive as it can be in the Go ecosystem. Tooling ergonomics and conventions matter, as well as first party (go vet) vs 3rd party tooling (valgrind).
That means people who don’t have tests need to run it explicitly. I know we should have tests - but many projects don’t and that means they have to run vet explicitly and in practice they just miss out on the warnings.
Even in projects where I don’t have tests, I still run go test ./... when I want to check if the code compiles. If I used go build I would have an executable that I would need to throw away. Being lazy, I do go test instead.
Separating the vet checks from the compilation procedure exempts those checks from Go’s compatibility promise, so they could evolve over time without breaking compilation of existing code. New vet checks have been introduced in almost every Go release.
Compiler warnings are handy when you’re compiling a program on your own computer. But when you’re developing a more complex project, the compilation is more likely to happen in a remote CI environment and making sure that all the warnings are bubbled up is tedious and in practice usually overlooked. It is thus much simpler to just have separate workflows for compilation and (optional) checks. With compiler warnings you can certainly have a workflow that does -Werror; but once you treat CI as being as important as local development, the separate-workflow design is the simpler one - especially considering that most checks don’t need to perform a full compilation and are much faster that way.
Being an optional addition that you need to spend time to engage with makes a huge difference in how often the tool is used. Even if it’s just one command.
I feel that the Go team cares more about enabling organizational processes, rather than encouraging individual habits. The norm for well-run Go projects is definitely to have vet checks (and likely more optional linting, like staticcheck) as part of CI, so that’s perhaps good enough (for the Go team).
All of this is quite consistent with Go’s design goal of facilitating maintenance of large codebases.
Subjecting warnings to compatibility guarantees is something that C is coming to regret (prior discussion).
And for a language with as… let’s politely call it opinionated a stance as Go, it feels a bit odd to take the approach of “oh yeah, tons of unsafe things you shouldn’t do, oh well, up to you to figure out how to catch them and if you don’t we’ll just say it was your fault for running your project badly”.
The difference is one language brings the auditing into the tooling. In C, it’s all strapped on from outside.
Yeah, “similar” is doing some heavy lifting there. The scale is more like: default - included - separate - missing. But I stand by my position - Rust is more to the left than Go and that’s a better place to be. The less friction, the more likely people will notice/fix issues.
I’ll be honest, I get this complaint about it being an extra command to run, but I haven’t ever run go vet explicitly because I use gopls. Maybe I’m in a small subset going the LSP route, but as far as I can tell gopls by default has good overlap with go vet.
But I tend to use LSPs whenever they’re available for the language I’m using. I’ve been pretty impressed with rust-analyzer too.
On the thing about maps not being goroutine safe, it would be weird for the spec to specify that maps are unsafe. Everything is unsafe except for channels, mutexes, and atomics. It’s the TL;DR at the top of the memory model: https://go.dev/ref/mem
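For completeness, a minimal sketch of the conventional pattern that follows from that memory model (sync.Map is the alternative for some access patterns): a map shared between goroutines just gets guarded by a mutex.

```go
package main

import (
	"fmt"
	"sync"
)

// Counter wraps a map in a mutex, since plain maps are not safe for
// concurrent writes.
type Counter struct {
	mu sync.Mutex
	m  map[string]int
}

func (c *Counter) Inc(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.m == nil {
		c.m = make(map[string]int)
	}
	c.m[key]++
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc("hits")
		}()
	}
	wg.Wait()
	fmt.Println(c.m["hits"]) // 100; reading after Wait is safe
}
```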
Agreed. Whenever people complain about the Rust community being toxic, this author is who I think they’re referring to. These posts are flame bait and do a disservice to the Rust community. They’re like the tabloid news of programming, focusing on the titillating bits that inflame division.
I don’t know if I would use the word “toxic” which is very loaded, but just to complain a little more :-) this passage:
go log.Println(http.ListenAndServe("localhost:6060", nil))
…
Jeeze, I keep making so many mistakes with such a simple language, I must really be dense or something.
Let’s see… ah! We have to wrap it all in a closure, otherwise it waits for http.ListenAndServe to return, so it can then spawn log.Println on its own goroutine.
go func() { log.Println(http.ListenAndServe("localhost:6060", nil)) }()
There are approximately 10,000 things in Rust that are subtler than this. Yes, it’s an easy mistake to make as a newcomer to Go. No, it doesn’t reflect even the slightest shortcoming in the language. It’s a very simple design: the go statement takes a function and its arguments. The arguments are evaluated in the current goroutine. Once evaluated, a new goroutine is created with the evaluated parameters passed into the function. Yes, that is slightly subtler than just evaluating the whole line in a new goroutine, but if you think about it for one second, you realize that evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.
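A tiny, self-contained illustration of that evaluation order (my own example, not from the article):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	x := 1
	// fmt.Println's argument is evaluated here, in the calling goroutine,
	// before the new goroutine starts, so the spawned goroutine prints 1
	// even though x changes immediately afterwards. The same rule is why
	// go log.Println(http.ListenAndServe(...)) evaluates (and blocks on)
	// ListenAndServe in the current goroutine.
	go fmt.Println(x)
	x = 2
	_ = x
	time.Sleep(100 * time.Millisecond) // crude wait so the goroutine can run
}
```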
Like, I get it, it sucks that you made this mistake when you were working in a language you don’t normally use, but there’s no need for sarcasm or negativity. This is in fact a very “simple” design, and you just made a mistake because even simple things actually need to be learned before you can do them correctly.
In practice, about 99% of uses of the go keyword are in the form go func() {}(). Maybe we should optimize for the more common case?
I did a search of my code repo, and it was ⅔ go func() {}(), so you’re right that it’s the common case, but it’s not the 99% case.
I agree that the article’s tone isn’t helpful. (Also, many of the things that the author finds questionable in Go can also be found in many other languages, so why pick on Go specifically?)
But could you elaborate on this?
evaluating the whole line in a new goroutine would be a race condition nightmare and no one would actually want it to work like that.
IMO this is less surprising than what Go does. The beautiful thing about “the evaluation of the whole expression is deferred” is precisely that you don’t need to remember a more complicated arbitrary rule for deciding which subexpressions are deferred (all of them are!), and you don’t need ugly tricks like wrapping the whole expression in a closure which is then applied to the empty argument list.
Go’s design makes sense in context, though. Go’s authors are culturally C programmers. In idiomatic C code, you don’t nest function calls within a single expression. Instead, you store the results of function calls into temporary variables and only then pass those variables to the next function call. Go’s design doesn’t cause problems if you don’t nest function calls.
At least they mention go vet, so even people like me who don’t know it can arrive at similar conclusions. And they also mention that he is somewhat biased.
But I think they should just calmly state that this is the compiler output and this is the output of go vet, without ceremony like “And yet there are no compiler warnings”.
This also seems unnecessary:
Why we need to move it into a separate package to make that happen, or why the visibility of symbols is tied to the casing of their identifiers… your guess is as good as mine.
Subjectively, this reads as unnecessarily dismissive. There are more instances similar to this, so I get why you are annoyed. It makes their often valid criticism weaker.
I think it comes as a reaction to people readily agreeing that golang is so simple when, in their (biased but true) experience, it is full of little traps.
Somewhat related: what I also dislike is that they use loops for creating the tasks in golang, discuss a resulting problem, and then don’t use loops in Rust - probably to keep the code simple.
All in all, it is a good article though and mostly not ranty. I think we are setting the bar for fairness pretty high. I mean we are talking about a language fan…
This also seems unnecessary: […]
Agree. The frustrating thing here is that there are cases where Rust does something not obvious, the response is “If we look at the docs, we find the rationale: …” but when Go does something that is not obvious, “your guess is as good as mine.” Doesn’t feel like a very generous take.
The author has years of Go experience. He doesn’t want to be generous; he has an axe to grind.
So where are the relevant docs for why
we need to move it into a separate package to make that happen
or
the visibility of symbols is tied to the casing of their identifiers
we need to move it into a separate package to make that happen
This is simply not true. I’m not sure why the author claims it is.
the visibility of symbols is tied to the casing of their identifiers
This is Go fundamental knowledge.
In Go, a name is exported if it begins with a capital letter.
Why func and not fn? Why are declarations var identifier type and not var type identifier? It’s just a design decision, I think.
The information is useful but the tone is unhelpful. The difference in what’s checked/checkable and what’s not is an important difference between these platforms – as is the level of integration of the correctness guarantees with the language definition. Although a static analysis tool for JavaScript could, theoretically, find all the bugs that rustc does, this is not really how things play out. The article demonstrates bugs which go vet cannot find which are precluded by Rust’s language definition – that is real and substantive information.
There is more to Go than just some design decisions that make sense for a particular niche. It has a peculiar, iconoclastic design. There are Go evangelists who, much more strenuously than this author and with much less foundation, criticize JavaScript, Python, Rust, &c, as not really good for anything. The author is too keen to poke fun at the Go design and philosophy; but the examples stand on their own.
I wonder if there’s any mechanism envisioned by the lawmakers allowing us to be sure that the data that “must be deleted” actually has. Apart from assurances from the guilty parties, that is. The cynic in me expects them to say that these data cannot be distinguished from the “lawfully” collected data, and they can’t be compelled to delete all of it.
That would potentially put them in violation of https://gdpr-info.eu/recitals/no-42/ and likely into even more trouble. Mind you, the ruling already finds them in breach of https://gdpr-info.eu/art-30-gdpr/ for insufficient record-keeping.
Actually, more importantly, there is no mechanism to delete this data because there is no consent attribution that captures where the consent came from and all the downstream data that can be attributed to it. I don’t see how these companies can delete this data. Unless they just delete all user accounts (and all associated data) that came into contact with these popups.
That is a strawman. There’s a pretty clear difference between data the user entered themselves and data obtained by the system through tracking, for advertising purposes.
Of course there is an audit process in place.
Any company that works with personally identifiable information is supposed to appoint a Data Protection Officer, whose responsibility is to ensure GDPR compliance and to interface with a data protection authority on behalf of the company. This person is not liable for the GDPR breaches, but IS personally responsible for reporting the organization’s failures to comply with regulations (to the best of their knowledge, of course).
Sure, it’s always possible to cheat the system, but the bigger the company the harder it would be to keep such conspiracy a secret.
Surprisingly, the system works pretty well it seems. Just in 2021 there have been major fines issued to, among others, the usual suspects Amazon, Google and Facebook. Sure those cases will go through the obligatory appeal process but those are (fortunately) rarely successful, since the GDPR regulations at this point are generally well understood.
Sure, it’s always possible to cheat the system, but the bigger the company the harder it would be to keep such conspiracy a secret.
While it’s true, also the bigger the company, the easier it is to have an accidental copy of some data which is not hooked up to the cleanup system. Not even out of malice.
While I expect Google and others to actually try to keep the PII secure and isolated, there are going to be lots of other pieces of data about users which just end up too distributed.
Interestingly enough this is also somewhat addressed in GDPR. Data should only be in places it is actually required for business purposes and can’t just be transferred “just in case” or something. Furthermore, locations and reasons for data processing should be documented by the DPO. Not that that’s bulletproof, but it’s not naive at least.
I believe that companies actually care and try to keep the data distribution under control. But as you say it’s not bulletproof. I’ve seen silly and unexpected chains of events like: object with user data gets default string serialisation used by error reporter as part of message, which saves the info, which gets collated into a separate database for analysis. And that’s one of the more obvious problems.
We’re 3 years into GDPR, no more excuses. With that amount of money and power, any negligence is to be considered malice.
That’s not a GDPR-related excuse. It’s a reality of massive projects. In the same way we know things are not 100% reliable and work around it as needed, storage will have some exceptions where something got cached somewhere that you don’t have in your cleanup procedure.
If an automated train or plane has a critical failure and kills people, the reaction we have is not “well it’s a reality of massive projects”; the manufacturers of the involved systems will have to spend a lot of effort fixing their issues, and fixing their processes so that the same issue does not occur again. These requirements have costs (in particular they increase the cost of producing software greatly), which is commensurate to the values that we decided, as a society, to give to human lives.
GDPR is not a “critical system” in the same sense that human lives are immediately in danger, I’m not trying to say that the exact same approach should be followed. But it’s on the same scale: how much do we value privacy and user protection? Europe has ruled that it values it seriously, and has put laws in place (and enforcement procedures) to ensure that people implementing systems that deal with personal information do it very carefully, in a different way that other sort of data is handled. “We know that things are not 100% reliable” is not an excuse in itself (as it could be in a project with no correctness requirements, or very weak expectations of quality); you have to prove that you took appropriate care, commensurate with the value of the data.
If we’re raising questions, what’s with “boots in seconds”? Don’t all OSes do that?
Edit: not that this doesn’t look interesting, it’s just that that particular boast caught my eye.
There’s a great Wiki Page for FreeBSD that Colin Percival (of tarsnap fame) has been maintaining on improving FreeBSD boot time. In particular, this tells you where the time goes.
A lot of the delays come from things that are added to support new hardware or simply from the size of the code. For example, loading the kernel takes 260ms, which is a significant fraction of the 700ms that Essence takes. Apple does (did?) a trick here where they did a small amount of defragmentation of the filesystem to ensure that the kernel and everything needed for boot were contiguous on the disk and so could be streamed quickly. You can also address it by making the kernel more modular and loading components on demand (e.g. with kernel modules), but that then adds latency later.
Some of the big delays (>1s) came from sleep loops that wait for things to stabilise. If you’re primarily working on your OS in a VM, or on decent hardware, then you don’t need these delays but when you start deploying on cheap commodity hardware then you discover that a lot of devices take longer to initialise than you’d expect. A bunch of these things were added in the old ISA days and so may well be much too long. Some of them are still necessary for big SCSI systems (a big hardware RAID array may take 10s of seconds to become available to the OS).
Once the kernel has loaded, there’s the init system. This is something that launchd, SMF, and systemd are fairly good at. In general, you want something that can build a dynamic dependency graph and launch things as their dependencies are fulfilled but you also need to avoid thundering herds (if you launch all of the services at once then you’ll often suffer more from contention than you’ll gain from parallelism).
On top of that, on *NIX platforms, there’s then the windowing system and DE. Launching X.org is fairly quick these days but things like KDE and GNOME also bundle a load of OS-like functionality. They have their own event framework and process launchers (I think systemd might be subsuming some of this on Linux?) and so have the same problem of starting all of the running programs.
The last bit is something that macOS does very well because they cheat. The window server owns the buffer that contains every rendered window and persists this across reboot. When you log back in, it displays all of your apps’ windows in the same positions that they were, with the same contents. It then starts loading them in the background, sorted by the order in which you try to run them. Your foreground app will be started first and so the system typically has at least a few seconds of looking at that before you focus on anything else and so it can hide the latency there.
All of that said, for a desktop OS, the thing I care about the most is not boot time, it’s reboot time. How long does it take between shutting down and being back in the exact same state in all of my apps that I was in before the reboot? If I need a security update in the kernel or a library that’s linked by everything, then I want to store all state (including window positions and my position within all open documents), apply the update, shut down, restart, reload all of the state, and continue working. Somewhat related, if the system crashes, how long does it take me to resume from my previous state? Most modern macOS apps are constantly saving restore points to disk and so if my Mac crashes then it typically takes under a minute to get back to where I was before the reboot. This means I don’t mind installing security updates and I’m much more tolerant of crashes than on any other system (which isn’t a great incentive for Apple’s CoreOS team).
And essence basically skips all of that cruft? Again, not to be putting the project down, but all that for a few seconds doesn’t seem much, once a week.
I don’t think I reboot my Linux boxes more often, and even my work Windows sometimes reminds me that I must reboot once a week because of company policy.
Maybe if I had an old slow laptop it would matter to me more. Or if I was doing something with low-power devices (but then, I would probably be using something more specialised there, if that was important).
Again. Impressive feat. And good work and I hope they make something out of it (in the long run, I mean). But doesn’t Google also work on Fuchsia, and Apple on macOS? They probably have much more chance to become new desktop leaders. I don’t know, this seems nice but I think their biggest benefit is in what the authors will learn from the project and apply elsewhere.
And essence basically skips all of that cruft? Again, not to be putting the project down, but all that for a few seconds doesn’t seem much, once a week.
It probably benefits from both being small (which it gets for free by being new) and from not having been tested much on the kind of awkward hardware that requires annoying spin loops. Whether they can maintain this is somewhat open but it’s almost certainly easier to design a system for new hardware that boots faster than it is to design a system for early ’90s hardware, refactor it periodically for 30 years, and have it booting quickly.
But doesn’t Google also work on Fuchsia, and Apple on macOS? They probably have much more chance to become new desktop leaders. I don’t know, this seems nice but I think their biggest benefit is in what the authors will learn from the project and apply elsewhere.
I haven’t paid attention to what Fuchsia does for userspace frameworks (other than to notice that Flutter exists). Apple spent a lot of effort on making this kind of thing fast but most of it isn’t really to do with the kernel. Sudden Termination came from iOS but is now part of macOS. At the OS level, apps enter a state where they have no unsaved state and the kernel will kill them (equivalent of kill -9) whenever it wants to free up memory. The WindowServer keeps their window state around so that they can be restored in the background. This mechanism was originally so iOS could kill background apps instead of swapping but it turns out to be generally useful. The OS parts are fairly simple, extending Cocoa so that it’s easy to write apps that respect this rule was a lot more difficult work.
In the demo video, it booted in 0.7s, which, to me, is impressive. Starting applications and everything is very snappy too. The wording of the claim doesn’t do it justice though, I agree with that.
Ideally you should almost never have to reboot an OS, so boot time doesn’t interest me nearly as much as good power management (sleep/wake).
It’s not never, but I basically only reboot my Macs and i(Pad)OS devices for OS updates, which is a handful of times per year. The update itself takes long enough that the reboot-time part of it is irrelevant - I go do something else while the update is running.
I think it’s only really Windows that gets rebooted. I used to run Linux and OpenBSD without reboots for years sometimes, and like you I only reboot MacOS when I accidentally run out of laptop battery or do an OS update, as you say.
I dunno; how many people own Apple devices? I pretty much only reboot my Macs for OS updates, or the rare times I have to install a driver. My iOS devices only reboot for updates or if I accidentally let the battery run all the way down.
I didn’t think this was a controversial statement, honestly. Haven’t Windows and Linux figured out power management by now too?
pretty much only reboot my Macs for OS updates, or the rare times I have to install a driver
That’s not “never”, or are MacOS updates really so far/few between?
I feel like this is one of those things where people are still hung up from the days of slow HDDs and older versions of Windows bloated with all kinds of software on startup.
It depends a bit on the use case. For client devices, I agree, boot time doesn’t matter nearly as much as resume speed and application relaunch speed. For cloud deployments, it can matter a lot. If you’re spinning up a VM instance for each CI job, for example, then anything over a second or two starts to be noticeable in your total CI latency.
Only if it can somehow store the entire working state, including unsaved changes, and restore it on boot. Since that usually involves relaunching a bunch of apps, it takes significantly longer than a simple boot-to-login-screen.
This isn’t theoretical. Don’t you have any devices that sleep/wake reliably and quickly? It’s profoundly better than having to shut down and reboot.
Only if it can somehow store the entire working state, including unsaved changes, and restore it on boot
That’s another interesting piece of the design space. I’ve seen research prototypes on Linux and FreeBSD (I think the Linux version maybe got merged?) that extend the core dump functionality to provide a complete dump of memory and associated kernel state (open file descriptors). Equivalent mechanisms have been around in hypervisors for ages because they’re required for suspend / resume and migration. They’re much easier in a hypervisor because the interfaces for guests have a lot less state: a block device has in-flight transactions, a network device has in-flight packets, and all other state (e.g. TCP/IP protocol state, file offsets) is stored in the guest. For POSIXy systems, various things are increasingly difficult:
If you have this mechanism and you have fast reboot, then you don’t necessarily need OS sleep states. If you also have a sudden termination mechanism then you can use this as fallback for apps that aren’t in sudden-termination state.
Of course, it depends a bit on why people are rebooting. Most of the time I reboot, it’s to install a security update. This is more likely to be in a userspace library than the kernel. As a user, the thing I care most about is how quickly I can restart my applications and have them resume in the same state. If the kernel / windowing system can restart in <1s that’s fine, but if my apps lose all of their state across a restart then it’s annoying. Apple has done a phenomenal amount of work over the last decade to make losing state across app restarts unusual (including work in the frameworks to make it unusual for third-party apps).
All my devices can sleep/wake fine, but I almost never use it. My common apps all auto start on boot and with my SSDs I boot in a second or two (so same as time to come out of sleep honestly, in both cases the slowest part is typing my password).
On my current laptop, it wakes up instantly when I open the lid, I enter the password, and the state is exactly as I left it. (And it’s probably fewer gigabytes written to disk than hibernation, too.)
Re: https://github.com/github/markup/issues/533
I’m the main author of KeenWrite (see screenshots), a type of desktop Markdown editor that supports diagrams. It’s encouraging to see that Mermaid diagrams are being supported in GitHub. There are a few drawbacks on the syntax and implications of using MermaidJS.
First, only browser-based SVG renderers can correctly parse Mermaid diagrams. I’ve tested Apache Batik, svgSalamander, resvg, rsvg-convert, svglib, CairoSVG, ConTeXt, and QtSVG. See issue 2485. This implies that typesetting Mermaid diagrams is not currently possible. In effect, by including Mermaid diagrams, many documents will be restricted to web-based output, excluding the possibility of producing PDF documents based on GitHub markdown documents (for the foreseeable future).
Second, there are numerous text-to-diagram facilities available beyond Mermaid. The server at https://kroki.io/ supports Mermaid, PlantUML, Graphviz, byte fields, and many more. While including MermaidJS is a great step forward, supporting Kroki diagrams would allow a much greater variety. (Most diagrams produced in MermaidJS can also be crafted in Graphviz, albeit with less terse syntax.)
Third, see the CommonMark discussion thread referring to a syntax for diagrams. It’s unfortunate that a standard “namespace” concept was not proposed.
Fourth, KeenWrite integrates Kroki. To do so, it uses a variation on the syntax:
``` diagram-mermaid
```
``` diagram-graphviz
```
``` diagram-plantuml
```
The diagram- prefix tells KeenWrite that the content is a diagram. The prefix is necessary to allow using any diagram supported by a Kroki server without having to hard-code the supported diagram type within KeenWrite. Otherwise, there is no simple way to allow a user to mark up a code block with their own text style that may coincide with an existing diagram type name.
Fifth, if ever someone wants to invent a programming language named Mermaid (see MeLa), then it precludes the possibility of using the following de facto syntax highlighting:
``` mermaid
```
My feature request is to add support for Kroki and the diagram- prefix syntax. That is:
``` diagram-mermaid
```
And deprecate the following syntax:
``` mermaid
```
And, later, introduce the language- prefix for defining code blocks that highlight syntax. That is, further deprecate:
``` java
```
With the following:
``` language-java
```
That would provide a “namespace” of sorts to avoid naming conflicts in the future.
I don’t think moving the existing stuff to language- is necessary, however I agree that diagram-mermaid is a better option – especially if one wants syntax highlighting for the syntax of the Mermaid diagramming language, to describe how to write such diagrams.
First, only browser-based SVG renderers can correctly parse Mermaid diagrams. I’ve tested Apache Batik, svgSalamander, resvg, rsvg-convert, svglib, CairoSVG, ConTeXt, and QtSVG. See issue 2485
Do you mean the output of mermaid.js? Besides that these SVG parsers should be fixed if they are broken and maybe mermaid.js could get a workaround, surely a typesetting system could read the mermaid syntax directly and not the output of a for-web implementation of it?
If you look at the issue, there’s a fairly extensive list of renderers affected. This suggests that the core problem is that mermaid uses some feature(s) which are not widely supported.
Besides that these SVG parsers should be fixed if they are broken
Not sure if they are broken per se. The EchoSVG project aims to support custom properties, which would give it the ability to render Mermaid diagrams. From that thread, you can see supporting SVG diagrams that use custom properties is no small effort. Multiply that effort by all the renderers listed and we’re probably looking at around ten years’ worth of developer hours.
surely a typesetting system could read the mermaid syntax directly and not the output of a for-web implementation of it
Yes. It’s not for free, though. Graphviz generates graphs on par with the complexity of Mermaid graphs and its output can be rendered with all software libraries I tried. IMO, changing Mermaid to avoid custom properties would take far less effort than developing custom property renderers at least eight times over.
IMO, changing Mermaid to avoid custom properties would take far less effort than developing custom property renderers at least eight times over.
Sure, but as I said the ideal would be neither of those, but to just typeset the mermaid syntax directly and not rely on the JS or the SVG at all.
Interesting, what do you mean by this is a better compromise for scripts? I’m not sure I see where this would be much different in that context.
I’m working on a deployment tool https://deployer.org/ and, for example, if you want to use git and clone a repo for the first time (for example from CI), you need to manually log in to the server and run the ssh command to github.com to update known_hosts.
With accept-new this workflow is automated and no manual setup is needed.
I imagine it’ll be better for scripts that issue multiple SSH commands. You can verify the remote end hasn’t changed host keys between the two (or more) invocations of SSH; whereas with no you just accept whatever the host key is, whether it changes or not.
You can’t tell if the host changes between script runs but you can be sure the host hasn’t changed during the current run.
I solve this in CI by putting the host’s fingerprint in a variable and writing that to known_hosts. I would think the odds of a host key changing in between commands of a job would be tiny, and the damage could already be done.
It’s still “trust on first use”, but that first use is when I set up CI and set the variable, not at the start of every job.
I think this is the correct way to do it, I do this as well for CI jobs SSH-ing to longer-lived systems.
If the thing I’m SSHing into is ephemeral, I’ll make it upload its ssh host public keys to an object storage bucket when it boots via its cloud-init or “Userdata” script. That way the CI job can simply look up the appropriate host keys in the object storage bucket.
IMO any sort of system that creates and destroys servers regularly, like virtual machines or VPSes, should make it easy to query or grab the machine’s ssh public keys over something like HTTPS, like my object storage bucket solution.
I guess this is a sort of pet peeve of mine. I was always bugged by the way that Terraform’s remote-exec provisioner turns off host key checking by default, and doesn’t warn the user about that. I told them this is a security issue and they told me to buzz off. Ugh. I know it’s a bit pedantic, but I always want to make sure I have the correct host key before I connect!!! Similar to TLS, the entire security model of the connection can fall apart if the host key is not known to be authentic.
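As a rough sketch of that object-storage approach (the URL, bucket layout, and file path below are all hypothetical), the CI side can be as small as:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// trustHostKey downloads the host's public key line (as published by the
// machine's cloud-init/userdata script) over HTTPS and appends it to
// known_hosts, so the subsequent ssh invocation can verify the host instead
// of trusting on first use.
func trustHostKey(url, knownHostsPath string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("fetching host key: %s", resp.Status)
	}
	keyLine, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	f, err := os.OpenFile(knownHostsPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.Write(append(keyLine, '\n'))
	return err
}

func main() {
	// Placeholder bucket URL; a real setup would point at wherever the
	// machine uploaded its host keys on boot.
	err := trustHostKey("https://example-bucket.s3.amazonaws.com/host-keys/build-vm",
		os.ExpandEnv("$HOME/.ssh/known_hosts"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```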
Unless you’re clearing the known_hosts file (and if so, WTF), I don’t see why there would be a difference between consecutive connections within a script and consecutive connections between script runs.
Jobs/tasks running under CI pipelines often don’t start with a populated known_hosts. Ephemeral containers too. Knowing you’re still talking to the same remote end (or someone with control of the same private key at least) is better than just accepting any remote candidate in that case.
Less “clearing the known_hosts file”, more “starting without a known_hosts file”.
I’m having trouble understanding what the AMD SMU is. From the context I guess it is somewhat like the Intel ME? Though AMD PSP is the direct equivalent for that. I am confused.
Looks like the SMU is more of a power/thermal management controller, according to https://fuse.wikichip.org/news/1177/amds-zen-cpu-complex-cache-and-smu/2/