This reminds me of how you can have a chess engine analyze your game after the fact to go “this was a good move”, “this was a mistake”, and so on.
Of course, the big difference is that the chess engine is guaranteed 100% to be better than you, which is not (yet) true for AI writing software.
I used to live in an apartment with no ground (and where the landlord apparently thought this was fine, despite it being a code violation). I wish I’d seen this story then, so I could tell her “look, do you want to be liable for damaged hardware?”
I’m going to stand up for Twitter threads here. I think they’re genuinely one of the most interesting things about Twitter as a writing medium, and it’s worth talking about what makes them so useful.
I write a lot of Twitter threads. I kept adding to my longest one for just over a year! https://twitter.com/simonw/status/1077737871602110466
That thread is a particularly interesting example, I think. It’s about the movie “Into the Spider-Verse”. It turns out many of the artists who worked on that movie use Twitter, and were sharing little snippets of behind-the-scenes reference material and storyboards and suchlike - so I ran a thread that tied those things together.
As the critical and fan reception to that movie increased (and the Oscar campaign got underway) more and more interesting material surfaced. I got to document that in a thread, for an investment of probably less than 10 minutes a day over several months.
I don’t know of any other publishing medium that would support that ROI in terms of effort to value of content produced.
I did the same thing for the Mitchells vs the Machines (many of the same artists) and the director of the movie interacted positively with the thread a few times, which made me very happy! https://twitter.com/simonw/status/1487673496977113088
Beyond multi-month collation threads though, I really like the writing style that threads encourage. They make people break their writing up into 280 character chunks, which I think makes most people better writers. They make it easy to embed images, and link to other tweets. I think the result is often a much more consumable piece than a regular blog post.
It’s a powerful engagement hack too: Tweet once and there’s a good chance many of your followers might miss it. Run a thread over the course of a day and each time you add to it there’s another chance for people to see it.
I HATE that Twitter makes people log in to view them though. That’s the one thing that most discourages me from using them (I write a lot on my own blog too).
Thanks for this feedback!
This is a very interesting example indeed. You’re basically using Twitter as a Pinterest board :) The examples I pointed to, though, use threads because users want to write big chunks of text but are constrained by the 280-character limit. My point was that if you’re going to write a long post, it’s probably better to write it somewhere that allows it, and maybe link to it in a tweet so that people who don’t use RSS can still know you’ve posted it.
That’s because all of the content you linked is on Twitter. If you had to take screenshots or select the text you wanted to share from other platforms, then add a link to the original article for reference, etc. it would take you much longer than this. (I assume. I don’t have a Twitter account and am not familiar with the intricacies of this platform)
I’m afraid it’s the trend these days, and the worst offender by far is Instagram: without an account, you cannot access anything. I’m honestly surprised YouTube doesn’t require a Google Account to watch videos at this point, but maybe their business model of injecting ads inside videos has proved good enough that they don’t need to?
And we all are grateful for that! Your blog is fantastic!
Thought: if a user doesn’t interact, YouTube still has their primary content. Whereas for Twitter, the interactions are the content.
I use a personal Discord server for that. I couldn’t possibly relate /s
I feel that the older 140 character limit was better at teaching people to write concisely and clearly, but also so painfully austere that it was not a net win for communication.
I seem to remember seeing a graph they did when they were first rolling out 280 that showed that people would very regularly hit 140 or close to it, but hitting 280 was much rarer.
I remember staying within the 140 for a long time after they rolled out the 280. No idea why other than perhaps just “I will not use 280!”
There’s always Thread Reader though, right? https://threadreaderapp.com/thread/1077737871602110466.html
Which helpfully is !thread on DuckDuckGo too.
I would recommend letting your opponent pick your salt. Otherwise (especially if you do actually use md5, though I suspect it’s just a convenient example) you can just generate a bunch of plays that have the same resulting hash ahead of time, and use them when convenient.
like carl said, this allows your opponent to compute all possible hashes for your own play ahead of time. preimage attacks are definitely a problem that was overlooked in the current revision – a future revision will address this point.
What if your opponent gives you a salt and then you add an arbitrary number that maps onto rock/paper/scissors mod 3? Then the opponent can’t test all values.
Of course that is overcomplicated, since the mod-3 trick means you don’t need a salt at all: you can just commit to a hash immediately.
Maybe you and your opponent both choose part of the salt and it gets combined?
If the opponent picks the salt, can’t they just see what the hashes are for salt+“rock” salt+“paper” and salt+“scissors”? I think you just have to use a hashing algo without known collisions.
just use two salt values, one from each player, and concatenate them along with the play/move.
also, it seems redundant for the second player to also go through the hashing rigmarole. they can reveal their move immediately, once the first player has locked in their secret answer. so a game would go like this:
<Alice> salt1=ddda0a56c5b73a4c87956d9bdb7df94e
*bob generates salt2 and computes md5(salt1|salt2|move)*
<Bob> hash=4510abdd3eb615ce12df0ec848247385
<Alice> paper
<Bob> ddda0a56c5b73a4c87956d9bdb7df94e-747de43a6b2db08519ec0e56f04f4ead-rock
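Here is a small sketch of that commit-and-reveal flow as I understand it from the thread (my own illustration; it uses SHA-256 via the sha2 crate rather than MD5, and the salts and moves are made up):

```rust
// Sketch only: sha2 is an external crate (sha2 = "0.10"); values are invented.
use sha2::{Digest, Sha256};

// Both salts go into the digest, so neither player can precompute the three
// possible commitments for rock/paper/scissors ahead of time.
fn commit(salt_alice: &str, salt_bob: &str, play: &str) -> String {
    let digest = Sha256::digest(format!("{salt_alice}|{salt_bob}|{play}").as_bytes());
    digest.iter().map(|b| format!("{b:02x}")).collect()
}

fn main() {
    let (salt_alice, salt_bob) = ("ddda0a56", "747de43a"); // Alice's salt is public, Bob's stays secret
    let commitment = commit(salt_alice, salt_bob, "rock"); // Bob publishes only this
    // ... Alice announces "paper" in the clear ...
    // Bob then reveals salt_bob and "rock"; Alice recomputes and checks:
    assert_eq!(commit(salt_alice, salt_bob, "rock"), commitment);
    println!("commitment verified: {commitment}");
}
```

Bob’s hash commits him to his move before Alice announces hers, and because both salts feed the hash, neither side can usefully precompute anything.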
Right, yes, fair: you probably want something closer to Diffie-Hellman, where each party contributes part of the input being hashed.
If you have a secure hash function, this should be impossible.
The more relevant weakness here is the attack carlmjohnson mentioned.
If you are trying to match a particular hash and completely control the source, you can make a lookup table from hash to source, and efficiently search it for something that will produce the hash you want. They’re called rainbow tables. A salt removes this attack because you don’t get to control the entire input, so you basically would need one copy of the table for each possible salt.
Link isn’t loading for me.
It seems to load fine for me now, are you still seeing an issue? If so, would you mind sharing what exactly it says is wrong, 404 or the like?
I can see a spike in 404s in my monitoring a few minutes ago, but I’m not sure why, as this link has been live since yesterday.
Hm, loads fine for me now. The connection was just timing out.
I had been running the site on a fly.io instance in a single region; I moved it to Netlify to avoid these instability issues (which other people mentioned as well).
My guess is that in the source code it’s implemented in HTTP status code order (since that’s a fairly reasonable choice), and the autogenerated documentation doesn’t sort the method names. I think this is more a bug with the documentation generator.
It looks like your guess is correct.
https://hapi.dev/module/boom/api/?v=10.0.0#http-4xx-errors
After re-reading the blog post, it seems the author is aware of that as well, although it’s not entirely clear what the author had in mind at first.
It’s interesting to me that preserving the seed and adding/removing tags will still generate a similar image. I would have expected that something would cause the RNG to “diverge” at some point (calling .next() a different number of times).
I guess it’d make sense if the seed is only used to generate the initial noise image that diffusion iterates on, and from there the process is entirely deterministic. Maybe that’s how it works.
Yes. Only the initial noise is random, the sampling / denoising process is entirely deterministic. (I ported Stable Diffusion to Swift: https://github.com/liuliu/swift-diffusion).
Wow. This is such a small comment with huge implications. Given the new Apple Silicon architecture, this is going to be really impressive once you get to packaging this up into an app. I haven’t built it myself (yet), but I’m thoroughly impressed that you ported it!
Thanks! This is not that big of a deal ATM. Some people already ported the model using PyTorch -> CoreML conversion tools to potentially run in an app. However, I do believe my approach would be better for memory usage as well as for new features that require training (Textual Inversion / Dreambooth).
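A toy sketch of the structure being described (purely illustrative, not code from swift-diffusion or any real sampler): the seed is consumed once to produce the starting noise, and the denoising loop never touches the RNG again, which is why the same seed with slightly different conditioning still lands on a similar image.

```rust
// Toy illustration only: a tiny LCG stands in for the seeded RNG and
// `denoise_step` stands in for the (deterministic) sampler.
fn noise_from_seed(seed: u64, len: usize) -> Vec<f64> {
    let mut state = seed;
    (0..len)
        .map(|_| {
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            (state >> 11) as f64 / (1u64 << 53) as f64
        })
        .collect()
}

// Same latent + same conditioning => same output; no randomness in here.
fn denoise_step(latent: &[f64], guidance: f64) -> Vec<f64> {
    latent.iter().map(|x| x * (1.0 - 0.05 * guidance)).collect()
}

fn main() {
    let mut latent = noise_from_seed(42, 4); // the seed's only job ends here
    for _ in 0..20 {
        latent = denoise_step(&latent, 0.8); // no RNG calls inside the loop
    }
    println!("{latent:?}");
}
```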
This might be a viable approach if you’re mainly doing trigonometry functions. But if you’re mixing that with calculus, I think the costs will outweigh the benefit.
Reminds me a bit of Norman Wildberger’s trigonometry.
I think if you are strong enough at math to be able to handle the wide-ranging consequences of reimagining the foundations, you probably don’t need the benefit, and if you aren’t, then you probably don’t want to put yourself in a position where there are only one or two books you can use. These iconoclasts are right that you can reconceptualize everything if you want to put in the work. But the proselytization of it I question.
You’re just using a function mysin instead of sin, where mysin(t) = sin(2πt). There won’t be any problems with it.
The derivative of sin(t) is cos(t). The derivative of mysin(t) is not mycos(t), it’s 2π mycos(t).
Do such derivatives actually come up in game engine code? How often?
Remember that Casey Muratori was talking specifically about code, most notably game engines. It’s not about reimagining all of maths. It’s about reimagining a little part of game engine code. Something that some libraries have already done, since apparently half turns are already a thing in some trigonometric libraries.
That’s why fernplus specifically said if you’re mixing it with calculus.
I guess one place where it might come up is small-angle approximations: when measured in radians, sin(x) is about x, and cos(x) is about 1-x^2/2.
Ah, my bad.
I’ve seen a comment on HN explaining that using radians is really helpful for symbolic manipulation, most notably because it makes sin() and cos() derivatives of each other (just like the exponential is its own derivative). That same comment noted, however, that it didn’t help one bit with most numerical applications implemented in computer programs.
It comes up anywhere that dot products are used, because dot products can be interpreted as cosines. First and second derivatives are used when tracing/casting rays, in particular.
Ah, and when you derive radian-based cosines you avoid multiplying by a constant there. Makes sense.
For anyone else who struggled to confirm that in their heads, Wolfram Alpha agrees that d/dt sin(2πt) = 2π cos(2πt), which is equivalent to 2π mycos(t).
The chain rule:
(f∘g)′(x) = g′(x)f′(g(x))
Substituting:
f(x) = sin(x)
g(x) = 2πx
mysin = f∘g
Gives:
mysin′(x) = 2πcos(2πx)
Hope this helps.
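And a quick numerical check of the same factor, for anyone who prefers finite differences to symbol pushing (my own snippet, not from the thread):

```rust
use std::f64::consts::TAU; // TAU = 2π

fn mysin(t: f64) -> f64 { (TAU * t).sin() }
fn mycos(t: f64) -> f64 { (TAU * t).cos() }

fn main() {
    let (t, h) = (0.123_f64, 1e-6);
    let numeric = (mysin(t + h) - mysin(t - h)) / (2.0 * h); // central difference
    let analytic = TAU * mycos(t);                           // chain-rule answer
    println!("numeric = {numeric:.6}, analytic = {analytic:.6}");
    assert!((numeric - analytic).abs() < 1e-4);
}
```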
And if you’re a game designer, trig functions are almost certainly most of what you’re actually doing, because that’s what you need to do to calculate what polygons appear where.
Finishing touches on an idle/incremental game, and then working on my next project. The eglot Emacs LSP library looks cleaner than LSP.el, but it only supports one server per file. So I’m going to try to write a “server” that basically muxes a bunch of other servers together.
Betteridge’s law of headlines strikes again.
Not really, Betteridge’s Law is better applied to headlines like “Will there ever be a better VCS than Git?”
By assuming the answer to the headline in question is the default “No”, you’re basically assuming Git will never be surpassed.
That makes me sad. :-(
Honestly I’m of the opinion that git’s underlying data model is actually pretty solid; it’s just the user interface that’s dogshit. Luckily that’s the easiest part to replace, and it doesn’t have any of the unfortunate network effect problems of changing systems altogether.
I’ve been using magit for a decade and a half; if magit (or any other alternate git frontends) had never existed, I would have dumped git ages ago, but … you don’t have to use the awful parts?
For what it’s worth, I do disagree, but not in a way relevant to this article. If we’re going to discuss Git’s data model, I’d love to discuss its inability to meaningfully track rebased/edited commits, the fact that heads are not version tracked in any meaningful capacity (yeah, you’ve got the reflog locally, but that’s it), that the data formats were standardized at once too early and too late (meaning that Git’s still struggling to improve its performance on the one hand, and that tools that work with Git have to constantly handle “invalid” repositories on the other), etc. But I absolutely, unquestionably agree that Git’s UI is the first 90% of the problem with Git—and I even agree that magit fixes a lot of those issues.
The lack of ability to explicitly store file moves is also frustrating to me.
Don’t forget that fixing capitalization errors with file names is a huge PITA on Mac.
I’ve come to the conclusion that there’s something wrong with the data model in the sense that any practical use of Git with a team requires linearization of commit history to keep what’s changing when straight. I think a better data model would be able to keep track of the history branches and rebases. A squash or rebase should include some metadata that lets you get back the state before the rebase. In theory, you could just do a merge, but no one does that at scale because they make it too messy to keep track of what changed when.
I don’t think that’s a data model problem. It’s a human problem. Git can store a branching history just fine. It’s just much easier for people to read a linearized list of changes and operate on diffs on a single axis.
Kind of semantic debate whether the problem is the data model per se or not, but the thing I want Git to do—show me a linear rebased history by default but have the ability to also show me the pre-flattened history and the branch names(!) involved—can’t be done by using Git as it is. In theory you could build what I want using Git as the engine and a new UI layer on top, but it wouldn’t be interoperable with other people’s use of Git.
It already has a distinction between git log, git log --graph and git log --decorate (if you don’t delete branches that you care about seeing). And yeah, you can add other UIs on top.
BTW: I never ever want my branch names immortalized in the history. I saw Mercurial do this, and that was the last time I ever used it. IMHO people confuse having a record of changes and the ability to roll them back precisely with indiscriminately recording how the sausage has been made. These are close, but not the same.
git merge --no-ff (imo the only correct merge for more than a single commit) does use the branch name, but the message is editable if your branch had a useless name.
None of those show squashes/rebases.
They’re not supposed to! Squashing and amending are important tools for cleaning up unwanted history. This is a very important ability, because it allows committing often, even before each change is final, and then fixing it up into readable changes rather than “wip”, “wip”, “oops, typo”, “final”, “final 2”.
What I’m saying is, I want Git for Git. I want the ability to get back history that Git gives me for files, for Git itself. Git instead lets you either have one messy history (with a bunch of octopus merges) or one clean history (with rebase/linearization). But I want a clean history that I can see the history of and find out about octopuses (octopi?) behind it.
No. The user interface is one of the best parts of Git, in that it reflects the internals quite transparently. The fundamental storage doesn’t model how people work: Git reasons entirely in terms of commits/snapshots, yet any view of these is 100% of the time presented as diffs.
Git will never allow you to cherry-pick meaningfully, and you’ll always need dirty hacks like rerere to re-solve already-solved conflicts. Not because of porcelain (that would have been solved ten years ago), but because snapshots aren’t the right model for that particular problem.
How many people do all their filesystem work with CLI tools these days? Why should we do it for a content-addressable filesystem with a builtin VCS?
Never heard anyone complain that file managers abstract mv as “rename” either; why can’t git GUIs do the same in peace?
At least one. But I also prefer cables on my headphones.
Oh thank goodness, There’s two of us. I’m not alone!
Oh, fun.
Does anyone know what the security issue that they mention is?
Missed opportunity to call it coma, since system tools can’t be six characters long.
Edit: this thing is really(!) cool, makes me want to find use for it.
Wait, what?
No clue. Maybe they are referring to the super old 8 character limitation on FAT? https://en.wikipedia.org/wiki/8.3_filename
https://unix.stackexchange.com/a/214801
Not into category theory and crossword puzzles?
… oh, like co-make!
Ding ding
Hm, it seems to me that an assertion discussion which doesn’t touch on the ideas from https://www.sqlite.org/assert.html is misleadingly incomplete.
In large, non-batch programs which change over time, some assertions are bound to fire in production after subtle refactors. With today’s typical assertion machinery, this leads to people either not using assertions at all out of caution, or disabling them for prod builds (which in turn means that asserts are what the programmer wishes to be true, not something which actually is true empirically).
The SQLite idea that you want assertions that abort in testing, but recover via short-circuiting (and logging, if appropriate) in production, deserves to be better known, and better enshrined in standard libraries.
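As a rough sketch of what that could look like outside SQLite (a hypothetical macro, not from any standard library): the debug/test build aborts loudly, while the release build turns the same check into an ordinary short-circuit.

```rust
// Hypothetical never!() in the SQLite style: aborts in debug/test builds,
// evaluates to the condition in release builds so callers can bail out
// (and log) instead of crashing.
macro_rules! never {
    ($cond:expr) => {{
        let violated = $cond;
        debug_assert!(!violated, "never!({}) was violated", stringify!($cond));
        violated
    }};
}

fn lookup(table: &[i32], index: usize) -> Option<i32> {
    if never!(index >= table.len()) {
        return None; // the untested-by-definition bail-out path: keep it trivial
    }
    Some(table[index])
}

fn main() {
    assert_eq!(lookup(&[1, 2, 3], 1), Some(2));
    // lookup(&[1, 2, 3], 9) would abort a debug build and return None in release.
}
```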
The idea of using testcase(cond) for test coverage is great.
But I’m not sure about the utility of short-circuiting if never(cond). If your assumption that something never happens is violated, that must be a serious issue. It smells like a recipe for having completely untested “error handling” paths that will cause cascading damage if they ever run.
Not necessarily. Big programs usually have a boatload of features, the majority of those are not essential/critical for operation, and there’s usually some kind of a good recovery boundary anyway. Every large reliable system is Erlang in the limit :-)
It is true that the bail-out paths would be untested by definition, so you always want them to be essentially return None; or some such.
I think the under-articulated point is that “cascading damage” is often not how software works: you don’t have a giant, intricate clockwork where every detail depends exactly on every other detail; we are simply not good enough to make those kinds of architectures work. The reality is usually more like the Linux kernel, which has a relatively narrow core and a massive amount of drivers of varying code quality.
Asserts are useful in situations where the assert failing means the program as a whole is deeply, irrecoverably messed up, and there’s nothing you can do but die gracefully.
Failures in non-core components should use exceptions/optional types/whatever your language mechanism for recoverable failure is, and the core system should be able to reset those components to a known good state even from an “impossible” bad one.
Is this an accurate summary of your position?
I think this makes sense, but it doesn’t really address library code. A data structure library can’t possibly know whether or not it’s core to the program it winds up running in; should it use asserts or recoverable failure?
Yeah, that’s a good summary! A couple more details:
asserts are also ok if the reasoning for them is purely local (“developer has a proof” from the SQLite docs)
sometimes (Erlang) an assert is a perfectly fine recoverable failure
For libraries, the situation is trickier. If you do data structures, you have a chance to exhaustively test them (property-based testing/fuzzing), and usually data structures are small, so this case probably falls into the “purely local assert” category. But this also is an easy case.
A case which helped me to conceptualize the library issue recently is the SVG rendering library. It could have a simple interface like this:
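The original snippet isn’t preserved here, so the following is only my guess at the kind of interface meant: a single entry point that can fail with an error value but never panics.

```rust
// My guess at the shape of such an API (names invented): errors are values,
// and no input should be able to panic the renderer.
pub struct Pixmap {
    pub width: u32,
    pub height: u32,
    pub rgba: Vec<u8>,
}

#[derive(Debug)]
pub enum RenderError {
    MalformedSvg,
    UnsupportedFeature,
}

pub fn render(svg_text: &str, width: u32, height: u32) -> Result<Pixmap, RenderError> {
    // ... parse and rasterize; every internal "can't happen" returns an error
    // instead of panicking ...
    let _ = svg_text;
    Ok(Pixmap { width, height, rgba: vec![0; (width as usize) * (height as usize) * 4] })
}

fn main() {
    match render("<svg/>", 64, 64) {
        Ok(img) => println!("rendered {}x{} pixmap", img.width, img.height),
        Err(e) => eprintln!("bad svg: {e:?}"),
    }
}
```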
But internally, rendering SVG I would imagine is quite hard and fiddly and is likely to have some latent bugs somewhere. But also, as a user of this library, I obviously don’t want some bogus SVG to bring my whole process down. So I think we do want to ensure (ideally, statically) that this library doesn’t have panicking paths anywhere.
LLVM has suffered from this a lot over the years. LLVM is intended to be usable as a library, but most of the consumers are programs that take a single trusted input, produce output, and then exit. In a lot of places, LLVM uses asserts rather than reporting errors for invalid inputs. This is absolutely fine for, say, clang: if clang generates invalid LLVM IR (or uses the LLVM support APIs incorrectly) then there’s a clang bug and providing a stack trace and asking users to file a bug report is probably the simplest thing to do. But then you try using LLVM somewhere like a WebGL stack and now you’re forced to accept input that is written by malicious actors specifically to try to exploit bugs in the compiler and if you crash then you may take down the entire browser renderer process with you. If you build without asserts, then the code doesn’t have proper error handling and so the attacker can get it into an invalid state. There’s been a lot of work over the years to try to introduce better error handling.
I was hoping to see one that was easily extensible in Rust, both interactively, and via plugins and config files. But I guess that’s a big ask for a language like Rust.
For me, that’s the killer feature that keeps me on Emacs.
Yeah, I think doing interactive Rust would be hard. If I was going to make something as customizable as emacs, I’d embed a scripting language. Maybe JS, maybe Lua.
I don’t really know it well enough to speak to it. I’m open to running through a tutorial, but I just never worked at a place where its use was widespread enough to justify sitting down and learning it. If there’s a good tutorial though, hit me up on Twitter and I’m glad to run through it.
No tutorial, but I can tell you my experience.
With a few lines of configuration to enable rust-mode and eglot (the Emacs client that speaks to rust-analyzer), I have a very comfortable editing experience. I have the usual syntax highlighting, auto-indentation (plus cargo fmt on save if you want), jump-to-def, find-reference, etc. There’s limited refactoring support with a rename functionality that I use maybe once a week. The only thing that’s annoying, but I’m not sure it’s an Emacs thing, is that switching from one branch to another causes rustc to recompile a bunch of things and the computer becomes slow for a while, with all the CPUs maxed out.
I think LSP has done wonders for Emacs, personally. Doing language analysis in the editor was a bad idea. I think tree-sitter is the logical next step of this: the editor should know how to display highlighted text, but it shouldn’t have to know the highlighting rules.
This is the most vague and ‘vibe-based’ of them all, and is the least important. But I like it when software has a name that has ‘kawaii’ in the acronym or is named after a fictional character or has a name that’s just plain fun. Obsidian also falls in this category; even though I can’t think of how the name relates to note-taking, it’s just a nice name for software.
If I made high quality public tools I would absolutely give them the most embarrassing names possible. Vicious Pumpkin. Daktaklakpak. “Tau is the correct circle constant”. ZZZ, but the first Z must be the American Z and the other two must be the British Zed. This pronunciation will be in the official standard.
I really think that I like K-9 Mail in part because of its name and logo. 😊 Will be sad to see it switch to Thunderbird.
I mean, when it comes to “embarrassing” you can’t really outdo Coq.
Coq is proudly Gallic.
Likewise, in France the e-mail program Pine is proudly phallic.
There’s a reason the French refer to the byte as an ‘octet’, because ‘bite’ is basically ‘cock’…
That’s not embarrassing, that’s shameful.
Back when I was hopping between different projects a lot, Lando was something I used every day. I don’t know what its governance structure is and the keyboard interaction point isn’t really applicable (it’s container configuration software) but for me it checks the rest of the boxes. It made the common cases for reproducible dev environments ridiculously easy, and it naturally supports going from simple to arbitrarily complex environments as your project evolves. Plus the name rocks.
I feel like another important aspect, related to but distinct from good docs, is a healthy, supportive community. Lando’s lead author, Mike, is friendly, hilarious, and very committed to having fun with his software. The community around it reflects that.
Yes, absolutely. If the author of a piece of software is a jerk I’m much less enthused about using it. It won’t necessarily stop me (one of the core parts of my setup environment is written by someone I strongly dislike), but it’ll grate on me.
Yep. There’s a programming language (that will go unnamed here) I refuse to use despite its often-impressive technical merits due to (1) its instability and quirks often requiring me to head to IRC, the issue tracker, whatever, and (2) rarely seeing positive interactions from the maintainers of that language in such encounters (ones I’m involved in, or ones others are involved in). It’s a real thing.
Not being a jerk is free. Maybe sometimes challenging (don’t we all want to go bananas on someone from time to time), but free. My time, however, is not, and my time is worth too much to deal with toxic communities.
Now do linear logic
Classical linear logic is not much different. We’ll skip the “exponential modalities”; this means that we must use every one of our premises exactly once. The axioms are different, too, because linear logic has two different kinds each of conjunction and disjunction (four connectives total.) The article’s trick for changing implication into disjunction would still work, but it would always give one kind of disjunction, called the “multiplicative” disjunction. (The other kind is called “additive”.)
We won’t have Boolean algebra, but C*-algebra. It’s no longer about what is true. I don’t know if you’re familiar with the vending-machine interpretation of linear logic, which might make it easier to understand exactly what is being computed. A premise is not a formula for a hypothetical truth, but a recipe for a hypothetical physical resource which cannot be copied or dropped: a molecule, a can of soda, a shipping container, a region of memory, ingredients in a cooking recipe, etc. The connectives could be interpreted as follows:
Additive conjunction (“with”): you have two recipes for two resources, and just enough ingredients to use exactly one recipe; you choose which one to use
Multiplicative conjunction (“and”): you have two resources and must use both of them
Additive disjunction (“plus”): you have either of two resources, and must have two recipes (one for each case) before you can try either case; you don’t choose which one to use
Multiplicative disjunction (“par”): you have either of two resources, and you must prove that one case is impossible before you can obtain the resource directly from the other case
That last one is weird, but it’s connected to implication: having a resource is like the negation of needing a resource, so an alternative way to look at multiplicative disjunction is that you have a recipe which needs a resource, and using the recipe will allow you to have a resource; you have a recipe which transforms one resource into another resource. This is “linear implication”.
The disjunctive cases remind me a bit of algebraic datatypes. Additive disjunction corresponds to destructuring a datatype by handling all possible constructors; multiplicative disjunction corresponds to pulling out the parameters to a constructor by proving that that’s the constructor for the value you actually have.
Swift elegantly disambiguates the two at the syntax level, by requiring a leading . for enum variants. Rust just hacks this at the name-resolution layer, by defaulting to a new binding unless there’s a matching constant in the scope.
I really like Swift’s solution. Can’t Rust just adopt it in a future edition?
Would be a massively breaking change. I think Rust is committed to not doing those.
My understanding is that adding a new feature is not a breaking change. I think .id is currently invalid syntax, so making it valid wouldn’t be breaking.
Albeit one that could probably be automatically applied to legacy code with a linter-like tool.
That’s not enough. There’s still churn and outdated books/tutorials/QA.
RFC 2005 (match ergonomics) had a similarly large churn problem to my proposed enum variant pattern syntax, but Rust still did it.
Match ergonomics was subtle. It left the old syntax valid, so there were no source code changes required, and it didn’t add any new tokens/sigils/keywords, so anyone who already knew the hard way could figure out the easy way, or just ignore it and keep writing “unergonomic” matches.
To clarify, the .id syntax addition also would leave the old syntax valid.
I’ve definitely always been jealous of Swift’s enum syntax.
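To make the ambiguity concrete, here is a tiny example of my own (not from the thread) of why the prefix matters in Rust patterns today:

```rust
enum Direction { Up, Down }

fn describe(d: Direction) -> &'static str {
    // With the Direction:: prefix these arms match the variants. Drop the
    // prefix (and any `use Direction::*`) and Up/Down silently become fresh
    // bindings that match anything, which is the footgun a mandatory leading
    // `.` (as in Swift) would rule out.
    match d {
        Direction::Up => "up",
        Direction::Down => "down",
    }
}

fn main() {
    assert_eq!(describe(Direction::Down), "down");
    println!("{}", describe(Direction::Up));
}
```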
Huh, I hadn’t thought about units as quotients. That’s neat.
My personal thoughts on units: any quantity is a combination of both a number representing the magnitude and a representation of the ‘exponent’ of each base unit, called their dimensions. So for example, 2.5 kg*m^2/s^2 has a magnitude of 2.5 and dimensions of [mass: 1, length: 2, time: -2]. “Raw” numbers just have all their dimensions set to 0.
You can apply a function f to an arbitrary quantity with dimensions only if it’s homogeneous: there’s some k such that for all numbers a, f(ax) = a^k*f(x). Otherwise you’re not scale invariant: sin(1 m) vs sin(100 cm) vs sin(1 cm), that sort of thing.
This is why you can multiply unitary quantities but not add them: multiplication is linear, but addition isn’t (since 2x + y != 2(x + y)).
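A minimal sketch of that magnitude-plus-exponents idea (the names and representation are mine, not from any particular library): multiplication always succeeds and just adds the exponents, while addition is only defined when the exponent vectors already match.

```rust
// Magnitude plus per-base-unit exponents; only mass/length/time here to keep
// the sketch short.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Dim { mass: i8, length: i8, time: i8 }

#[derive(Clone, Copy, Debug)]
struct Quantity { magnitude: f64, dim: Dim }

impl Quantity {
    // Multiplication always works: magnitudes multiply, exponents add.
    fn mul(self, other: Quantity) -> Quantity {
        Quantity {
            magnitude: self.magnitude * other.magnitude,
            dim: Dim {
                mass: self.dim.mass + other.dim.mass,
                length: self.dim.length + other.dim.length,
                time: self.dim.time + other.dim.time,
            },
        }
    }

    // Addition is only defined when the dimensions already match.
    fn add(self, other: Quantity) -> Option<Quantity> {
        (self.dim == other.dim).then(|| Quantity {
            magnitude: self.magnitude + other.magnitude,
            dim: self.dim,
        })
    }
}

fn main() {
    let force = Quantity { magnitude: 9.8, dim: Dim { mass: 1, length: 1, time: -2 } };
    let distance = Quantity { magnitude: 2.0, dim: Dim { mass: 0, length: 1, time: 0 } };
    let energy = force.mul(distance); // kg*m^2/s^2, i.e. [mass: 1, length: 2, time: -2]
    println!("{energy:?}");
    assert!(energy.add(distance).is_none()); // can't add joules to metres
}
```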
I believe this is how F# does it. The Rust crate uom also does this with its Dimension trait.