Deciseconds isn’t a crazy unit of time - it’s also used for timeouts in some Linux bootloaders, and it is a convenient unit for describing quick but still human-perceptible amounts of time.
The real problem isn’t that the git configuration file interpreted integer literals as deciseconds, but that it interpreted them as booleans at all (repeating an ancient poor design decision of C). I think the ideal configuration language for this option would have something like a lightweight tag system: help.autocorrect = true, help.autocorrect = false, help.autocorrect = (timeout 10), or something like that.
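As a rough sketch of that idea in a .gitconfig-like syntax (hypothetical - Git’s config has no tagged values like this):

    [help]
        # a boolean is only ever a boolean...
        autocorrect = true
        # ...and a delay needs an explicit tag, so a bare "1" would be
        # a type error instead of silently meaning 100 ms
        autocorrect = (timeout 10)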
Milliseconds also isn’t crazy - and people are generally used to it. timeout_ms should basically be clear to 90% of people who have ever edited a config file.
Including the unit in the name really should be standard by now!
Though “min” for minutes is confusing without context, and a good reason to use seconds instead IMO.
Yeah, I think it’s wise to stick to prefixes for powers of a thousand in human-readable presentation by default unless there’s a particular reason not to, like engineering notation does. And for a database field, I’d prefer the unadjusted SI unit if possible.
Out of the non-thousand prefixes (centi-, deci-, deca-, hecto-), the most standard uses are probably centimeters and decibels, though each field has its own conventions that may weigh in heavily as well.
No matter how many *seconds it takes, automatically running a different command than requested seems to me like horrible UX.
This.
Honestly, “prompt” is probably what most people would find the most reasonable, rather than a specific amount of time to wait for you to cancel the command.
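If your Git is recent enough to have it (a commenter below mentions getting it added), that is just:

    git config --global help.autocorrect prompt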
Reacting “in 100ms” isn’t impossible, because you have more than 100ms even if the sleep is 100ms. The Formula One drivers are reacting to external events. However, you’re the one that typed the command, so you can start reacting as soon as you hit the y key or even as soon as you notice yourself reaching for it. So by the time you hit return, the impulse to cancel the command can already be well on its way through your brain.
This is a nice example of why bools should be required to be written as “true” and “false”, rather than “1” and “0”. If they were, this bug wouldn’t have happened because the user would have written “true” instead of “1” (from habit), gotten an error, and looked up the docs.
So, the reason why it waits 100ms for David is that at some point he presumably learned about this setting, quite reasonably assumed that it was a boolean, and set it to what Git config also generally considers to be a ‘true’ value in order to enable it:

    [help]
        autocorrect = 1
Is that a reasonable assumption? Generally you use true and false for booleans in the Git config.
Yes, it is quite a reasonable assumption. If he had done what you say “generally” happens, he would obviously have ended up with a different result. Ergo there is no way he did what you think the most “general” action would be, so the alternatives are either what the article suggests is likely, or that he actually meant 0.1s, which just seems less likely and therefore a less reasonable assumption than the one the article went with.
I thought the autocorrect options were lacking and went on a journey to get the prompt setting added (based on someone else’s abandoned patch) a few years ago. It was an interesting experience because I had never collaborated on a patch over a mailing list. But everyone was super helpful and it was really cool to see the setting get added! I wrote about it if you are curious about more of the details: https://azeemba.com/posts/contributing-to-git.html
I’m thinking about the type of scenario where I know I have mistyped the command, e.g. because I accidentally hit return prematurely, or hit return when I was trying to backspace away a typo. In those situations I reflexively follow return with an immediate ctrl-C, and might be able to get in before the 100 ms timeout. So it’s not entirely useless!
Yes, they probably wanted to use deciseconds as they give more flexibility, allowing them to express quantities for which whole seconds would not be fine-grained enough.
If using a “finer-grained than seconds” time unit, milliseconds or nanoseconds are the common choices. Note that DHH himself in the quoted tweet says “100ms”, not “1 decisecond”. If you noticed that a feature configured with a 1 was waiting for 1ms, it’d be easier to determine the causal link.
The list of possible options is missing one that I’ve had for as long as I can remember (and I guess I must have pulled it from some documentation): help.autocorrect = -1. That, I think, runs the corrected command immediately.
It doesn’t look to be documented, but logically the only way to interpret “wait -100 milliseconds” is to wait for 0 milliseconds and then run the command :)
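If that reading is right, the two behaviors side by side would look something like this in .gitconfig (values in deciseconds; the negative form is the undocumented one described above):

    [help]
        # wait 5 deciseconds (500 ms) before running the correction:
        autocorrect = 5
        # or "wait -1 decisecond", i.e. run immediately:
        autocorrect = -1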