1. 92
  1. 23

    Instead, please put the unit in the function name.

    (sleep-seconds 300)

    1. 5

      Too simple. People need to complicate things and add yet another bit of complexity, such as purpose-created types, named parameters, and whatnot that ultimately needs to be looked up in the documentation.

      Funny that the obvious solution isn’t even mentioned, because people don’t even consider it a design flaw in their almighty favorite languages.

      1. 2

        I hate types

        1. 5

          And yet types exist whether they are declared or not, even in Lisp-y languages! Types are a characteristic of the universe we inhabit. I wouldn’t take my car to a hair salon to have the oil changed, and I know not to do this without actually trying it because the types don’t work out.

          1. 3

            Right. I’m working on a new essay on this but in short, I tried to design a dynamically typed lisp with maximal applicability of procedures—as in, “reverse” should be able to reverse a string, a vector, or a list, as opposed to having “string-reverse” or “vector-reverse” defined separately. I found that in order to implement that, I needed an isa-hierarchy of types (or, rather, derive an isa-graph from a tag cluster, kinda duck typing style). For example, strings, vectors, and lists are all “sequences”. So, in that sense, types are great.

            In order to not have to deal with types (as much) on the app level, I really do need to do them carefully (at the primitives level). So types aren’t useless. I still hate them though and languages that make it a point to do everything through types. Dependent types are the worst.

            I don’t wanna type annotate and I don’t wanna limit procedures to only compile on certain types. I don’t want the compiler to prevent you from taking your car to the hair salon.

            1. 3

              APL achieves something very similar to what you are trying to do. Here’s a talk making the point (I think) that APL is so expressive, types would just get in the way.

              Does APL Need a Type System? by Aaron W Hsu https://www.youtube.com/watch?v=z8MVKianh54

              NB: I’m in general very type-friendly (as is the speaker it seems), but that just makes this perspective all the more interesting to me.

              1. 1

                I love APL ♥

              2. 3

                I don’t want the compiler to prevent you from taking your car to the hair salon.

                But why wouldn’t you want that? The hair salon will do nothing useful with the car, and might even mess it up!

                What if I call reverse on an integer? I’d love to find out that my mental model is wrong right after I type it (or, at worst, when I run the compiler) rather than when I run the code and get some inscrutable error in another part of my program because reverse silently did nothing with the value (or, even worse, treated the int as a bit string).

                The fact that some languages have string-reverse and vector-reverse is more a function of the language and less a function of the types. You can easily define a generic reverse function in many languages and still get static types (and, if the language supports it, type inference so you don’t need to write out the types). There are also languages that support things like extension methods so that you can add interfaces to existing types.
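
                For instance, here’s a minimal sketch in Go (assuming a Go 1.18+ toolchain; Reverse is just an illustrative name) of a single statically typed reverse that works for any element type:

                package main

                import "fmt"

                // Reverse returns a reversed copy of any slice; the element type is inferred at the call site.
                func Reverse[T any](s []T) []T {
                    out := make([]T, len(s))
                    for i, v := range s {
                        out[len(s)-1-i] = v
                    }
                    return out
                }

                func main() {
                    fmt.Println(Reverse([]int{1, 2, 3}))          // [3 2 1]
                    fmt.Println(Reverse([]string{"a", "b", "c"})) // [c b a]
                    // Reverse(42) would be rejected at compile time.
                }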

                1. 1

                  Sometimes I feel like this is, figuratively, an ergonomics issue. Some programmers feel more comfy with the kind of checks and balances you’re talking about and others (like me) hate them. I’m kinda glad that there are both kinds of languages.

                  We are veering into off-topic territory, because the thread is about sleep time units.

                  Let’s say I annotate 10 types. 2 of them find bugs, and then there are three or four bugs that weren’t related to types. Then I have done meaningless work 8 times, and put myself in a workflow or state of mind that makes those three or four bugs way harder to find, all for the sake of finding two bugs (the kind of bugs that are often obvious on first run anyway). Instead, if I don’t have types, I check everything at the REPL and make sure it gives sane outputs from inputs.

                  Like, say I wanna make the procedure

                  (define (frobnicate foo)
                    (+ foo (* 6 7)))
                  

                  If I meant to type (* 6 7) but accidentally type (* 6 9), type annotation wouldn’tve caught that. Only testing can.

                  But why wouldn’t you want that? The hair salon will do nothing useful with the car, and might even mess it up!

                  Maybe they can drive around in the car and give haircuts all over town.

                  A lot of my best decisions as a programmer have been me realizing that the same procedure is much more general than I first thought (and then giving it a new, more general name). I like to type as I think, refactor and mold into the perfect parameter signature. (“Oh! I don’t need to pass in the list here, if I only pass the last pair, this function could share code with foo!”)

                  What if I call reverse on an integer?

                  My language converts it to decimal and reverses the digits. Useful for calculating probabilities for Unknown Armies.

                  I’d love to find out that my mental model is wrong right after I type it (or, at worst, when I run the compiler) rather than when I run the code and get some inscrutable error in another part of my program because reverse silently did nothing with the value (or, even worse, treated the int as a bit string).

                  So this is why humanity hasn’t found the perfect language yet. Some people like different things. I’m not here to stop the type fest that’s been going on. Cool things might be invented from that camp down the line; it’s good that people are trying different things. If type inference could be made better so we don’t have to annotate…

                  1. 1

                    Let’s say I annotate 10 types. 2 of them find bugs, and then there are three or four bugs that weren’t related to types. Then I have done meaningless work 8 times, and put myself in a workflow or state of mind that makes those three or four bugs way harder to find, all for the sake of finding two bugs (the kind of bugs that are often obvious on first run anyway).

                    Can you expand on this more? How do types make it harder to find non-type related bugs? In my experience, by completely eliminating an entire class of bugs (that aren’t always obvious catch-on-the-first run bugs, especially if you have a really nice type system!) it gets easier, not harder, to identify logic errors.

                    1. 3

                      As an analogy, overly relying on spell checkers can make some people miss things that are still legit spellings but are the wrong words in that particular sentence, like effect/affect.

                      But, it’s worse than that since (and I’m complaining about type annotation, not type inference) you need to enter the type info anyway. It’s “bugfinding through redundancy”. Sort of the same philosophy as “write a test for every line of code” but more rigid and limited. Of course redundantly writing out what you want the function to accept and return is going to catch some bugs.

                      If you like this sort of type checking, you’re not alone. A lot of people love them, and ultimately there’s no need to argue. Time will tell if that style of programming does lead to overall fewer bugs, or at least does so for programmers of a certain taste, and if so, that’s fine by me. I’m not gonna take your ML away.

                      But as my “42 is odd” example shows, I’m not too happy with the whole “statically typed programs are Provably Correct” hype leading into the same hopeless dead end that Russell and Whitehead ran into a hundred years earlier.

                      Coming from C and Pascal, when I first discovered languages that didn’t have type annotations in the nineties (like awk, perl, and scheme) I felt as if I had found the holy grail of programming. No longer would I have to write out boring and obvious boilerplate. It was a breath of fresh air. Other people obviously feel differently and that’s fine.

                      For me, it seems that a lot (not all, but some) of the things a good type annotation system helps you with are things you don’t even need to do with dynamically typed languages. It also feels like with a type annotated language, there’s a catch-22 problem leading you to have to basically write the function before you write it (maybe with pseudocode or flowcharts) just so you can know what type signatures to use.

                      I felt that wow, a cons pair of car and cdr can express data in so many ways, I can just immediately write the actual logic of my code. Whereas when I worked as a Java dev (don’t worry, I’ve looked at modern typed languages too, like Haskell) we had to slog through writing types (classes and instances), UML diagrams, walls full of post-its, architecture, ConnectionKeepingManagerFrobnicator.new(), etc. With Scheme, all that red tape just fell away. No need for pseudocode since I could just send whatever I was thinking into the REPL.

                      The type community loves the expression “to reason about the code”. Well, to me it’s a heck of a lot easier to reason about the code when it’s a fifth the size. (Sexps help too since it’s easier for me to grok a code tree than a linear sequence of unparsed tokens of code data.)

                      Obviously, type fans have had similar epiphanies but in the other direction, falling in love with static just like I fell in love with dynamic. And that’s cool. Let the deserts bloom. Humanity can’t be betting all of its eggs on my approach. I see the type craze as an experiment. One that might even be right. So please, go ahead.

                      I’m just really, really grateful that it’s not me who has to slog through it. I can sit under the cork tree sniffing dynamically typed flowers.

                      1. 2

                        Uh, wait, why did I get baited into writing all that when I see now that I already answered it in what you snipped out:

                        Instead, if I don’t have types, I check everything at the REPL and make sure it gives sane outputs from inputs.

                        Like, say I wanna make the procedure

                        (define (frobnicate foo) (+ foo (* 6 7)))

                        If I meant to type (* 6 7) but accidentally type (* 6 9), type annotation wouldn’tve caught that. Only testing can.

                        1. 1

                          hm, that doesn’t answer my question at all, but your longer post did, so thanks.

                          I think the point about “boilerplate” is pretty tired and not even true any more with how good type inference is nowadays. Yes, Java involved/involves a lot of typing. No, it’s no longer the state of the art.

                          It’s true that in the case where you use the wrong term that has the same type as the correct term, the typechecker will not catch this. Not having types is also not going to catch this. I’m going to see the error at the same exact time with both approaches. Having a REPL is orthogonal to having types, so I also often check my Haskell functions at the REPL.

                          I see the type craze as an experiment.

                          Calling an entire field of active research a craze is a little upsetting.

                          1. 1

                            I am complaining about annotated type systems specifically, which I clarified nine times. Inference type systems are fine.

                            Not having types is also not going to catch this.

                            The idea is that checking at the REPL will find it.

                            I’m going to see the error at the same exact time with both approaches. Having a REPL is orthogonal to having types, so I also often check my Haskell functions at the REPL.

                            Me too. Which made me feel like the whole type thing was a little unnecessary since I needed to do just as much checking anyway.

                            (As noted elsewhere in this thread, I’ve changed my tune on types a little bit since I realized I do need an isa-taxonomy for primitives. I.o.w. to get rid of types, I’m gonna have to define types.)

                            Calling an entire field of active research a craze is a little upsetting.

                            It’s more the whole dependent type / provably correct thing that’s a li’l bit of a craze, not the entire field of type research as a whole. As I wrote in my essay, types have a lot of benefits including serious performance gains, and that’s awesome. It’s the whole “it fixes bugs!” that gets a li’l cargo culted and overblown sometimes. Not by you, who do understand the limits of type systems, but by hangers-on and newly converted acolytes of typing.

                    2. 1

                      Lots of the points from your arguments can be achieved by using generic types, and everything would work safely, giving the programmer quick feedback on whether the types work for the particular combination. No need to guess and check at runtime.

                      My language converts it to decimal and reverses the digits. Useful for calculating probabilities for Unknown Armies.

                      So what would be the output of 2.reverse() * 2?

                      1. 2

                        Four.

                        1. 1

                            I’m wondering if that would also be the case with "2".reverse() * 2. Because if the output would be 4, then I’d wonder what the output of "*".reverse() * 2 would be. I hope it wouldn’t be **.

                            No matter what the answers are, I’ve already dedicated a lot of time to decoding how some basic operations work. With types, I would have this information immediately, often without needing to dig through the docs.

                          1. 1

                            4 and ** respectively.

                            1. 2

                              All kidding aside, the idea isn’t to overload and convert but to have a consistent interface (for example, reversing, or walking, or concatenating strings and lists similarly) and code reuse. I’m not insisting on shoehorning sequence operations to work on non-sequence primitives. Which is why I already said I needed an isa taxonomy of primitive types.

            2. 1

              So sleep(seconds: 1) needs to be looked up in documentation whereas sleep-seconds(1) does not?

              1. 2

                If your language only supports the second, then use the second, which is perfectly clear. You would by no means be in a situation where lack of language features limits code clarity.

                Notice that while parameters have names in most languages, in many of them you can’t include the name in your code but rather need to pass arguments in order.

            3. 2

              this way makes the most sense to me, at least for sleep

              1. 1

                It used to be common sense that sleep uses seconds, until other languages stopped following that.

                1. 5

                  That’s not how common sense works!

              2. 22

                I really like how Go’s time.Duration type handles this, letting us do time.Sleep(5 * time.Second)

                1. 14

                  Unfortunately Duration is not a type, but an alias for an integer, so this mistake compiles:

                  time.Sleep(5);
                  
                  1. 10

                    Your point stands about the mistake, but just to clarify the terminology: Duration is a defined type, not an alias (alias is specific Go terminology which means it behaves exactly the same as that type). The reason this mistake compiles is because literals in Go are “untyped constants” and are automatically converted to the defined type. However, these will fail, because s and t take on the concrete type int when they’re defined:

                    var s int
                    s = 5
                    time.Sleep(s)
                    
                    t := 5
                    time.Sleep(t)
                    
                    1. 2

                      My understanding is that Duration*Duration is also allowed?
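
                      As far as I can tell it does compile, since both operands are the same defined integer type; a quick sketch (the result is physically meaningless, of course):

                      package main

                      import (
                          "fmt"
                          "time"
                      )

                      func main() {
                          // Duration * Duration type-checks because Duration is an integer type;
                          // the result is nanoseconds-squared reinterpreted as a Duration.
                          d := 2 * time.Second * 3 * time.Second
                          fmt.Println(d) // a huge, nonsensical Duration
                      }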

                  2. 8

                    The thing I dislike the most about Go’s Duration type is that you can’t multiply an int by a Duration:

                    To convert an integer number of units to a Duration, multiply:

                    seconds := 10
                    fmt.Print(time.Duration(seconds)*time.Second) // prints 10s
                    

                    In the example above, the intent is somewhat clear due to the seconds variable name, but if you just want to have something like this:

                    some_multiplier := ...
                    delay := some_multiplier * (1 * time.Second) // won't work
                    

                    You have to convert some_multiplier to time.Duration, which doesn’t make sense!
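
                    For completeness, a sketch of what the compiler does accept (someMultiplier here is just a stand-in for whatever runtime int you have):

                    package main

                    import (
                        "fmt"
                        "time"
                    )

                    func main() {
                        someMultiplier := 3 // an int whose value is only known at runtime
                        // The explicit conversion is what Go insists on, semantically odd as it feels:
                        delay := time.Duration(someMultiplier) * time.Second
                        fmt.Println(delay) // 3s
                    }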

                    1. 2

                      Can’t you just overload the * operator?

                      1. 3

                        Go doesn’t allow for operator overloading, which I’m kind of okay with. It tends to add complexity for (what I personally consider to be) little benefit.

                        1. 3

                          On the other hand, this is the kind of case that really makes the argument for operator overloading. Having to use a bunch of alternate specific-to-the-type function implementations to do common operations gets tiresome pretty quickly.

                          1. 2

                            So Go has different operators for adding floats and for adding integers? I have seen that in some languages, but it’s nevertheless quite unusual. OTOH, I can see that it reduces complexity.

                            1. 1

                              Go has built-in overloads for operators, but user code can’t make new ones.

                              It’s similar to how maps (especially pre-1.18) are generic, but user code is unable to make another type like map.

                          2. 2

                            Go doesn’t have operator overloading

                          3. 1

                            I agree it is annoying. Would a ‘fix’ be to alter/improve the type inference (assuming that some_multiplier is only used for this purpose in the function) so that it prefers time.Duration to int for the type inferred in the assignment?

                            I’m not sure it would be an incompatible change - I think it would just make some incorrect programs correct. Even if it was incompatible, maybe go2?

                            1. 1

                              While I do think Go could do more to work with underlying types instead of declared types (time.Duration is really just an alias for int64, as a time.Duration is just a count of nanoseconds), it does make sense to me to get types to align if I want to do arithmetic with them.

                              My perfect world would be if you have an arbitrary variable with the same underlying type as another type, you could have them interact without typecasting. So

                              var multiplier int64 = 10
                              delay := multiplier * time.Second
                              

                              would be valid Go. I get why this will probably never happen, but it would be nice.

                              1. 3

                                That’s how C-based languages have worked, and it’s a disaster. No one can keep track of the conversion rules. See https://go.dev/blog/constants for a better solution.

                              2. 1

                                If you define some_multiplier as a constant, it will be an untyped number, so it will work as a multiplier. Constants are deliberately exempted from the Go type system.
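
                                A small sketch of that difference (someMultiplier is just an illustrative name):

                                package main

                                import (
                                    "fmt"
                                    "time"
                                )

                                const someMultiplier = 10 // untyped constant

                                func main() {
                                    // The untyped constant takes on type time.Duration here, so no conversion is needed.
                                    delay := someMultiplier * time.Second
                                    fmt.Println(delay) // 10s

                                    // With `var m int = 10`, m * time.Second would still fail to compile.
                                }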

                            2. 9

                              Putting units in the variable name is a special case of Apps Hungarian, which is a technique for emulating a real type system in C.

                              Joel: https://www.joelonsoftware.com/2005/05/11/making-wrong-code-look-wrong/

                              1. 14

                                Instead please make units into types and skip the names.

                                1. 29

                                  This was one of the things the article suggested.

                                2. 8

                                  I usually do something like this

                                  sleep 60 * 3 # 3 minutes

                                  1. 6

                                        In python, I tend to put some dummy constants at the top of the module for readability. E.g.,

                                    KB = 1024

                                    MB = 1024 * KB

                                    buffer = 128 * MB

                                    1. 5

                                      As noted by the author, this also applies to money. That way, you don’t end up adding € and £, or worse, dollars and seconds.

                                      1. 4

                                            Erlang gives you well-named functions in the timer module which return milliseconds, which I prefer over adding units to names unless absolutely necessary.

                                        I usually do this in Elixir for managing Ecto timeouts, etc:

                                        Repo.all(from x in Table, timeout: :timer.seconds(60))
                                        

                                        Because all these functions return the number of milliseconds as an integer, you can do basic math with these too unlike in Go:

                                        three_and_a_half_hours = :timer.minutes(30) + :timer.hours(3)
                                        
                                        1. 2

                                          Apple use seconds as a Double instead of milliseconds as an Int, so you can easily specify milliseconds as necessary while having slightly more friendly seconds for larger values.

                                          1. 1

                                                And lose the linearity of the scale precision and buy into a whole class (pun intended) of billion-dollar problems from using floating point.

                                            1. 1

                                              A Double has more precision than a millisecond while also stretching to centuries. :)

                                              1. 1

                                                An IEEE double cannot represent “1/1000”.
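
                                                    For what it’s worth, a tiny sketch (Go here, but any IEEE-754 double behaves the same) of the nearest representable value:

                                                    package main

                                                    import "fmt"

                                                    func main() {
                                                        // The closest float64 to 1/1000 is slightly above it:
                                                        fmt.Printf("%.20f\n", 0.001) // 0.00100000000000000002
                                                    }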

                                                1. 1

                                                  Can the timers in your system do that?

                                                  1. 1

                                                    Yes. They use integers. Don’t yours?

                                                    1. 1

                                                          I was thinking of the fact that the underlying quartz isn’t tuned to milliseconds, but rather to some unit which made sense to the hardware manufacturer, so there will be aliasing.

                                          2. 1

                                            This is valid Go: (3 * time.Hour) + (30 * time.Minute)

                                          3. 4

                                            I wish “option 2” was the norm. The algebraic data types (also known as custom types or tagged unions in other languages) implied can, when implemented well, make it possible to build APIs that support the user’s choice of unit or make the unit explicit (if runtime conversions are undesirable). Ian Mackenzie’s work in elm-units and elm-geometry are excellent examples. Many such units are baked into F#. Rust’s refinement type, while not the same as an algebraic data type so far as I can tell, still solves the same problem with similar syntax.

                                            Unfortunately, these types are either not implemented at all in popular languages (as in JavaScript) or are not implemented in a way that can be used for units (C#), or are implemented but not properly opaque (as in TypeScript) or are implemented but cumbersome (as in Python.) Naming as described in “option 1” is the only way I’m aware of to disambiguate in such cases.

                                            1. 3

                                              C++’s std::chrono module has a bunch of these types for various units of time. So you can use types like milliseconds as parameters, and you can use literal-suffixes like 250ms as arguments.

                                            2. 4

                                              This ties in well with the No More Ghosts post.

                                              Your values need to carry context with them if possible. Don’t assume the context that makes sense in your head will make sense to future you, or your users.

                                              1. 6

                                                Put it in the types:

                                                The Haskell library dimensional goes a long way to solving this problem once and for all. https://hackage.haskell.org/package/dimensional

                                                If this sort of thing were in widespread use in standard libraries, it would be wonderful.

                                                1. 4

                                                  I like using type systems to keep programmers on the right path, but we have to be careful not to over-do it either. The job of a programmer ought to be to efficiently transform data; with very invasive types, the job often changes to pleasing the type checker instead. A programmer should create a solution by thinking about what the data looks like, what its characteristics are, and how a computer can efficiently transform it. If a library provides extremely rigid types, the programmers start to think in terms of “what do the types allow me to do?”; if the library tries to address this rigidity by using more advanced features of the type system to make usage more flexible, the programmer’s job is now to deal with increased accidental complexity.

                                                  Looking at the doc for dimensional, I find that the cost-benefit is not one that I would make. The complexity is too high and that’s going to be way more of a problem in the short and long run.

                                                  1. 3

                                                    The job of a programmer ought to be to efficiently transform data; with very invasive types, the job often changes to pleasing the type checker instead.

                                                        I’ve always seen “pleasing/fighting the type checker” as a signal that my problem domain is necessarily complex. If I need to implement a stoichiometry calculator, for example, I’d much rather use dimensional or a similar library that allows me to say “we start with grams of propane, we end with grams of water” and let the type checker say that my intermediate equation actually makes sense. The alternative I could see is… what, extensive unit tests and review? Those rarely worked for me in chemistry class ;)

                                                    1. 1

                                                      Another upside of types is that they nudge you toward rendering them correctly as user-facing (or debugger-facing) strings.

                                                    2. 1

                                                          At work we have a custom units library that wraps the units library. We also provide a bunch of instances for the numhask hierarchy. This pair has felt like a sweet spot. We mostly work with geometric quantities (PlaneAngle and Length), and have had great success lifting this up to vector spaces (e.g., Point V2 Length represents a point, V2 Length is a vector, (+.) lets us combine points and vectors, etc).

                                                  2. 3

                                                    I especially like the suggestion of requiring a timedelta option. Makes a lot more sense to me than the alternatives, really.

                                                    1. 2

                                                      For the example, seconds is assumed, because metric.

                                                      Those functions not using seconds are indeed poorly named.

                                                      1. 4

                                                            That is an arguably weak reason. Computers are real-world machines with finite limits. Programming languages have built-in functionality designed to maximize utility. The metric system exists for scientific correctness, not out of a sense of practicality in everyday life. What unit would one typically pass to sleep() most of the time? Difficult to answer: some will argue milliseconds or nanoseconds, some will say seconds. Regardless of which, the APIs will make a choice that favors reasonable use of integers. Picking milliseconds or nanoseconds to avoid dealing with floats, or requiring an extra method in the API, is a stronger motivation IMHO.

                                                        1. 8

                                                              I hate to be that guy but due to reasons entirely outside my control (questionable choices as a teenager) the engineering discipline and science of measurement is basically what my BSc is in, so, erm,

                                                          the metric system, as it began to be established in the late 19th century, starting with the General Conference on Weights and Measures, was literally about nothing but practicality, in every way there is. It has nothing to do with scientific correctness – all systems, metric and non-metric alike, are “scientifically” correct, as in, their results agree with each other. It really was primarily about the fact that having a diversity of measurement units was really not practical. Various science-y bits got mixed into it on account of relativity but even that is primarily about practical constraints (for reasons that I really can’t get into without making this post even way longer and more boring than it already is :-( ).

                                                              Leafing through old engineering books, from just about any field, is instructive in this regard. There are whole chapters devoted to explaining different conventions (CGS/MKS), defining constants, and unit conversion. In fact, many European electrical engineering textbooks all the way up to the 1960s (and even later in some parts) used to open up with extremely boring and extraordinarily abstract discussions about what a physical quantity is, primary and derived units, laws vs. theorems, and a whole bunch of crap that flew out the window once everyone just settled on meters, kilograms, and seconds.

                                                          The whole problem this article discusses stems from the fact that most programming languages simply lack both the capability to annotate a quantity with the unit it multiplies (i.e. you can’t say sleep(100 ms) – not in the languages discussed in the article, in any case) and the ability to repeatedly annotate a numeric value with the physical quantity it represents (i.e. you can’t say sleep(100<Time>) and declare somewhere that Time is measured in seconds or whatever, which is actually a surprisingly common convention among engineering disciplines, but it’s just not obvious).

                                                          IMHO, to make this post at least a little constructive:

                                                              1. Practical experience from other branches of engineering has conclusively shown that you have to annotate something everywhere, otherwise mistakes are made. Annotating numerical values is somewhat costly compiler-wise but is the only thing that actually, reliably, works. Annotating methods is a lot cheaper and easier to retrofit on existing systems, but that won’t prevent you from reading seconds from a config file and passing them to a sleep routine that expects milliseconds. Typifying values, but not carrying the type information everywhere, is equally nightmarish: readers remain equally unenlightened as to how long sleep(400) will sleep for, whether the type system knows 400 is in seconds or eons, and it opens an entire can of practical worms when dealing with, say, filtering all files older than 30 days based on their creation timestamps.

                                                          2. Seconds may look like a bad choice because of floats but, for what it’s worth, that’s true of any SI unit you’d pick, because computers are discrete systems, whereas SI units are continuous (barring stoner philosophy questions about quantum mechanics). Time intervals are in fact the most favourable case here because, as with all discrete systems, you actually do have a unit you can safely treat as implied: samples. It’s failsafe, and uncertainty-free to measure everything in samples, which for application time intervals would be jiffies, I guess. Unfortunately, that’s really impractical to measure anything with. But if you don’t want to expose jiffies (or CPU cycles?), it’s not really relevant whether you incur some uncertainty because of float conversion, because of a lack of real-time guarantees, or because of insufficient system timer resolution. It would probably be interesting to work out the numbers but eyeballing it, I think sleep(0.003) and sleep_ms(3) will both be about as imprecise on just about any desktop system out there.

                                                          1. 1

                                                                I think you will agree with me that, as science gets more advanced, its mathematical tools and foundations, the metric system in this case, need to follow accordingly in terms of mathematical correctness, in order to be useful in more advanced models.

                                                                This happened last year. Precisely, the metric system was redefined to be built on known universal constants, rather than things like a chunk of metal stored somewhere, which was the best we could get 120 years ago or so to fit our practical needs, as you well describe.

                                                                But you can find simpler examples in everyday life: at what time do you go to work? I am sure you will not include seconds in your reply, as such precision is silly in that context.

                                                            it’s not really relevant whether you incur some uncertainty because of float conversion, because of a lack of real-time guarantees, or because of insufficient system timer resolution. It would probably be interesting to work out the numbers but eyeballing it, I think sleep(0.003) and sleep_ms(3) will both be about as imprecise on just about any desktop system out there.

                                                                It is very relevant. I find several bugs per year due to poor use of floats. I long ago gave up taking up the discussion at work because people will defend it without understanding the implications of floats… until things break. Of course those two are not equally imprecise. One is an exact value. The other one isn’t even representable as a float. Your computer will store a value as close to that as it can find, but not that value. It’s not about your program sleeping for exactly 3 millisecs. I agree that in a non-real-time operating system, all such bets are off. It’s about basic arithmetic exactness. Each time you make an operation on a float, an approximation is introduced, so you lose the ability to do equality checks. Another problem is resolution. People look at decimal places and think they have plenty of precision for all cases. But if you were to ask most programmers what the resolution is at the center of the scale and on its boundaries, very few would have a clear idea. It is obviously not as intuitive as integer types.

                                                            The whole problem this article discusses stems from the fact that most programming languages simply lack both the capability to annotate a quantity with the unit it multiplies (i.e. you can’t say sleep(100 ms)

                                                            How is sleep_seconds() not a clear solution? You even used it in your example.

                                                            1. 2

                                                                  Of course those two are not equally imprecise. One is an exact value. The other one isn’t even representable as a float. Your computer will store a value as close to that as it can find, but not that value. It’s not about your program sleeping for exactly 3 millisecs. I agree that in a non-real-time operating system, all such bets are off. It’s about basic arithmetic exactness. Each time you make an operation on a float, an approximation is introduced, so you lose the ability to do equality checks.

                                                              The fact that one is an exact value and the other isn’t doesn’t have any impact on the actual phenomenon you’re considering here (sleep time). The computer will not sleep for exactly 3 milliseconds if you do sleep_ms(3), even though 3 represents, with infinite precision, the duration that you want. Worse, if you do this:

                                                              int a = 3;
                                                              int b = 3;
                                                              sleep_ms(a);
                                                              sleep_ms(b);
                                                              

                                                              the two sleep durations won’t be equal, even though a and b are equal, making equality checks arithmetically exact, but physically inaccurate.

                                                              That’s what I mean by “equally imprecise”. Arithmetically, 3 (quantified in ms) is obviously more precise than 0.003 (float, quantified in s) but I doubt the former results in sleep times that are reliably closer to 3 ms than the latter.

                                                              How is sleep_seconds() not a clear solution? You even used it in your example.

                                                                  It’s a clear solution to one side of the problem – annotating what that value means. However, it’s not transitive: it’s still possible to read a value in milliseconds and pass it to a function that expects seconds, and the system won’t bat an eye.

                                                                  The impact of labelling methods, rather than quantities, in the absence of an effective type system, is IMHO severely overestimated. Labelling functions with units has been one of the standard panaceas in embedded systems ever since the Ariane 5 kerfuffle in the nineties, if not before, and in my experience it has a negligible impact on bug count. It’s very rarely the case that a value is defined or read in the same place where it’s used. Most of the time, the usage of a symbol is physically far removed from its definition, and covers several implementation layers. E.g. you rarely say sleep_ms(50); it’s usually sleep_ms(config.request_timeout_ms) where config.request_timeout_ms is read from a config file, which is generated from a settings dialog from a text input (that’s sometimes placed next to a drop-down box that says “microseconds”, “milliseconds” and so on), hopefully based on a spec, and so on. Unless everything in that chain has a ms in its name that’s observed everywhere, and conversions are reliably performed everywhere, it’s gonna break. Anything not enforced by the compiler breaks eventually, and the longer this chain, the more likely something will break.

                                                                  I think you will agree with me that, as science gets more advanced, its mathematical tools and foundations, the metric system in this case, need to follow accordingly in terms of mathematical correctness, in order to be useful in more advanced models. This happened last year. Precisely, the metric system was redefined to be built on known universal constants, rather than things like a chunk of metal stored somewhere, which was the best we could get 120 years ago or so to fit our practical needs, as you well describe.

                                                              I’m sorry, I don’t know how to be nicer about this, but the first part is half true, at best, and the second part is pretty much completely wrong.

                                                              First of all, “the metric system” was not recently redefined in terms of universal constants (I guess you’re referring to the redefinitions in 2019? I’m not aware of more recent binding changes but the pandemic was crazy…) Except for the kilogram, and a handful of its derived (but practically critical) units, including the ampere and the mol, they’ve all been de-materialized and expressed in terms of universal constants for 50 years or more. Changing the definition of the kilogram meant some other units had to be re-defined (some for inherent reasons, some just for practical reasons) and while it was the biggest shift since the 1970s, it was certainly not a super radical one.

                                                                  Journalists made a far bigger deal of it than it really was (and, I mean, it was a big deal, but it was really just a small part of the metric system that got revised, although it was long overdue) because the term “fundamental constant” has a nice sound to it. But this involved a great deal of bureaucracy and far less scientific re-adjustment than it sounds, as it literally involved making some constants fundamental “by decree” (IIRC the elementary charge, the Boltzmann constant, and the Avogadro constant had their status changed from experimentally determined to defining units) and freezing their values, effectively decoupling them from their underlying physical meaning.

                                                              It also didn’t make the foundation of the international system of units any more solid. The value of the “standard kilogram” is still going to change in the future. It’s now defined in terms of Planck’s constant, but all that means is that, in the future, mass metrology experiments will not refine the value of Planck’s constant, but the value of the kilogram.

                                                              Second, it’s hard to gauge intent, of course, but a big driving force, if not the primary driving force behind these efforts to de-materialize unit standards, was to overcome the practical problems of maintaining physical measurement standards. The fact that physical standards were unsuitable for long-term use has been well understood ever since the first meetings of the CGPM in the 1870s. Switching to non-material standards based on physical constants has literally been a century-long effort, the first major attempts were in the 1920s.

                                                                  The value proposition of the new definition for the kilogram is primarily of a practical nature – it didn’t change anything in the underlying mathematical models. The big problem of the International Prototype of the Kilogram (IPK) wasn’t its mass drift nor the fact that you couldn’t be sure if the hunk of metal was really an accurate enough description of a kilogram. The big problem was that getting from the 1 kilogram prototype to the 1 mg (and less!) standard references needed in the pharmaceutical industry, material science and many others, required a long sequence of comparisons and thus had enormous uncertainty. It artificially limited the resolution of mass metrology determinations, not because we couldn’t physically measure mass increments lower than a few micrograms (we could!) but because it was impossible to get two devices to agree on how big one of those increments was in a way that was legally traceable to that stupid hunk of junk in Paris.

                                                                  Efforts to redefine it did not explicitly avoid using a man-made object – in fact, for many years, the best-looking candidate still used one, and some of the alternatives that were proposed effectively amounted to reliably fabricating an IPK on demand. But they did seek to tie it to values that could be fixed by convention (so, fundamental constants) so that you could effectively discount mass drift when ascertaining uncertainty and, thus, easily re-adjust existing datasets, and more importantly, they sought to enable you to gauge 1 mg against a “standard milligram” with about as much certainty as you could gauge 1 kg against the standard kilogram. Prior to 2019, that was impossible. You could get 1 kg references certain to fractions of a millionth, but getting 1 mg references with uncertainty values below 0.1% was really expensive, and it got well into “organs on black market” territory past a few parts in a thousand.

                                                              The math otherwise works out just as well regardless of how you measure things – I mean, classical mechanics predates even the first international material definitions of most units by more than a century, and classical electromagnetism has been developed pretty much along with the measurement system, and neither had to be revised because of how their units were defined. Barring some statistical models that explicitly take into account the uncertainty in measuring some fundamental constants, which aren’t really used outside the field of metrology, I don’t think any model had to be substantially revised because of a change in a unit’s definition. Historical timelines sort of come close but even there it’s calibration curves that keep changing, not the physical definition of units.

                                                                  Which units are defined in terms of other units, and which constants are fundamental and which aren’t, is not just purely conventional, but also driven by practical constraints at least as much as scientific purity (in fact, for reasons that are a whole other can of worms, the number of fundamental and derived units is actually fixed: if you want to make a derived unit fundamental, you either have to make an existing fundamental unit derived, or you have to come up with new physics). For example, the current definition of the metre effectively makes it defined in terms of seconds. Defining length in terms of time raised a few eyebrows but it was accepted by the metrology community because time intervals are one of the things we can measure with extraordinary precision.

                                                                  That’s actually why we settled for the current definition of the kilogram, too. Several potential definitions were proposed after the CIPM finally recommended de-materializing the kilogram back in 2005. This one won primarily because the Kibble balance, the device used to measure Planck’s constant, was the only one that could be practically realised with good results. The current definition of the kilogram was accepted with some controversy because – from a scientific and mathematical point of view – it really sucks: it’s literally defined by fixing Planck’s constant to a conventional value, which also got some people mad because, tl;dr, you can’t just decide by fiat how much energy a photon has, and it also means that you end up defining a quantity of macroscopic systems in terms of qualities of systems that exhibit quantum behaviour. But all other definitions relied on devices that were either impractical (watt balances) or required technology with no better guarantees than the IPK (the Avogadro project). This one made the physicists unhappy and it’s a really complicated device that requires super-accurate gravity determinations. But it ticked the “easy to gauge small quantities” box and the “we can make it in a lab” box, so the metrology nerds were happy with it.

                                                      2. 2

                                                        D language: Thread.sleep(100.seconds);

                                                        1. 2

                                                          Thread.sleep( var);

                                                          1. 2

                                                            Sure, but units attach to values (via types), not to arguments. For instance, we don’t printf("Hello World": char*) either.

                                                            Ie. you gather the unit at the point where you convert from a number to var: Duration var = 10.seconds;

                                                            C# sort of does this though: Print("Hello World", out buffer);

                                                        2. 2

                                                          Oh man, yes please.

                                                            1. 1

                                                              Tangentially related: include time zones when displaying dates. (storing timestamps with time zone goes without saying)

                                                              1. 2

                                                                This is why I like epoch time, when it makes sense. No timezones, and easy to serialize :-)

                                                              2. 1

                                                                I don’t really like Dart, but it does some things right:

                                                                sleep(const Duration(seconds: 1));