1. 14

  2. 4

    As a numeric tower geek, I think one of the more interesting things Kawa did was add support for units to the numeric tower. It also supports quaternions which sounds freaking awesome, but I have no idea if it’s at all practically useful :)

    1. 1

      As a numeric tower geek …

      May I ask a question regarding this?

      I wonder if there is anything of interest left in the numeric tower for someone (me) who thinks that neither implicit conversions between different number types nor putting number types in an is-a relationship is a good thing?

      1. 1

        What do you mean by “not having number types”?

        1. 1

          I meant “number types in an is-a relationship”. Seems to be way too leaky to make any potential benefits worthwhile …

          1. 1

            Most dynamic languages will typically have some way to introspect the type or class of an object, so you can’t really avoid the “is a” relationship. Or are you referring to the hierarchy as such?

            I’m not sure your numerics will be part of a “tower” as such if you drop the hierarchy. I think the Scheme/LISP numeric tower is inspired by mathematics, which defines certain types of numbers as subtypes of other types. And then implicit coercion still makes sense: if you add 1 to 1/2, you get 3/2, and if you add 1/2 to 1/2, you get 1, which is a different type from both inputs. That’s not required per se, of course; you could strictly separate integers from rational numbers with a denominator of 1 and have explicit conversion routines, but that sounds pretty clumsy in usage to me.
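The coercion behavior described above can be sketched in Python (used here purely for illustration), whose `fractions` module provides exact rationals similar to Scheme's — with the caveat, noted in the comments, that Python keeps the `Fraction` type where Scheme would normalize back to an exact integer:

```python
from fractions import Fraction

# Mixing an integer with an exact rational coerces implicitly,
# as in the Scheme numeric tower: 1 + 1/2 = 3/2.
print(1 + Fraction(1, 2))    # 3/2

# 1/2 + 1/2 equals the integer 1. Scheme would normalize the result
# back to an exact integer; Python keeps the Fraction type, but the
# value compares equal to the integer 1.
half_sum = Fraction(1, 2) + Fraction(1, 2)
print(half_sum)              # 1
print(half_sum == 1)         # True
```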

            There’s some sense in rejecting implicit coercion between exact and inexact numeric types, though, because the floating-point representation is lossy while the other types aren’t. The author of S7 Scheme has some interesting ideas about exactness (search for “floats are approximations”). If I understand and remember it correctly, he argues that a calculation or an input that’s not exactly representable in IEEE floating point should be marked as inexact, while numeric input like “1.0” should still be regarded as exact. Maybe I’m mixing it up with another Scheme, though, as I can’t really find the description I thought I remembered.
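The “floats are approximations” point can be made concrete in Python (again just an illustration): converting a stored double back to an exact fraction reveals the approximation the hardware actually holds.

```python
from fractions import Fraction

# The literal 0.1 has no exact binary floating-point representation,
# so the stored double is only an approximation of 1/10:
print(Fraction(0.1) == Fraction(1, 10))   # False

# 0.5 is a power of two, so it *is* exactly representable:
print(Fraction(0.5) == Fraction(1, 2))    # True
```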

            1. 1

              I’m not seeing the hierarchy as useful, because the ideal notion of numbers in math does not carry over to computers.

              That’s why, in my opinion, it makes more sense to keep number types cleanly separated – pretty much every number type deviates from the mathematical ideal in its own, incompatible way, and converting them silently into each other feels incredibly dangerous.

              It’s not that e.g. converting ints to floats may be lossy – it’s that the behavior of overflow, division by zero, etc. changes completely.
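A quick Python sketch of that behavior change (using `ctypes` to emulate a fixed-width integer, since Python's own integers never overflow — an illustration, not any language mentioned in the thread):

```python
import ctypes
import math

# Fixed-width integer overflow wraps around (two's complement):
int_max = 2**31 - 1
print(ctypes.c_int32(int_max + 1).value)   # -2147483648

# IEEE-754 double overflow silently becomes infinity instead:
print(math.isinf(1e308 * 10))              # True
```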

              Based on my experience fixing existing languages and building new ones, implicit conversions between number types are something on my definitely-not list. Thankfully, most newer languages also seem to adopt this stance.

              1. 1

                It’s not that e.g. converting ints to floats may be lossy – it’s that the behavior of overflow, division by zero, etc. changes completely.

                That makes a lot of sense, but this only really holds for conversions between exact and inexact numbers (so floats and non-floats).

                However, performance-wise, even just having automatic conversion between integers and rational numbers (exact fractions) can be a huge pain; “generic” operators require a lot of type-dispatching code, so from that perspective I could also agree that these automatic conversions suck.
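A hypothetical sketch (in Python, not any particular Scheme's implementation) of the kind of type dispatch a “generic” arithmetic operator has to perform before doing any actual arithmetic:

```python
from fractions import Fraction

def generic_add(a, b):
    # Every call pays for these checks, even on the common int+int path.
    if isinstance(a, int) and isinstance(b, int):
        return a + b                          # exact integer arithmetic
    if isinstance(a, (int, Fraction)) and isinstance(b, (int, Fraction)):
        return Fraction(a) + Fraction(b)      # promote to exact rational
    return float(a) + float(b)                # fall back to inexact

print(generic_add(1, 2))                  # 3
print(generic_add(1, Fraction(1, 2)))     # 3/2
```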

                1. 1

                  conversions between exact and inexact numbers (so floats and non-floats).

                  Fixed-size integers are inexact, too. (Compare Int.MaxValue + 1 with BigInt(Int.MaxValue) + 1 – the results differ by 2³²!)
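The 2³² difference can be checked with a small Python sketch, using `ctypes.c_int32` to stand in for a fixed-size `Int` (Python's own integers are arbitrary-precision and play the role of `BigInt`):

```python
import ctypes

int_max = 2**31 - 1                           # Int.MaxValue
wrapped = ctypes.c_int32(int_max + 1).value   # fixed-size: wraps to -2**31
exact = int_max + 1                           # arbitrary precision: 2**31
print(exact - wrapped == 2**32)               # True: the results differ by 2^32
```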

                  there’s a lot of type dispatching code that “generic” operators require, so from that perspective I could also agree that these automatic conversions suck

                  Agreed!

                  1. 1

                    Fixed-size integers are inexact, too. (Compare Int.MaxValue + 1 with BigInt(Int.MaxValue) + 1 – the results differ by 2³²!)

                    That’s right, but in this case I believe that implicit conversion between int and bigint is a good thing. In fact, it was the entire reason I wanted to add full numeric support to CHICKEN; not having bignums was just too much of a pain. There are few downsides (the only one I can think of is performance) and a whole host of upsides.
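This is the behavior CPython users get for free: there is no user-visible fixnum/bignum boundary, so word-size overflow simply cannot happen. A sketch (with the assumption that `sys.maxsize` is a reasonable proxy for the machine word size):

```python
import sys

n = sys.maxsize        # roughly the largest machine-word-sized value
big = n + 1            # silently promoted past the word size; no wraparound
print(big > n)         # True
print(big - 1 == n)    # True: the arithmetic stays exact
```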

                    1. 1

                      But you could have bignums without implicitly converting things to it? :-)

                      1. 1

                        You could, and some languages even do it that way. I find that utterly disgusting and clumsy :)

      2. 1

        There’s no end of applications of quaternions! From computer graphics to rotating things to relativity to crystallography.
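As an illustration of the computer-graphics use (a minimal Python sketch, not tied to Kawa's quaternion support): a unit quaternion q rotates a vector v via the product q·v·q*.

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z) tuples.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# Unit quaternion for a 90-degree rotation about the z axis
# (half-angle convention: the quaternion encodes angle/2).
half = math.pi / 4
q = (math.cos(half), 0.0, 0.0, math.sin(half))
q_conj = (q[0], -q[1], -q[2], -q[3])

# Rotate (1, 0, 0): embed it as a pure quaternion and compute q * v * q*.
v = (0.0, 1.0, 0.0, 0.0)
_, x, y, z = qmul(qmul(q, v), q_conj)
print(round(x, 9), round(y, 9), round(z, 9))   # (1,0,0) rotates to ~(0,1,0)
```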

      3. 1

        For another interesting language that runs on the JVM – which I successfully used to write a tiny Android app for myself several years ago – see: https://mth.github.io/yeti/