1. 12

Our results show that while polymorphism is used in all programs, the programs are to a great extent monomorphic. The polymorphism found is evenly distributed across libraries and program-specific code and occurs both during program start-up and normal execution. Most programs contain a few “megamorphic” call-sites where receiver types vary widely. The non-monomorphic parts of the programs can to some extent be typed with nominal or structural types, but none of the approaches can type entire programs.
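To make the terminology concrete, here is a toy Python sketch (my own illustration, not from the paper) of the difference between a monomorphic call-site, where the receiver always has one type, and a megamorphic one, where it varies:

```python
class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

def monomorphic_site(dogs):
    # The receiver of .speak() here is always a Dog: a single static
    # type suffices, which the paper reports is the common case.
    return [d.speak() for d in dogs]

def megamorphic_site(animals):
    # Here the receiver type varies per element, so this one call-site
    # sees many receiver classes at runtime.
    return [a.speak() for a in animals]

print(monomorphic_site([Dog(), Dog()]))         # ['woof', 'woof']
print(megamorphic_site([Dog(), Cat(), Dog()]))  # ['woof', 'meow', 'woof']
```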

  2. 4

    In the “threats to validity” section, the paper discusses program selection. As a Python developer, I don’t find the selected programs representative at all. Programs were selected from SourceForge by download count, which yields lots of end-user GUI programs written in Python. Wouldn’t it be better to use, say, PyPI download counts? As a result, the selected programs include none of: Django, Flask, SQLAlchemy, boto, etc.

    1. 3

      I’ve been thinking about this sort of thing in Ruby - the language makes a lot of tradeoffs to support dynamic, polymorphic everything, which turns out to be little used in practice.

      1. 6

        Three years ago I attempted to introduce such an optimisation in MRI, the canonical Ruby VM. I looked for monomorphic call sites and had them compiled to custom opcodes that invoke statically determined methods, hoping to save the expensive method lookups that dominated MRI’s profiling graphs.

        It all looked promising in theory, but in practice I didn’t achieve a lot, as documented in my MSc thesis. Some synthetic benchmarks showed promise, but more real-world scenarios (e.g. simple Sinatra or Rails apps under load) weren’t affected by my changes. MRI’s built-in inline method caches were already doing a very good job. Moreover, I’m happy to see that people are still working on making them more efficient.
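        For anyone unfamiliar with the caches mentioned above, here is a toy Python model of the idea behind a monomorphic inline cache. This is a sketch of the concept only, not how MRI actually implements it:

```python
class InlineCache:
    """Toy model of a monomorphic inline method cache at one call-site.

    The call-site remembers the last receiver class and the method it
    resolved to; as long as the class matches, the full method lookup
    is skipped. (Illustrative only, not MRI's implementation.)
    """
    def __init__(self, method_name):
        self.method_name = method_name
        self.cached_class = None
        self.cached_method = None
        self.slow_lookups = 0

    def call(self, receiver, *args):
        cls = type(receiver)
        if cls is not self.cached_class:  # cache miss: do the full lookup
            self.slow_lookups += 1
            self.cached_class = cls
            self.cached_method = getattr(cls, self.method_name)
        return self.cached_method(receiver, *args)

class Greeter:
    def greet(self):
        return "hi"

site = InlineCache("greet")
g = Greeter()
for _ in range(1000):
    site.call(g)
print(site.slow_lookups)  # 1: a monomorphic site pays the lookup cost once
```

        A megamorphic site invalidates the cache on nearly every call, which is exactly why the paper’s observation that such sites are rare matters for VM performance.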

        1. 1

          Great research! I would never have expected that practical result, that’s really interesting.

          1. 1

            I can’t seem to find a PDF of this. I’m super interested, as my day job is working on JIT technology, currently being proven on Ruby, and we’re always on the lookout for reading material.

          2. 2

            I don’t remember which story it was, but there was one posted here a month or so back about trying to optimize JavaScript by removing type checks. The observation the author made was that the type of something is almost always static for its entire life; only rarely is the dynamism actually used. This makes a lot of sense if you consider the human element of programming: if everything were dynamic all the time, it would be impossible for any programmer to keep track of what is going on.

            My opinion is that many people default to dynamic typing and move to static types over time, but we should probably have languages that make it easier to default to static types and provide a dynamic escape hatch. While I have not used it, I think C# offers this with its `dynamic` type.
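            Python’s optional typing is one way to approximate that direction today: annotate by default and use `typing.Any` as the dynamic escape hatch, a rough analogue of C#’s `dynamic`. A minimal sketch:

```python
from typing import Any

def double(n: int) -> int:
    # Statically typed by default: a checker like mypy verifies callers.
    return n * 2

def log_anything(value: Any) -> None:
    # `Any` is the dynamic escape hatch: the checker lets every
    # operation through, just as an untyped language would.
    print(f"{type(value).__name__}: {value!r}")

print(double(21))  # 42
log_anything("cat")
log_anything([1, 2, 3])
```

            The difference from C# is that Python’s checks are external and advisory, whereas `dynamic` defers real member lookup to runtime; the “escape hatch” ergonomics are similar, though.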

            1. 9

              This made me finally learn about two Haskell features that I’d heard about. Haskell has (at least) two different mechanisms for doing dynamic sorts of things. The first use case is when you have incorrect code in your program, but you’d like to compile it anyway. You know that you aren’t actually going to use that part of the code and so it shouldn’t matter. In that case, use the compiler flag -fdefer-type-errors. This gets you pretty much what you have in a dynamic language. Blatant type errors are only a problem if you actually try to run that code:

              aString :: String
              aString = 42
              
              main :: IO ()
              main = putStrLn "Hello world!"
              

              This will compile with just a warning. The second case is when you want explicit dynamic types. Here you use the Dynamic type from the standard library:

              import Data.Char (toUpper)
              import Data.Dynamic

              vals :: [Dynamic]
              vals = [ toDyn 1
                     , toDyn "cat"
                     , toDyn 3.14159
                     ]
              
              useString :: [Dynamic] -> String
              useString list =
                let str :: String
                    str = fromDyn (list !! 1) ""
                 in map toUpper str
              

              And this behaves like this:

              > useString vals
              "CAT"
              

              All-in-all, since the bulk of code is (monomorphically) typed, defaulting to static types, with dynamic escape hatches like the above sounds nice!

              1. 5

                It’s always interesting to me that Haskell has such a nice dynamic type which sees almost no use.

              2. 3

                I wonder if you’re thinking of Simple and Effective Type Check Removal through Lazy Basic Block Versioning – Maxime Chevalier-Boisvert, Marc Feeley, ECOOP 2015.

                Even if you weren’t, it’s a good read; an interesting technique for exploiting this observation in a novel JIT compilation strategy.

                Edit: updated the link; the old one didn’t have a PDF.

            2. 3

              Because of the high degree of monomorphism, most programs can be typed to a large extent using a very simple type system. Our findings show that the receiver in 97.4% of all call-sites in the average program can be described by a single static type using a conservative nominal type system with single inheritance. If we add parametric polymorphism to the type system, we increase the typeability to 97.9% of all call-sites for the average program.

              And yet it’s really helpful to have dynamic typing for that last ~2%. I read this paper as having strong support for gradual typing in dynamic languages.

              1. 4

                I think it’s the opposite, actually. Gradual typing implies that the default state is dynamic, with types added slowly. This tells me the default state should be typeful, with an out for doing dynamic things on the rare occasions one needs them.

                Note that this is about examining dynamic code and understanding the morphisms; it doesn’t mean the 2% is needed, just that this code happens to be polymorphic. Perhaps if you had a good type system you wouldn’t need that 2% at all.
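                For example, even a mild extension like union types can precisely type a call-site that looks “polymorphic” to a single-static-type analysis, with no dynamic escape hatch needed. A Python sketch (my own illustration, not from the paper):

```python
from typing import Union

def describe(value: Union[int, str]) -> str:
    # This call-site receives two receiver types, so a monomorphic
    # analysis can't give it one static type, but the union types it
    # exactly, and isinstance narrows each branch for the checker.
    if isinstance(value, int):
        return f"number {value}"
    return f"text {value!r}"

print(describe(7))      # number 7
print(describe("cat"))  # text 'cat'
```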

                1. 1

                  And yet it’s really helpful to have dynamic typing for that last ~2%

                  What sorts of problems are those? I haven’t really encountered those.

                  (I don’t consider “I don’t want to specify types” a problem :) )

                  1. 3

                    I encountered this in Haskell, not in Python, but… if you’re interacting with a platform API that’s the only way to do a certain thing, and that API wants you to use a void pointer as your state-passing mechanism for callbacks you give it… then you have a number of implementation strategies, but they all come down to wrapping your statically-typed values in a way that hides their type, and reconstructing it later.

                    Of course, idealists such as myself would love for platform APIs to stop doing that, but that isn’t easy. The alternative to context pointers alongside callbacks is closures. Languages differ a lot on how closures are implemented. The traditional strategy to get something like a closure that works with C-centric platform ABIs, and therefore with libraries written in other languages, is to construct a trampoline function at runtime, which unpacks the captured state and calls the real implementation. But that is intentionally not possible on platforms that are sufficiently locked-down to have the “write xor execute” policy on memory pages.
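                    The wrap-and-reconstruct dance can be sketched in Python, too; the registry and token below are hypothetical stand-ins for the opaque `void*` context a C API would carry alongside the callback:

```python
# Sketch of the "hide the type, reconstruct it later" dance that a
# void*-based callback API forces on you. A registry hands out opaque
# integer tokens (standing in for the void* context pointer) and
# recovers the original typed value on the other side of the callback.
_registry: dict = {}
_next_token = 0

def wrap(state) -> int:
    """Hide a typed value behind an opaque token."""
    global _next_token
    _next_token += 1
    _registry[_next_token] = state
    return _next_token

def unwrap(token: int):
    """Reconstruct the value; the caller must know its real type."""
    return _registry.pop(token)

def fake_platform_call(callback, context_token):
    # The "platform" only ever sees the opaque token, never the type.
    callback(context_token)

def on_event(token):
    state = unwrap(token)
    assert isinstance(state, list)  # reconstruct the static type
    state.append("event")

my_state = []
fake_platform_call(on_event, wrap(my_state))
print(my_state)  # ['event']
```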

                    1. 2

                      Totally forgot about this!

                      For interop, seems like all langs eventually need a way to do the void* dance. But, yeah, it’s super-icky!