1. 20

  2. 2

    I remember reading about this back when the Kickstarter campaign was going on, and thinking it was an impressive effort. I’m all for leaner, more specialized interpreters like this! The original Kickstarter page had some highlights:

    Micro Python has the following features:

    • Full implementation of the Python 3 grammar (but not yet all of Python’s standard libraries).
    • Implements a lexer, parser, compiler, virtual machine and runtime.
    • Can execute files, and also has a command line interface (a read-evaluate-print-loop, or REPL).
    • Python code is compiled to a compressed byte code that runs on the built-in virtual machine.
    • Memory usage is minimised by storing objects in efficient ways. Integers that fit in 31-bits do not allocate an object on the heap, and so require memory only on the stack.
    • Using Python decorators, functions can be optionally compiled to native machine code, which takes more memory but runs around 2 times faster than byte code. Such functions still implement the complete Python language.
    • A function can also be optionally compiled to use native machine integers as numbers, instead of Python objects.
    • Such code runs at close to the speed of an equivalent C function, and can still be called from Python, and can still call Python. These functions can be used to perform time-critical procedures, such as interrupts.
    • An implementation of inline assembler allows complete access to the underlying machine. Inline assembler functions can be called from Python as though they were a normal function.
    • Memory is managed using a simple and fast mark-sweep garbage collector. It takes less than 4ms to perform a full collection. A lot of functions can be written to use no heap memory at all and therefore require no garbage collection.
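    The optional compilation described above is exposed through decorators. As a minimal sketch: `@micropython.native` is a real MicroPython decorator, while the try/except fallback is my own assumption so the same file also runs under CPython, where the decorator becomes a no-op.

    ```python
    # Sketch of MicroPython's optional native compilation.
    # Under MicroPython, @micropython.native compiles the function to
    # native machine code; under CPython the fallback below makes the
    # decorator a plain pass-through so the file still runs.
    try:
        import micropython
        native = micropython.native
    except ImportError:
        def native(f):
            # CPython fallback: ordinary bytecode-interpreted function
            return f

    @native
    def dot(a, b):
        # Simple numeric loop, the kind of code that benefits most
        # from native compilation.
        s = 0
        for x, y in zip(a, b):
            s += x * y
        return s

    print(dot([1, 2, 3], [4, 5, 6]))  # 32
    ```

    The function stays callable from normal Python either way; only the execution strategy changes.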
    1. 2

      For anyone wondering whether this might be competitive with the CPython interpreter in terms of speed: not yet, judging by the numbers below. I haven’t looked into any MicroPython optimizations, though.

      (py3.env) volcano:unix anima$ /usr/bin/time -l Python Ray.py --run 3 128
              1.54 real         1.53 user         0.01 sys
         6758400  maximum resident set size
               0  average shared memory size
               0  average unshared data size
               0  average unshared stack size
            2049  page reclaims
               0  page faults
               0  swaps
               0  block input operations
              20  block output operations
               0  messages sent
               0  messages received
               0  signals received
               0  voluntary context switches
              11  involuntary context switches
      (py3.env) volcano:unix anima$ /usr/bin/time -l ./micropython Ray.py --run 3 128
             19.09 real        19.03 user         0.04 sys
         1032192  maximum resident set size
               0  average shared memory size
               0  average unshared data size
               0  average unshared stack size
             270  page reclaims
               0  page faults
               0  swaps
               0  block input operations
               4  block output operations
               0  messages sent
               0  messages received
               0  signals received
               0  voluntary context switches
              42  involuntary context switches
      

      It would be interesting to use the shedskin examples as a performance baseline against Python3.
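      A portable best-of-N harness makes that kind of baseline comparison easy to repeat. This is my own sketch, not shedskin’s, and it assumes `time.time()` is available in both interpreters, as it is on MicroPython’s Unix port (with coarser resolution there):

      ```python
      import time

      def fib(n):
          # Naive recursion as a small CPU-bound micro-benchmark.
          return n if n < 2 else fib(n - 1) + fib(n - 2)

      def bench(fn, *args, repeat=3):
          # Run fn(*args) `repeat` times and return the best wall-clock
          # time; the minimum is the least noisy estimate.
          best = None
          for _ in range(repeat):
              start = time.time()
              fn(*args)
              elapsed = time.time() - start
              if best is None or elapsed < best:
                  best = elapsed
          return best

      if __name__ == "__main__":
          print("fib(20) best of 3: %.4fs" % bench(fib, 20))
      ```

      Running the identical file under CPython and micropython gives a crude per-workload ratio without relying on `/usr/bin/time`.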

      1. 2

        This makes sense, as they are optimizing for a small memory footprint rather than raw speed. It would also be interesting to see how MicroPython running bare metal on an ARM chip compares to CPython running on top of Linux on the same hardware.

        1. 1

          Totally agree. I have eLua running on an STM32F4DISCOVERY (there are newer, more featureful versions of that board); when I have time next week I will play around with getting MicroPython running on it. I should also run micropython against historic versions of the Ray.py raytracer, since I could be triggering some deoptimization. Again, the shedskin examples would probably highlight the parts of MicroPython that could use performance improvements better than a single benchmark would.