
Good news everyone!

A quick update from the JIT front. As of yesterday, we're able to translate a highly experimental Python interpreter that contains a JIT. It still crashes almost immediately, mostly because of unsupported operations in the assembler backend, but for a carefully crafted program we already get massive speedups. For something as complex as:

  i = 0
  while i < 10000000:
      i = i + 1

our JIT is about 20x faster than CPython. That's still about 3x slower than Psyco, but looking at the generated assembler it's obvious that we can speed it up a lot. This is very good news, since we don't encode Python semantics in the JIT at all: the JIT is generated automatically from the Python interpreter's source code. This means we should be able to extend it to handle more complex Python programs relatively quickly (interested assembler experts needed!).
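For anyone curious how such a comparison might be timed, here is a minimal sketch, not the harness we actually used; the function name count_up is made up for illustration. Run the same file under CPython and under the translated interpreter and compare the printed times.

  # Minimal, hypothetical timing sketch: run under different interpreters
  # and compare the printed wall-clock times for the same counting loop.
  import time

  def count_up(n):
      i = 0
      while i < n:
          i = i + 1
      return i

  start = time.time()
  count_up(10000000)
  print("elapsed: %.3f seconds" % (time.time() - start))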

This is actually the fifth incarnation of the JIT over the last two years. It's far simpler and more promising than any of the previous approaches. Expect more details soon!

Cheers,
fijal

Comments

Anonymous wrote on 2009-03-10 16:14:

Very exciting news indeed.
Congratulations!

Zemantic dreams wrote on 2009-03-10 16:49:

This is exciting. The world is waiting.

(I am still sad that Psyco development was discontinued and never ported to 64bit)

nekto0n wrote on 2009-03-10 17:34:

Great news!
The activity on the blog shows that the project is full of enthusiasm.

Anonymous wrote on 2009-03-10 18:08:

wow, that's really great =)

Eric van Riet Paap wrote on 2009-03-10 18:33:

Congratulations! Very nice to read about these milestones.

I did not follow LLVM development, but does anyone know whether they have made arrangements by now that would enable the PyPy JIT generator to leverage their optimizers?

Harold Fowler wrote on 2009-03-11 12:16:

Wow, you are right, that is good news.


Anonymous wrote on 2009-03-11 16:57:

I'm wondering why something like this would be faster than CPython? I'm new to the whole Python scene, so I'm really just curious.

René Dudfield wrote on 2009-03-11 19:59:

nice one :)

In the meantime... I wrote an optimized version of that program for CPython:

i = 10000000

CPython is 10000000x faster than the pypy jit!!!!!!

Tim Wintle wrote on 2009-03-12 02:48:

Congratulations!

This is very exciting.

@Anonymous - it's because the standard Python interpreter doesn't use a JIT; interpreting a dynamic language without one is quite slow.
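A hedged illustration of that point: CPython's standard dis module shows the bytecodes the interpreter has to dispatch on every iteration of the loop from the post; a JIT compiles that dispatch overhead away into machine code for the hot loop. The helper name count_loop is only for illustration.

  # Illustration only: disassemble the counting loop to see the bytecodes
  # CPython dispatches on each iteration; a JIT removes this per-bytecode
  # interpretation overhead.
  import dis

  def count_loop(n):
      i = 0
      while i < n:
          i = i + 1
      return i

  dis.dis(count_loop)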

Anonymous wrote on 2009-04-07 15:42:

I'm waiting for a production solution!