
PyPy 1.4: Ouroboros in practice

We're pleased to announce the 1.4 release of PyPy. This is a major breakthrough in our long journey, as PyPy 1.4 is the first PyPy release that can translate itself faster than CPython. Starting today, we are using PyPy more for our everyday development. So may you :) You can download it here:

https://pypy.org/download.html

What is PyPy

PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython. It is fast (pypy 1.4 and cpython 2.6 comparison).

New Features

Among its new features, this release includes numerous performance improvements (which made fast self-hosting possible), a 64-bit JIT backend, as well as serious stabilization. As of now, we can consider the 32-bit and 64-bit Linux versions of PyPy stable enough to run in production.

Numerous speed achievements are described on our blog. Normalized speed charts comparing pypy 1.4 and pypy 1.3 as well as pypy 1.4 and cpython 2.6 are available on the benchmark website. For the impatient: yes, we got a lot faster!

More highlights

  • PyPy's built-in Just-in-Time compiler is fully transparent and automatically generated; it now also has very reasonable memory requirements. The total memory used by a very complex and long-running process (translating PyPy itself) is within 1.5x to at most 2x the memory needed by CPython, for a speed-up of 2x.
  • More compact instances. All instances are as compact as if they had __slots__. This can give programs a big gain in memory. (In the example of translation above, we already have carefully placed __slots__, so there is no extra win.)
  • Virtualenv support: PyPy is now fully compatible with virtualenv; note that to use it, you need a recent version of virtualenv (>= 1.5).
  • Faster (and JITted) regular expressions - a huge speed boost for the re module.
  • Other speed improvements, like JITted calls to functions like map().
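The instance-compaction point above is easy to see in plain Python. `Plain` and `Slotted` are made-up classes used only for illustration: on CPython only the `__slots__` version avoids the per-instance `__dict__`, whereas on PyPy 1.4 ordinary instances are stored just as compactly.

```python
import sys

class Plain:
    def __init__(self):
        self.x = 1
        self.y = 2

class Slotted:
    __slots__ = ("x", "y")
    def __init__(self):
        self.x = 1
        self.y = 2

p, s = Plain(), Slotted()

# On CPython, Plain instances carry a per-instance __dict__;
# Slotted instances do not, which is where the memory saving comes from.
assert hasattr(p, "__dict__")
assert not hasattr(s, "__dict__")

# The per-instance dict alone costs this many extra bytes on CPython:
print(sys.getsizeof(p.__dict__))
```

PyPy 1.4 gives every instance the compact layout automatically, so declaring `__slots__` purely for memory reasons is no longer necessary there.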
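The re speedup applies to ordinary compiled patterns with no code changes; a minimal timing sketch (the pattern and corpus here are invented for illustration) might look like:

```python
import re
import timeit

pattern = re.compile(r"[a-z]+\d+")  # compiled once, reused in the loop
text = "foo123 bar bar456 baz " * 1000

# Sanity check: only lowercase runs followed by digits match.
assert pattern.findall("abc42 xy") == ["abc42"]

# On PyPy 1.4 this inner loop benefits from the JITted regex engine.
elapsed = timeit.timeit(lambda: pattern.findall(text), number=100)
print("100 runs of findall:", elapsed, "seconds")
```

The same unmodified script runs on CPython, so it doubles as a quick way to compare the two interpreters on your own regex workloads.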

Cheers,
Carl Friedrich Bolz, Antonio Cuni, Maciej Fijalkowski, Amaury Forgeot d'Arc, Armin Rigo and the PyPy team

Comments

ipc wrote on 2010-11-26 18:42:

congratulations!

why wrote on 2010-11-26 18:47:

This is unacceptable. Christmas is not until next month!!!

Tim Parkin wrote on 2010-11-26 19:09:

Massive congratulations - exciting!

Unknown wrote on 2010-11-26 19:18:

Sweet! Keep up the great work !

Anonymous wrote on 2010-11-26 19:41:

Woohoo!!

Martijn Faassen wrote on 2010-11-26 20:07:

Awesome!

Anonymous wrote on 2010-11-26 20:59:

Hip hip hooooraaaay!!!!

ipc wrote on 2010-11-26 22:51:

all I want for Christmas is stackless support in a 64-bit pypy-c-jit :) 'two greenlets switching and a partridge in a pear tree!'

Unknown wrote on 2010-11-26 23:14:

Congratulations. I hope the PPA is going to be updated soon. Too lazy to build it myself, right now. (:

Paul Boddie wrote on 2010-11-26 23:29:

Is there a -j <number-of-cores> option for the translation process? It's a bit unfortunate that 15 cores on the nice machine I'm using can't be put to use making it translate faster. (Or unfortunate that I didn't read the documentation, maybe.)

ipc wrote on 2010-11-26 23:54:

--make-jobs=N, but only some parts of the translation process are parallel.

Anonymous wrote on 2010-11-27 00:10:

ETA until numpy/scipy?

Paul Boddie wrote on 2010-11-27 01:00:

The report of 2.4GB usage on x86-64 is accurate, but it took about 7800s on a 2.33GHz Xeon. Next time I'll try and exercise some of the other cores, though.

Anonymous wrote on 2010-11-27 04:54:

so pypy on average is now about 2x faster than cpython?

and unladen swallow's goal was being 5x faster? was that totally unrealistic?

Leonard Ritter wrote on 2010-11-27 10:59:

You are my heroes!

Symbol wrote on 2010-11-27 11:37:

Just Awesome!!!

KUTGW!

Daivd wrote on 2010-11-27 12:02:

Does this release include the -free branch that was mentioned in the previous post? The 2x memory requirements led me to believe so.

Maciej Fijalkowski wrote on 2010-11-27 13:45:

@Daivd
yes, it does

@Anonymous
5x improvement is not a well-defined goal, however it's a good marketing thing. PyPy is 2x faster on translation, 60x faster on some benchmarks while slower on others. What does it mean to be 5x faster?

Christian S. Perone wrote on 2010-11-27 14:23:

Sounds great, great work, great thanks !

scientist wrote on 2010-11-27 14:34:

Do you know why the purely numerical benchmarks nbody and spectral-norm are still so much slower in PyPy compared to e.g. LuaJIT?

tobami wrote on 2010-11-27 14:44:

This is awesome. PyPy 1.4 addresses the 2 slowest benchmarks, slowspitfire and spambayes. There is no benchmark anymore where PyPy is much slower than CPython.

To me, this marks the first time you can say that PyPy is ready for general "consumption". Congratulations!

PS: The best comparison to appreciate how much of an improvement 1.4 has been is:
https://speed.pypy.org/comparison/?exe=2%2B35,1%2B41,1%2B172&ben=1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20&env=1&hor=false&bas=2%2B35&chart=normal+bars

Maciej Fijalkowski wrote on 2010-11-27 17:37:

@scientist

Sure, because LuaJIT is crazy when it comes to optimizations :-) We'll get there eventually, but purely numerical stuff is not as high on our list as other things.

Luis wrote on 2010-11-27 18:37:

@maciej: in an old thread (have tracing compilers won?) you replied to Mike Pall saying that pypy was in a way middle ground, that it didn't offer as many opportunities for micro-optimizations as luajit.

You were discussing about keeping high level constructions from the user program to perform more tricks.

Has the situation changed?
Do you really think now that you'll get there?

Anyway, let me tell you that you are all already my super heroes :-)

Maciej Fijalkowski wrote on 2010-11-27 18:46:

Heh, I don't remember that :-)

Anyway, LuaJIT has more options for micro-optimizations simply because Lua is a simpler language. That doesn't actually make it impossible for PyPy, it simply makes it harder and more time-consuming (but it's still possible). I still think we can get (though predicting the future is hard) to where LuaJIT is right now, but racing Mike would be a challenge that we might lose ;-)

That said, even in simple loops there are obvious optimizations to be performed, so we're far from being done. We're going there, but it's taking time ;-)

Victor wrote on 2010-11-27 19:33:

Congrats to all PyPy developers for making huge contributions to Python performance, JIT and implementation research and delivering an end product that will help many developers to get more done.

IIUC, we still have ARM, jit-unroll-loops, more memory improvements, Python 2.7 (Fast Forward branch) and a bunch of other cool improvements in the works, besides some known interesting targets that will eventually be tackled (e.g. JITted stackless).

I wish more big Python apps and developers would play with PyPy and report the results.

Cheers!

P.S.: Fijal: see https://lambda-the-ultimate.org/node/3851#comment-57715

Michal M. wrote on 2010-11-29 18:55:

Congratulations.
However, you suggest people use it in a production environment - please give us a version compatible at least with CPython 2.6.
I hope you plan to, but that first you wanted a stable and fast base. :)

Amaury Forgeot d'Arc wrote on 2010-12-01 22:21:

@Michal:
There is already an ongoing effort to port PyPy to Python 2.7.

But we need some help! It's a good way to become a PyPy developer.
And no, you don't have to be a JIT expert to implement itertools.combinations or Asian codecs.

Anonymous wrote on 2011-02-09 00:18:

kudos to whip-smart guys for this wonderful piece of software.