I just installed a quad-core CPU and was curious how much time it could shave off configuring, building and testing the Perl core. This is particularly useful to me when I'm applying patches submitted to perl5-porters. After reviewing a patch, I want to make sure that it doesn't break anything before I commit it and push it to the master repository.
I already had a small utility program that I use for building perl from the git source, so I adapted it to let me select how many processes to use. It more or less does the following for any particular N:
$ git clean -dxf
(copy in previous config.sh and Policy.sh)
$ Configure -ders -Dusedevel ( ... plus other stuff ... )
$ TEST_JOBS=N make -j N test_harness
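The loop around those steps can be sketched roughly as follows. This is a minimal illustration, not my actual utility: the function name, the list of job counts, and the sleep placeholder standing in for the real clean/Configure/make steps are all assumptions made for the sake of a runnable example.

```shell
#!/bin/sh
# Sketch of a build-and-time loop (hypothetical; run_build is a stand-in
# for the real git clean / Configure / make test_harness sequence).
run_build() {
    jobs=$1
    # In the real script this step would be:
    #   git clean -dxf
    #   (copy in previous config.sh and Policy.sh)
    #   Configure -ders -Dusedevel ( ... plus other stuff ... )
    #   TEST_JOBS=$jobs make -j "$jobs" test_harness
    sleep 0.1    # placeholder so the sketch actually runs
}

for n in 1 2 3 4 5 6 8 10 12; do
    start=$(date +%s)
    run_build "$n"
    end=$(date +%s)
    echo "jobs=$n elapsed=$((end - start))s"
done
```

In practice I recorded the wall-clock time for each N and plotted the results.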
Then I used the time program to time the script for different numbers of processes. Here is the result:
Despite the usual advice that parallel builds should use processes equal to the number of CPUs (or cores) plus one, adding more processes still squeezes out a little more speed -- though only around 10 seconds towards the end. That suggests to me that there is still a lot of IO-bound work, which makes sense given the number of individual test files in the Perl core.
As a technical note, the tests all use a cached copy of the config.sh and Policy.sh files to speed up Configure. I also use ccache to speed up compilation. All of the test runs shown in the graph were done after doing a full configure/build/test cycle, so the cache was "pre-loaded".
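For reference, one common way to wire ccache into a Perl build is to route the compiler through it when running Configure. This is an illustration of that technique, not necessarily how my script sets it up; it assumes ccache and gcc are both on your PATH:

```shell
# Hypothetical invocation: have Configure compile everything through ccache
sh Configure -ders -Dusedevel -Dcc='ccache gcc'
```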