Compiling effortlessly … sort of

It took me a while, but I finally ironed out my upgrade to kernel 2.6.34 this morning, on my Pentium machine. Ordinarily I don’t wait so long to make a jump, but things were going very well, and since there is rarely any good reason to shift up, I let it stagnate for a while.

But fear of obsolescence is a powerful thing, and realizing I had a kernel that dated back the better part of a year made me a little queasy. I know in the back of my mind that a 14-year-old machine has little to gain between a kernel written in September 2009 and one written a few weeks ago, but it seemed worth the effort.

Not that it was a huge effort, though. Usually I roll configuration between kernels with make oldconfig, but this time I started from a clean page, and pruned out all the unnecessary parts. It took me a little while to fine-tune the framebuffer and network parts, and the sound was the last thing I needed to fix. And now it’s done.
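For the record, the two routes look roughly like this. The location of the old config file is a guess, and the exact sequence will vary by distribution; take it as a sketch, not gospel.

```shell
# Route 1: carry the old configuration forward, answering only new questions
cp /boot/config-old .config     # hypothetical path to the previous kernel's config
make oldconfig

# Route 2: start from a clean page and prune
make allnoconfig                # begin with (nearly) everything switched off
make menuconfig                 # then enable only what this machine actually needs

# Either way, build and install as usual
make
make modules_install
make install
```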

The odd part of the entire experience, and the reason I mention it here, is that while the machine is slowly and faithfully compiling the modules I selected, there’s no slowdown or lag or performance hit. That’s strange to me, because on other hardware, for example my long-running Inspiron, compiling or building a kernel more or less precluded using the machine for anything but very trivial tasks.

But this Pentium barely notices. Memory use peaks around 22-29MB (alongside all the other software I normally run) and the CPU is pegged at 100 percent, of course, but I can still type at normal speed, switch windows in screen or between ttys at the console, manage remote systems with ssh, etc., etc., and not notice any stutter or lapse.

I wonder why that is?

Of course, this is all a moot point, because it still takes 20 minutes to compile a single sound module, and most of a day and night to build an entire kernel. Praising it for not lagging while it meanders through the chore of building new software is like praising a snail for traveling in a straight line for a day. You’re still frighteningly slow, and you’ve hardly covered any distance.

But it does mean that troubleshooting is a little easier, even if it takes longer. I can wait 20 minutes for a module to build, see if it works, and then go back to what I was doing without waiting or needing to switch machines.
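That piecemeal routine works because kbuild can rebuild a single directory, or even a single module, without touching the rest of the tree. Something along these lines, assuming you’re sitting in the kernel source directory (the module name below is only an example):

```shell
# Rebuild just the sound drivers rather than the whole tree
make sound/

# Or build one specific module (hypothetical example name)
make sound/pci/ac97/snd-ac97-codec.ko

# Drop the freshly built modules into place and try again
make modules_install
```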

And all that being said, if there are a large number of packages to update or if there is a particularly large program to build (such as gcc), I yank the drive and connect it over USB to the fast computer. That’s why I bought the fast one, and I’m not such a glutton for punishment that I have to build software for days and days at 120MHz. There are limits to my fanaticism. :twisted:


9 thoughts on “Compiling effortlessly … sort of”

  1. Reacocard

    There was a lot of work done on the CPU scheduler over that interval, 2.6.32 in particular. The increase in responsiveness is indeed quite noticeable, even on my overpowered 2.5GHz Core 2 Duo. Software that gets faster as it ages is one of my favorite things about open source. :)

    1. K.Mandla Post author

      I’m betting that’s the answer then. Before now I rarely did any compiling on this old machine because it seemed too taxing. Now, it seems to be capable of doing both without suffering so much. I am likewise thankful for that improvement. :D

  2. cthulhu

    I’m no wizard at networks and stuff, but it was my understanding that you could use distcc to distribute the compiling process across your other machines, thereby making it a lot faster.

    1. Luca

      You can, but it isn’t exactly effortless. You need to ensure the machines in your compile farm all have the same toolchain (and architecture) and libraries installed. Yanking the drive is definitely the easier of the two.
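      For what it’s worth, a minimal setup might look like this (the host addresses are made up, and you’d adjust the job count to match the farm):

```shell
# On each fast helper machine, run the distcc daemon and allow the LAN
distccd --daemon --allow 192.168.1.0/24

# On the slow machine, point distcc at the helpers...
export DISTCC_HOSTS='localhost 192.168.1.10 192.168.1.11'

# ...and build through it, with enough jobs to keep them all busy
make -j6 CC=distcc
```

      And if the helpers are a different architecture, you’d need cross-compilers on them as well, which is exactly where the effort creeps back in.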

      1. cthulhu

        This is what it says on the distcc web page:

        “distcc does not require all machines to share a filesystem, have synchronized clocks, or to have the same libraries or header files installed. They can even have different processors or operating systems, if cross-compilers are installed.”

        Not sure exactly what it means, but it sounds good.

  3. Michele Amato

    Hi! Can you share your dot-config file for kernel 2.6.34, please? Thanks in advance, and excuse me for my very bad English…

  4. JP Senior

    The completely fair scheduler is awesome; I do heavy virtualization labs at home on qemu and virtualbox. Often my CPU is pegged at 100% on all four cores. The new scheduler is incredible when it comes to responsiveness for ‘user’ components, even something as simple as my mouse. IBM has a great report on CFS.

    With regards to compiling, most makefiles I’ve come across in the wild don’t attempt to use your neat-o multicore or hyperthreaded systems at all. Try make -j3 to run 3 jobs on a 4 processor system – huge difference. I usually leave one “cpu” or “job” available so I don’t end up pinning each of my CPUs, allowing me to do normal work while I wait for the compile to finish.

    Here’s a comparison of compile times for pulseaudio 0.9.19.
    w/o -j

    sudo time make
    95.67user 42.58system 2:20.17elapsed 98%CPU (0avgtext+0avgdata 222960maxresident)k
    62192inputs+70096outputs (28major+13364466minor)pagefaults 0swaps

    w/ -j4
    sudo time make -j4
    96.60user 41.82system 0:38.79elapsed 356%CPU (0avgtext+0avgdata 222528maxresident)k
    40inputs+66872outputs (0major+13299085minor)pagefaults 0swaps

    2:20 vs 0:38! Incredible!
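    That “leave one CPU free” rule of thumb is easy to script, too. This is just a sketch that counts processors from /proc/cpuinfo and picks a job count one below it:

```shell
# Count CPUs and leave one free for interactive work, as described above;
# on a single-CPU machine, fall back to one job.
cpus=$(grep -c '^processor' /proc/cpuinfo)
if [ "$cpus" -gt 1 ]; then
    jobs=$((cpus - 1))
else
    jobs=1
fi
echo "make -j${jobs}"
```

    Then `make -j${jobs}` pins every core but one, and the machine stays usable while it works.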

