A failure of logic

Here’s a legitimate question, and one you should consider: If your CPU is 20 times faster than hardware from a decade ago, why does it take the same amount of time — sometimes longer — to go from a cold start to online and reading e-mail?

In light of desktop advances over the past few months, and the ones due over the next few, the question becomes more and more curious.

Unity, KDE 4.6, Gnome 3 … are they really improvements if they need the same amount of time, but more powerful hardware?

If I rephrase the question, the point gets a little blurred: If you have thirty to forty times the memory your computer had ten years ago, why does it take a proportionate — or perhaps even greater — share of those resources to manage day-to-day tasks?

At this point, conventional wisdom says, “Well, it’s easy and cheap to bolster the amount of resources available to a computer, so the question is moot.”

Quite to the contrary. Jamming a PC to the brim with the fastest processor and biggest hard drive and most memory does not erase the fact that the software it runs is becoming less efficient.

Which is a point asserted by the original question.

Which, by the way, was not mine. It’s from a Linux Journal article.

From nearly ten years ago. 😯

It’s sad to think that, over the course of nearly a decade, the issue is still lurking. And oddly enough (or perhaps not), Marco Fioretti’s rationale for the RULE project is still lurking too.

But I’m just an end-user. I don’t code — I haven’t the time or the skills at this point in my life to make a meaningful and considerable contribution.

So perhaps pointing out the incongruity — the perversity, almost — of relying on stronger hardware to run heavier software to do the same tasks as a decade ago … well, maybe that’s rude.

But I am just an end-user, and that means I have the option of throwing my meager weight behind projects that don’t follow that trend.

It does not serve my interests to use or endorse software that needs additional hardware, not because of the financial implications, or ecological self-righteousness, or because of underprivileged communities elsewhere on the planet.

It’s because logically, at its core, the situation and popular prescription make no sense.

So no, I won’t be buying additional hardware to meet the demands of Unity or KDE 4.6 or Gnome 3, nor would I with Windows 7 or Mac OS … whatever Mac is up to these days.

When software and desktops follow a curve that suggests speed and efficiency over glitz and gluttony, I will be on board.

But until then, I am doing fine with a 15-year-old Pentium and a few razor-sharp console programs. To each his own.

43 thoughts on “A failure of logic”

  1. paresse

    Simple. Lazy programmers.

    Steve Gibson is able to write short efficient assembly apps that just work.

    Most programmers these days can’t be bothered to take the time to write small, tight, efficient code.

    And if they are so inclined, there’s a release manager telling them time is money.

  2. Mike

    I much prefer the fluid graphics of today’s desktop environments to those of a decade ago. Also, we’re dealing with much larger data sets for things like HD video editing. Games are much better looking now than they were 10 years ago. We’ve nearly eliminated the HD bottleneck with SSDs and flash memory.

    I agree with you with regard to the many simple desktop applications that could be much faster. While part of the blame can be placed on our inability to manage memory effectively as programmers, I think it mostly lies with modern languages themselves. A lot of focus is being placed on cross-platform development without optimization for specific systems. Also, the interpreters and runtime environments for these languages generally feel bulky and slow. It seems that both the programs and the containers that run them load too many excess libraries and eat up excessive amounts of system resources.

    It’s probably all part of making programming accessible to the masses. Anyone with a slightly technical mind can make a website and hack together a JavaScript library. Unfortunately, programming doesn’t require formal training, and so people don’t program efficiently.

    Good post, btw. Love the blog.

  3. YouDoNotGetIt

    You don’t get it. At all.

    > If your CPU is 20 times faster than hardware from a decade ago, why does it take the same amount of time — sometimes longer — to go from a cold start to online and reading e-mail?

    Booting your PC is a disk-intensive operation. It has nothing to do with your CPU. Disk read speed is roughly the same as it was a decade ago, and that’s why program loading hasn’t gotten any faster. Get an SSD.

    Unity and crew could be ten hundred trillion times more efficient and you wouldn’t notice a difference in booting your PC or loading programs. I am not kidding.

    You do not get it. At all.

    > If you have thirty to forty times the memory your computer had ten years ago, why does it take a proportionate — or perhaps even greater — share of those resources to manage day-to-day tasks?

    Your operating system doesn’t require thirty times the memory; it’s using thirty times the memory. Linux preloads libraries into memory for you to minimize the slow disk reads I mentioned above. If you were to boot a modern OS with 512MB of RAM it would still work, but since you have 4 gigs, the OS makes use of them. Unused memory is wasted memory.
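
    As a rough illustration of that used-versus-required distinction, here is a minimal C sketch that reads /proc/meminfo on a typical Linux box. The field names (MemTotal, MemFree, Buffers, Cached) are the standard ones, but treat the program purely as a sketch of the idea, not anything from the original comment.

        /* memsketch.c -- separate "truly free" RAM from RAM the kernel is
         * merely using as page cache.  Cache is given back the moment a
         * program needs the memory, so "used" is not the same as "required". */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/proc/meminfo", "r");
            if (!f) {
                perror("/proc/meminfo");
                return 1;
            }

            char line[256];
            long total = 0, free_kb = 0, buffers = 0, cached = 0;

            while (fgets(line, sizeof line, f)) {
                sscanf(line, "MemTotal: %ld kB", &total);
                sscanf(line, "MemFree: %ld kB", &free_kb);
                sscanf(line, "Buffers: %ld kB", &buffers);
                sscanf(line, "Cached: %ld kB", &cached);
            }
            fclose(f);

            printf("total:                       %ld kB\n", total);
            printf("free:                        %ld kB\n", free_kb);
            printf("cache/buffers (reclaimable): %ld kB\n", buffers + cached);
            printf("actually in use (roughly):   %ld kB\n",
                   total - free_kb - buffers - cached);
            return 0;
        }

    On a long-running machine with plenty of RAM, the reclaimable number is usually the big one, which is the “using, not requiring” point in a nutshell.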

    Have you run out of memory in the past decade? Has any end user you know run out of memory in the past decade? No? Come back and complain when you have.

    You do not get it. At all.

    Please, you think you know what you’re talking about. You don’t. You do not understand computer performance. Your opinion is a meme. Please stop.

    1. Areader

      Well no, it’s you who doesn’t get it. Disk I/O is several times faster than it was a decade ago. The author is right: it is silly for PC hardware that is the equivalent of yesteryear’s supercomputers to be so slow at the same old mundane tasks.

    2. Bryan

      So leave. Honestly, if you don’t agree with some of the tenets put forth in this blog – Stop. Reading. It.

      I don’t know about anyone else that has anything to do with this blog, but I’m drawn to the ideas proposed herein not because of the CPU or memory usage. I follow this blog because the philosophy of much of the software falls in line with my own. That philosophy is one of ‘minimalism’ – of getting the most out of my software and hardware without overindulging or wasting anything. I don’t use Gnome/KDE/E17/Xfce or any of the others because I don’t find the crap they do useful. I don’t use them at all because I neither need nor want the flash and gloss they provide. They complicate an otherwise simple and clean system. Their internals are so large that I can’t understand them. DWM, on the other hand, I came to understand in one night. My point – dare I say *OUR* point – is that I can get the same things done with lighter-weight and smaller software; that much of the ‘functionality’ added to software over the years simply inhibits basic usage of the software itself, and that simpler, smaller software strips the crud off the top and leaves you with just what you need.

      You’re right – unused RAM is wasted RAM. That’s why I make the best use of it that I can – I dedicate it to filesystem caches, RAM drives that I compile in, and when needed, to program usage. I have 4GB of RAM installed on this computer and I’m using roughly 3GB of it – approximately 2GB of that is buffers though. Buffers that make my general computing experience more pleasant – files open quicker, programs open quicker and things feel snappier.

      And you’re also wrong. Users have run out of memory in the past decade. *I* have run out of memory. The only thing that keeps many from noticing is swapping out to disk (which – let’s face it – is a last resort when someone has run out of memory).

      I can’t speak for anyone else. For me, it’s about control and efficiency. I have more control over my software and feel as if I can be more productive when the pieces are made of lightweight software. Call me stupid; call me a ‘poser. Call me whatever you want. You won’t change my mind and I’ll continue to do things this way. If you don’t like what you see, stop looking. There are plenty of other blogs on the internet.

      (and now I’ve ranted and hardly touched the point you were making in the first place.)

    3. anon

      He misrepresents the case and you’re deliberately missing the point (or simply choosing not to mention it). As you’ve said, using an SSD has considerably helped my boot time, and the CPU doesn’t seem to matter much (in fact my older laptop boots up faster than my newer desktop). But it’s not just the hardware that’s the problem, it’s also the programs and services that run at startup, and that’s the main point of his drivel: complaining about the software presented to us in the newer OSes. Now whether you love having these programs and services around, or find them necessary at all, can be a subjective matter. I for one don’t miss anything when I switch to a minimalist distro and enjoy better boot times.

      Windows and Linux will gladly use all the RAM they can get to make the system fast and responsive (which is why we buy RAM in the first place). In reality they do not need 4GB of RAM, which is more than enough for anything; this much is certain. But denying that newer systems require more RAM and more powerful hardware is just wrong. My experience with them on older laptops and PCs is quite horrid: the system is constantly using swap, and every two minutes it throws a fit, temporarily freezing the UI. You ask about ‘running out of memory’? Funny you should ask that.

      Normally software that is more taxing on the system should be able to offer us better graphics, more responsiveness and convenience. But the question here is whether that’s the case.

    4. Noyoudontgetit

      > Booting your PC is a disk-intensive operation. It has nothing to do with your CPU. Disk read speed is roughly the same as it was a decade ago, and that’s why program loading hasn’t gotten any faster. Get an SSD.

      Patently false. SATA disks are the same speed as ATA100?

      Even if this were true, it doesn’t explain why it takes longer.

      > Unity and crew could be ten hundred trillion times more efficient and you wouldn’t notice a difference in booting your PC or loading programs. I am not kidding.

      Seriously? So if my e-mail client were more efficient and thus smaller, it would still take just as much time to load into faster memory from a faster hard drive with a faster processor?

      > Have you run out of memory in the past decade? Has any end user you know run out of memory in the past decade? No? Come back and complain when you have.

      Very well: I have. I have 4GB of RAM on my Linux system, and I’ve used it all up before and had the system grind to a halt when it had to start swapping.

    5. Dann

      It’s not only the disk, it’s the disk FILESYSTEM.

      Just think.
      Most of our USB drives come with FAT32 by default. That is NOT a great filesystem in general, though when used on flash memory it will perform better than on an ATA drive.
      NTFS has been going strong for 10+ years with little improvement. Wonder why 7 takes as long to boot as XP?

      With the advent of ext4 for Linux/BSD-based OSes, things have gotten much faster. 10-second boot times? Been done. Instant-on? Also done. Running a server? You have Reiser, JFS and XFS if you don’t like the ext family, or Btrfs if you’re adventurous.

      Oh, and have you perhaps tried running on RAID? Raptors at 10,000 rpm?
      Disk encryption will slow down boot times as well; there are many factors involved.
      A lot of older laptops used 4,200 rpm 2.5″ disks. Now we have 7,200 rpm.

      That’s also forgetting functionality and OS bloat.
      Obviously this man has never used DSL or Puppy Linux from a CD-ROM, a far slower medium to read from.

  4. yoshi314

    that’s because most software is growing in features and getting bigger, and people tend to write software in higher-level languages.

    for instance, you could build a kernel on a 16MB box in 1996, but nowadays gcc has such sophisticated methods of code analysis during the build that it can take up to 200MB of ram to compile plain C code (during a c++ compile it simply skyrockets).

    and even though it’s bigger and more bloated, a more recent gcc will produce a better quality build.

    also, just because the CPU is faster doesn’t mean the whole computer is magically faster. hard disks still work the same way they did years ago, and there are still bottlenecks in pc architecture that affect boot – for instance, the bios is usually still written the way it was years ago (although using coreboot can make a huge difference).

    to each his own. even though my machine can handle kde4 or gnome3, i am still fine with openbox.

  5. cherax

    “…the same tasks as a decade ago…”

    There’s your failure of logic – because the tasks are not the same.

    If you’re working with text only, then any modern computer is wild overkill; even a Kaypro CP/M machine would work just fine. But, ten years ago, I couldn’t edit the HD video or 12MP bitmap images, nor create, edit, and manipulate the 3D images that I routinely work with today. And my old IBM X30 laptop even struggles with youtube videos (although it’s great for writing).

    If you’re not part of the graphics revolution, then there’s really no reason for you to use a modern computer; that’s true. But you dismiss powerful graphics capabilities simply because you’re not interested in them, preferring instead your white monospaced text on a black console background. Essentially, you’re saying that modern computers are of no value because K. Mandla has no need for them.

    Don’t get me wrong – I don’t like software bloat either; an update to the latest version of Keynote wanted to pump 750MB (!) onto my Macbook, and I just drew the line. A complete, packed-to-the-gills Linux distro weighs less than that, complete with OpenOffice (which is also obese). 750MB? WTF? But shitty, wasteful programming techniques don’t invalidate the whole enterprise.

  6. Dennis Hodapp

    Do your research, Mr. YouDoNotGetIt: http://www.computer-definition.com/access-time.php

    Disk access time for an HDD has almost halved in 10 years. Now, admittedly, it has not kept up with capacity, but yes, as you point out, SSDs are addressing that issue (with apparently extremely fast access times … makes me wish I’d bought my laptop a bit later).

    As K.Mandla regularly points out in his posts, he tries to reduce the number of libraries loaded into memory. As any programmer knows, it’s easy to write a program that borrows a few functions from a library for convenience, only to realize that the mass of unused functions is just a drag on memory, when it would be far more memory-efficient to write the function yourself, or find a smaller library, etc.

    You should also try not to be such a pretentious prick when you post comments. It doesn’t do any good.

    1. Corky

      “As K.Mandla regularly points out in his posts, he tries to reduce the number of libraries loaded into memory. As any programmer knows, it’s easy to write a program that borrows a few functions from a library for convenience, only to realize that the mass of unused functions is just a drag on memory, when it would be far more memory-efficient to write the function yourself, or find a smaller library, etc.”

      Or use static rather than dynamic links. When you link to a static library, the resulting executable contains only the functions that you’re actually using (directly or indirectly).

      Shared libraries are often touted as memory savers, because the same library can be mapped into the address space of several different processes. The problem is that there is a lot of code in many of the so-called shared libraries out there that is rarely shared, if ever. But it gets loaded into memory whether it’s needed or not.

      Sometimes the typical user’s usage pattern is such that there is never more than one process running that uses the library, and it only uses a small fraction of the functions contained in the library. Doesn’t matter, the whole thing gets loaded.

      Even worse, the library often has dependencies on yet more libraries which must also be loaded into memory — again, whether or not the application actually uses any of the functions that lead to the dependency on these other libraries.

      Code reuse is a good thing. Always reinventing the wheel means more bugs. Reusing code that’s proven its worth reduces the bug count. But shared libraries aren’t the only way to reuse code. Static linking is usually better, in my opinion.

      But there’s another form of code reuse that is often overlooked, even though we are always prattling about the virtues of open source: if the function you need is available in a library, why not copy and paste from its source code? Surely that’s in the spirit of its license? Take what you need, and leave the rest.
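
      To make the static-versus-shared distinction concrete, here is a small hypothetical sketch in C (the file and function names are made up for illustration). When the program links against a static archive, the linker pulls in only the object files that resolve symbols it actually uses (and, with -ffunction-sections and --gc-sections, only the functions); when it links against a shared object, the whole library is mapped into the process at run time.

          /* util.c -- a tiny two-function "library" (hypothetical example) */
          int used_helper(int x)   { return x * 2; }
          int unused_helper(int x) { return x * 3; }  /* never referenced below */

          /* main.c */
          int used_helper(int x);
          int main(void) { return used_helper(21); }

          /*
           * Static build: only the archive members that main actually needs
           * are linked into the executable.
           *   cc -c util.c main.c
           *   ar rcs libutil.a util.o
           *   cc -o demo main.o libutil.a
           *
           * Shared build: the entire libutil.so is mapped at run time,
           * unused_helper included, whether or not it is ever called.
           *   cc -fPIC -shared -o libutil.so util.c
           *   cc -o demo main.o -L. -lutil
           *   (run with LD_LIBRARY_PATH=. ./demo)
           */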

      1. lefty.crupps

        @Corky are there any distros that are as completely static-compiled as possible? Specifically, one with a heavy desktop; I’d like to see the performance…

        1. Bryan

          The only ones I know of are still extremely experimental: StaLi and Sabotage. Both are also put forward by the guys at Suckless, so you’ll never see Gnome et al. on them :/

          1. lefty.crupps

            no Gnome sounds heavenly! But I suspect KDE is also excluded.

            Time to get off this email chain; I was waiting for a response to my comment (above) but I’ve been getting emails for Every Response On This Post, which is not at all what I signed up for.
            But, I’ll be back; I find links to here from tuxmachines.org and I always find this site to be an entertaining read.

  7. Pingback: Links 22/4/2011: Linux References in Portal 2, Preview of Fedora 15 | Techrights

  8. keithpeter

    Hello All and happy holiday for those that are having one.

    15 years ago I was using an Acorn A3000 RISC computer and it could surf the web on a 28.8 kbps modem. It had 8MB of RAM and a huge 20MB hard drive. Acorn computers used a vector format to compose screens, so I could save the appearance of any Web page as a ‘draw’ file. I could barely play music files, and flash video would have been out of the question. Screen resolution was 800 by 600 on a 14-inch CRT (the line output transformer of which failed amid sparks and acrid smoke one day, most entertaining).

    Now when I surf the Web, I expect to be able to watch video embedded in the page while listening to a selection of music and writing a worksheet or two in a word processor. I personally choose to use Firefox, LibreOffice and Rhythmbox for those activities, hence a fair old RAM use on this 6-year-old laptop. I like open source/free software, and I suspect the range of that software would be smaller if programmers worked in machine-specific code.

    I could choose to use links2, TeX and mpg321 for those functions, and I might well give it a try over the summer. The point I am making is that we all have a choice.

    This laptop is running bodhi linux at present, based on Ubuntu 10.04 but faster and slimmer. It’s a good compromise for me right now. Your choices may be different.

  9. Johnny

    All things being equal, K.Mandla is pointing out that even with massive increases in hardware over the past 10 years, we have the same issues. He is also pointing out that this was first written 10 years ago. Sure, we can do more graphics, video, games and so on, but the same bottlenecks are still there. Why? I don’t agree it is all due to code; it is also due to hardware, and to the heavy hand of manufacturers that want or need backward compatibility in some areas, and in others to problems that have never been solved, like disk access speed compared to capacity.

    I find the post interesting because 10 years have gone by and there are undoubtedly issues in computing in general that have not progressed. Perhaps a paradigm shift is in order?

  10. mother

    Copying code from libraries is the sanest suggestion that has been made here so far. That will enable you to use code libraries without all the bloat of the entire library being linked in.

    anyway just do your own thing and then you will eventually die and go to hell and burn in the pits.

    1. Mike

      Copying code from libraries could easily lead to problems. If you aren’t intimately familiar with the library, copying only the components you need could be dangerous. It could also lead to maintenance issues down the line.

      Isn’t the point of a library encapsulation? If you need something slimmer, I think it would be better to just write your own library.

    2. Dennis Hodaplp

      While it seems like that’s the magical answer, there are programs that do make heavy use of certain libraries, and just copying the code in (or statically linking it) would be wasteful, because that requires a much larger program to be loaded into memory without any advantage to other programs on the system. When you dynamically load a library, it is only loaded into memory once, which means that other programs can also use the same library without having to load their own instance of it. This can be a great memory saver, but the point is that at least one other program should be using the library, or dynamic loading is not of much use.
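
      The point above is about ordinary dynamic linking; dlopen(3) is the explicit, run-time flavour of the same mechanism, and a minimal sketch of it looks something like this (libm is used here only because it exists on practically every Linux system):

          /* dlsketch.c -- load a shared library at run time and call one
           * symbol from it.  The read-only pages of the .so are mapped once
           * and shared by every process that uses the library. */
          #include <dlfcn.h>
          #include <stdio.h>

          int main(void)
          {
              void *handle = dlopen("libm.so.6", RTLD_LAZY);
              if (!handle) {
                  fprintf(stderr, "dlopen: %s\n", dlerror());
                  return 1;
              }

              /* Look up a single symbol rather than linking libm at build time. */
              double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
              if (!cosine) {
                  fprintf(stderr, "dlsym: %s\n", dlerror());
                  dlclose(handle);
                  return 1;
              }

              printf("cos(0) = %f\n", cosine(0.0));
              dlclose(handle);
              return 0;
          }
          /* Build on glibc with:  cc dlsketch.c -ldl   (no -lm needed) */

      Whether the shared pages actually save anything depends, as noted above, on more than one process mapping the library at the same time.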

  11. llewton

    “If your CPU is 20 times faster than hardware from a decade ago, why does it take the same amount of time — sometimes longer — to go from a cold start to online and reading e-mail?”

    Here’s another: if my computer is 100,000 times more powerful than the one that monitored the landing on the moon, why aren’t I posting this from the Gamma Quadrant right now?

  12. demonicmaniac

    Of course, the case for faster booting falls down at hardware initialization on machines that aren’t yet UEFI: good old BIOS and hardware init, and waiting for devices to settle. An SSD does indeed reduce boot time to about as low as it can go, except for that hurdle.
    I also agree that much of the problem isn’t necessarily bad software design but bad standards design. Look at HTML4 vs HTML5: an HTML5-compliant browser and parser will necessarily need at least 4x the memory and 4x the processing power of an HTML4 one. Parsing XML configuration, with its complex tree and much more text to read, versus simple key=value pairs will also take a gob more memory and processing power (see the key=value sketch below).
    The move is towards making it piss-easy for people to create something and hoping hardware catches up to make it bearable for the end user; that trend has been here at least since the late 90s, with the advent of interpreted scripting languages with massive batteries included, like python and ruby.
    That said, I’m still with K.Mandla on it being a perversity that the focus on bling and glitter and glitz pushes real, effective program usage (not swap and buffers) from roughly 40MB with Xvesa, to 80MB on full Xorg, to 300MB under a desktop environment. Or, god forbid, gtk2, gtkmm2 and gtk#2 all loaded at once, plus all three in their gtk3 variants, for 100MB used up by shared libs alone.
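
    For comparison, a complete key=value reader really is only a handful of lines of C. This is a hypothetical sketch (the file name app.conf and the format rules are assumptions, not anything from the post), but the contrast with the machinery a conforming XML parser has to carry is the point:

        /* kvsketch.c -- minimal key=value config reader */
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            FILE *f = fopen("app.conf", "r");
            if (!f) {
                perror("app.conf");
                return 1;
            }

            char line[512];
            while (fgets(line, sizeof line, f)) {
                line[strcspn(line, "\n")] = '\0';       /* strip trailing newline  */
                if (line[0] == '#' || line[0] == '\0')  /* skip comments, blanks   */
                    continue;

                char *eq = strchr(line, '=');
                if (!eq)
                    continue;                           /* not a key=value line    */
                *eq = '\0';                             /* split the line in place */
                printf("key [%s]  value [%s]\n", line, eq + 1);
            }

            fclose(f);
            return 0;
        }

    No tree, no entity handling, no character-set negotiation: it reads one line, splits it at the first ‘=’, and moves on.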

  13. ford white

    You have a couple of things going on, all of which I hate. The first thing is that novice programmers are writing large-scale applications in high-level languages. The second thing is that people are using graphics and effects for normal, every-day things that do not need to be particularly flashy. And, in the end, people hated using command-line stuff, and most people are intimidated by it. We have made a choice to give people something easy and pretty rather than something that works efficiently. I honestly believe that most people’s needs could be met by 16-bit CPUs, but of course, this would require them to actually learn something, and we know that people are loath to do such things.

    Mutt does email well. Netsurf’s console browser can do CSS and images, as well as some JS in the console. Vi, Emacs, Ne and many other editors can do some great stuff with text, and I bet we could get an office suite of some kind going for the Linux CLI. Transmission can do torrents in CLI. The list goes on. All of these are decently well written applications that require next to nothing in terms of resources. People just don’t like them.

  14. Peter

    It is not really possible to buy a PC today with less than 2GB of memory. For the last couple of years I have not been bothered by slow software. The boot time of my Fedora from GRUB to the login prompt is 30 seconds, and I can live with that. What I cannot understand is what is going on with the BIOS, which now takes up more than half of the boot time.

  15. Far McKon

    It’s pretty obvious if you look at it from the ‘who makes the stuff’ angle, and it comes down to a simple pair of interacting rules:

    1) Premature optimization is the root of all evil.
    2) Developers have reasonably high-end machines
    ==========
    3) Programs are built to run ‘just fast enough’ on a dev machine.

    Developers try to build something as quickly as possible, with the least work (aka high-level languages), so that it’s usable in their book. The human annoyance factor (arrgh, this takes too long to load) is a constant, so that is the limiting factor in how much effort is put into optimization.

    ∴ Almost all software is written to be ‘just fast enough’ to not be annoying on a developer’s machine with a decent Internet connection.

  16. Gumnos

    10 years ago…I’ve still got a box from that time period (an 800MHz processor, maxed-out at 384MB of RAM, and a new HDD in it). The unit came with WinME. As an experiment, I installed that WinME on a more modern machine (double the processor speed & RAM), as well as a Linux distro from that time period. Both were pretty darn snappy on bootup. If you’re willing to go with a more spartan OS, then you can compare more equally. Modern OSes do more and thus load more and thus take longer to load.

  17. Chris

    Actually, Linux has seen way more speed improvements than Windows and Mac. I recently upgraded my computer to an i7 and I noticed barely any speed improvement over my 2-year-old motherboard.

    When I installed a solid state drive the improvements were amazing!! Applications open almost instantly. I am going to get one that uses SATA3 and has faster reads and writes. That is the route to go if you really want speed improvements.

    One of the issues that everyone forgets is that the desktop can only go so fast. A user needs to be able to focus, so everything will stay at a certain speed to allow for that. Another issue is that you actually adjust to the speed of whatever you are interfacing with. So you might notice a speed difference in the first couple of months, but then you will have adjusted, and the speed of an old computer or a new computer will make no difference.

    I have seen this happen with graphics as well. Your brain will improve poor graphics up to the level of reality in your mind, while super-realistic graphics will be reduced to a level that your brain can deal with.

    Everything is adjusted to your perception in most cases.

  18. JustPassingThrough

    I don’t think that the problem can be totally blamed on the programmers. Amateur programs are normally small and inconsequential in the overall working of the computer. The problem ones, which are written by professionals, are crap because of money.

    Case A: Closed-source programs are pushed out the door as soon as possible so they can be sold and the company can make money. The programmers are given deadlines that don’t allow them to recode their product the way I’m sure most of them would like to. It needs to work, not be efficient. The client can always throw more RAM or a larger hard drive at the computer if they have an issue with that.

    Case B: Open-source programs are mostly written by people who have day jobs and don’t have enough time to make it “perfect” before they send it out. Otherwise, it would never get out of the “door.” [personally I think that people are so used to software that is crap from the big companies that they will settle for whatever they can get that works, even if they have to reboot or kill% occasionally]

    Yes, some programmers are lazy. Many are truly untalented. But I really think that, of the ones to whom this is an art or a passion, most take pride and DO care, but just don’t have the time due to their deadlines or life in general.

  19. DannyB

    There are a few reasons why:

    (Programmers are lazy.) Actually programmers are now more productive than ever. We use higher level languages. More abstractions away from the hardware. Automatic memory management (aka GC). These gains in programmer productivity are paid for in greater hardware cost to run the abstractions. Ask yourself this: you really want that next great release of software X. Would you be willing to have it six months sooner if it required an extra 256 MB of RAM to run?

    (Steve Gibson can write assembly . . .) Good for him. The arrow of abstraction points only one way. I could write in assembly too. It’s just not a productive way to write applications. If you disagree, do what you want. The world moves on.

    Applications today do way more than they used to. Some of it is in bells and whistles that you don’t notice until you need them. Some of it is fluff like animations and eye candy. Some eye candy is just amusing. Some is genuinely productive. And some is plain annoying.

    In 1984 Apple introduced the Macintosh. 128K (not meg) of RAM. An 8MHz 68000 processor. I remember trade rag articles (this was before them intarwebs) incensed at the “waste” of computer power just to have a nice GUI. The laugh was on them. Now everyone uses GUIs. It’s called progress. Get used to it.

    Our cell phones now have hardware (but also software!) that is *vastly* more powerful than what that 1984 Macintosh had. The file manager on my android phone is more capable and way more featureful and still easy to use.

    One man’s “bloat” is another man’s “features” and another programmer’s “productivity” enhancing higher level abstractions.

  20. Hans Bezemer

    Sorry, but I don’t follow your drift. When new hardware becomes available, programmers will usually use it – the argument being that the resources are there. I, for one example, don’t take that into consideration. My 4tH compiler has grown, surely, but not in the same way as the hardware. Consequently, my old benchmarks are useless, because they don’t give any accurate readings anymore. The size is still about 64K, the speed has only *increased*, and you can still run it on MS-DOS. The complaint I get is that “it hasn’t grown”. So, whatever route you take as a programmer, people won’t be satisfied. If you don’t follow the hardware, you’re “lagging”; if you do follow the hardware, you’re “not adding any speed and features”. For developers it’s a no-win situation. There’s always somebody complaining.

  21. Drone

    Gee, I must be missing something here:

    A general-purpose distro and/or OS that is supposed to support a wide variety of hardware platforms, plus a lot of third-party plug/unplug hardware connected to it, will take a long time to boot because it has to load all the possible drivers. An OS that targets a specific and STATIC (for the most part) set of hardware and “approved” devices that can connect to it (e.g., the Apple Fortress) doesn’t have this problem. That’s why you see “rich” fast-boot environments on devices that are “locked up” in terms of hardware (and I must emphasize “rich” in terms of graphical speed and user experience). If you tailor your boot scripts to deal only with your specific machine and what may or may not connect with it, your boot times drop dramatically. This has been proven over and over again. We see this today in the likes of netbooks that ship a “fast-boot” version of Linux: it boots fast, but only on that specific machine.

    Hmmm…

  22. Pingback: (no subject) « Motho ke motho ka botho

  23. Tobias Mann

    Ok, yeah, lazy programmers, yes, and the fact that most of the new hardware isn’t being accessed efficiently. Most programmers aren’t writing software to take advantage of multi-core and multi-threaded hardware. Plain and simple. Also, there may be a feeling that because the hardware is so powerful it can just brute-force its way through poor code, unlike in the past, when if you wanted a program to work it needed to be neat and efficient.

  24. Mike Lockoore

    The bloated, slow apps using bloated, slow libraries in bloated, slow operating systems (Windows) and distros (Fedora) drove me crazy and prompted me to develop small, fast apps in TinyCore Linux. I try to use only the built-in FLTK GUI library. I’ve made a 115K GUI file manager, a 40K picture/slideshow viewer, and a similarly sized clock/sound/battery tray-type app. Some of us are still trying to be efficient. Steve Gibson is an inspiration to me. I wish he kept sharing his projects.

  25. Roger

    It is not lazy programmers. It is planned obsolescence. These companies need to support the hardware vendors that support them. They are creating a need for the public to replace working hardware, to bolster the profits of the hardware companies so that in turn the hardware companies will support them. This is true even of Apple, since Apple does not design and build all of the components that its machines are built from.

    It is not just operating speed that changes but they introduce new interfaces or protocols that need new dedicated hardware to operate at speed. When they have a choice to modify an existing platform to enhance performance in a backward compatible way, they are just as likely to introduce a completely new way to connect it to a computer so that you have to replace your whole system just to use the new tech.

    What I do not understand is why Gnome 3 needs accelerated graphics, except for a little polish on the look and feel. The underlying organization and operation certainly do not look that demanding. I know that they need it to look sparkly and shiny and new to compete with the proprietary offerings. It does nothing for people who are more utilitarian, like me.

    I have a Windows 7 machine that I power up once every couple of months when there is some dreary need. I have all of the “special” graphics features turned off that I can. They do nothing but waste my time.

  26. Robert Pogson

    I find the post very relevant to what I have seen in schools with diverse computers and diverse users. When I put GNU/Linux on a machine that formerly ran XP, boot times to the login screen are similar, with a slight edge to GNU/Linux. When logging in, however, XP is 2 to 3 times slower. A lot of that is because XP starts swapping immediately on low-RAM systems, even while it thinks it is pre-loading software that may or may not be used…

    There are a number of issues affecting boot speed. Size of code and number of files both reflect bloat. Another biggy is that the OS needs to load many files, and most PCs have a single hard drive, making it the bottleneck. On machines where speed of booting matters, I put multiple hard drives in RAID 1 and boot times improve dramatically. This is accelerated further with “dependency-based booting”: the order in which things are started is determined by their dependencies, and starting several processes with different dependency chains in parallel really helps. I have seen virtual machines with dependency-based booting work in under 4s.

    After you have reduced bottlenecks to a minimum there is still another method of speeding booting/loading of apps, terminal servers. Have a big powerful machine idling with all the needed files in cache/RAM. A login on that machine will be 5X faster than a cold machine. Then connect thin clients with a minimal system and dependency-based booting and you have the optimal performance. I have used GNU/Linux terminal servers and thin clients in classrooms for years. When students get used to 2-5s logins to a working desktop and sub-2s loading of browser or word-processor, they feel pain when faced with the usual desktop with single hard drive.

    My terminal servers use fast hard drives (500GB+) or SCSI, and lots of RAM, with a gigabit/s network interface to the clients. I can do this magic even with an 8-year-old PC and beat the pants off a new PC. I would love to use SSDs, but so far the price/performance isn’t where it needs to be.

  27. kache

    My 2 cents: the reason the programs now require more resources is that the programmers use the power of the new machines to add more options and more things to the program, so now it can do more things (possibly at the same time) and give the user more eye-candy. 🙂
    Also, just found out about this blog. Gonna add it to my greader feed.

  28. Pingback: A serious reminder « Motho ke motho ka botho
