Knowing that my own perspective on computers and usability is slightly skewed from the norm, I occasionally hesitate to offer an opinion on certain topics. It’s not because I think I’m wrong, but because my opinion sits so far from the median that I’m afraid I’ll be mistaken for a troll, or perhaps even a lunatic. (I don’t know which would be worse.)
This time it was a thread called “overpowered,” asking why so many Ubuntu users seem to use machines far beyond what is technically required. I felt like answering, but knowing that I occasionally resemble the weirdo hermit living alone out on the mountain, I decided not to.
The answer, of course, is obvious. People buy newer, faster, higher-end machines — particularly multicore systems and cutting-edge video cards — because they feel there is some sort of application that demands the power. That can mean a dual-boot system for gaming, serious compiling, virtual machines, rendering, and so forth.
I can appreciate that — after all, one of the main reasons (aside from sentimental value) that I keep my Inspiron is that I need the “muscle” to do some of the compiling for other, older computers. Of course, nowadays the idea of a 1GHz machine serving as compiling “muscle” is almost laughable.
At the same time, I can sympathise with the original poster’s question. I don’t think it’s inflammatory at all. For all the people who respond that they need that “power” for gaming or rendering or compiling, I wonder how many of them actually require it on a regular basis … and how many rarely, if ever, need anything beyond the comfortable 1GHz I consider a speed demon.
It’s not for me to say. I don’t know how much compiling or rendering or virtual machine use really falls to the casual Ubuntu user. When I used Ubuntu on a daily basis, it was extremely rare that I needed to compile anything, considering that the bulk of Ubuntu is prepackaged and ready for anything.
I do have another hypothesis — that the push for newer, faster hardware is a bit of an aftertaste from using Windows, or from owning Macs. Call me crazy (plenty of people do), but the blanket solution for most Windows users facing any performance decay is invariably new hardware: a faster machine, more memory, a larger hard drive, a newer video card.
I used to fall into that same trap (I use the word “trap” deliberately), so please don’t be offended if I have somehow labeled you; I’m labeling myself too. After all, you’re talking to the person who sank $3,000 into a then-state-of-the-art Dell M170 back in late 2005, only to sell it off a few months later, once dual-core machines hit the market.
But having been taught that the best solution (short of switching operating systems) for poor performance was to sink more money into a computer … well, I may be crazy, but I think the idea that better-faster-stronger is only possible with new components gets ingrained in some people.
So reflexively, regardless of how long we’ve used Linux or how we came to meet it, everyone (me too, and I sometimes have to pinch myself as a reminder) naturally assumes more power is necessary, newer hardware is necessary, the latest and greatest is necessary.
Of course that’s only true if my original answer — that there exists some application which requires the power — holds. For me, and probably for the majority of “casual” computer users, multicore, cutting-edge components aren’t actually “necessary.” Checking e-mail? No, not really. Watching YouTube videos? Well, some power is required, but I can get it done at 450MHz if I want. Gaming? Depends on the game, really.
Again, I don’t have an answer beyond the obvious: people buy it because they do something that needs it. But on the other hand, I can’t help wondering how much need is really left once we strip out all the compilers, the renderers, and the virtual machine users, and we’re left with the day-to-day users, chatting, surfing, and playing Tetris.
Which, by the way, turned 25 years old last week. How’s that for a roundabout close to a blog post?