When you put it that way …

These days, when I see posts or threads lambasting the command line, my reaction is almost amusement. In the old days I found them obtuse, but now that I am cresting my fifth year with Linux, I see them more as a lack of education than willful provocation.

Still, threads asking when the command line will quietly submit are abrasive. It’s unfortunate that so many people see the command line as some sort of throwback, when in actuality it is often what’s doing the work — they just don’t know it. Perhaps the most diplomatic way to bridge the two sides is with this kind of answer: that the appeal might lie with not needing to use the command line.

But as I’ve mentioned in the past, for some people (like me) that idea is exactly the opposite: The appeal lies in not needing to use a graphical interface. Wishing for an end to the command line is just as ludicrous and just as asinine to some people (like me) as it would be for me to openly pine for a day when graphical interfaces are finally abolished.

When I put it that way, it sounds quite silly, doesn’t it? 😐

18 thoughts on “When you put it that way …”

  1. Walter

    My 2 cents on this.
    I find the command line useful in many circumstances. As such, I consider the availability of so many command-line tools one of the advantages of Linux over Windows (think of something as simple as resizing or converting a bunch of images: it’s actually easier via the command line than with a GUI – at least in my experience).
    This said, I think what scares people off (including myself sometimes) is finding out exactly what to do.
    For example: I was going earlier through a previous post of yours on lightweight graphical systems in Ubuntu.
    As I installed openbox, I wondered: how could I have possibly figured out that I was supposed to install openbox AND obconf + obmenu to get the system set up?
    I know that ‘technically’ you don’t need obconf and obmenu, but, I assume, in 99% of the situations, if you install openbox you really want to have those 2 as well.

    You see the point?
    I have no problem going through the command line, BUT, if I need to install a piece of software, it’s quite convenient to have one file, double-click it, and let it do the rest (I’m going to say it, don’t shoot me: “Like in Windows”).
    I guess there must be a good reason why, but I really cannot get myself to imagine what that would be: why isn’t there an “openbox-install” which would take care of installing ALL needed packages? It seems to me something so simple and practical, and it would make the lives of millions so much easier …

    Can you share your thoughts on this?
    Walter

    Reply
    1. Nugnuts

      “I find the availability of several command-line tools, one of the advantages of Linux over Windows”

      Technically, Windows has a command line tool as well (cmd.exe), though it is certainly not typically discussed with as much prominence as the various shells in Unix-like systems. I, for one, do not know how to use it with much proficiency at all, though I certainly know that command-line Windows gurus exist.

      “I guess there must be a good reason why … there isn’t a ‘openbox-install’ which would take care of installing ALL needed packages.”

      There isn’t necessarily a very good reason. Ubuntu, for instance, has several metapackages which do exactly that type of thing. Which is to say, installing whatever metapackage pulls in all the requisite package dependencies for whatever software is being installed. However, having separate packages is useful for those who might not want related, though distinct programs (for instance, one might prefer to manually configure one’s openbox installation by hand-editing config files, and not want obconf installed on one’s machine). Having a metapackage for just two or maybe three packages might be seen as too much clutter for the package management system. I would guess it also could depend on how often such a metapackage would be used. I don’t manage any distributions myself, so I cannot say with any type of authority one way or another.
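
      A Debian-style metapackage, for reference, is little more than an empty package whose control file declares dependencies. Purely as a sketch (the package name “openbox-full” is hypothetical), such a control file might look like:

      ```
      Package: openbox-full
      Architecture: all
      Depends: openbox, obconf, obmenu
      Description: hypothetical metapackage for openbox
       Installs openbox together with its common companion tools.
      ```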

      So, anyway, the lack of a metapackage for any particular set of software isn’t a deficiency of the command line interface per se. The situation you describe with openbox would be comparable, in my view, to installing a web browser (even on Windows) and separately needing to install, say, a Flash plug-in. Not everyone wants the Flash plug-in, of course, but I imagine the percentage of people who do would be similar to the percentage of openbox users who would like to take advantage of obconf. Admittedly, there is the distinction that the browser writers are distinct from the Flash plug-in providers, but I don’t think that is really paramount here. My point is that even GUI-based installers can suffer from similar multi-install software dependency issues, so overall it’s not really a question of interface, but of managing innumerable combinations of dependencies and configuration options optimally. (And in terms of interface, I acknowledge that the GUI-based browser will inform you of the need to install the separate Flash plug-in. But similarly, a command-line system like APT will inform you of suggested packages; you can even use the --with-recommends option for aptitude to automatically install them, which might be the short answer you’re looking for.) Either way, there’s really nothing stopping someone from packaging up all the openbox-related things into a single .deb or .rpm or .tgz or whatever, just as one might package up everything in a single, double-clickable GUI installer.
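
      As a sketch of that short answer, assuming a Debian/Ubuntu system with aptitude installed:

      ```shell
      # Inspect a package's dependencies and recommendations before installing:
      apt-cache depends openbox

      # Install openbox and automatically pull in its recommended packages:
      sudo aptitude --with-recommends install openbox
      ```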

      That’s my excessively verbose perspective, for what it’s worth. Hopefully it was helpful.

      Reply
      1. Walter

        Nugnuts,
        thanks for the reply. I don’t mind lengthy ones, especially if constructive and useful 🙂
        I am familiar with Win’s cmd: I have been using MS since DOS 5.0 and I have a few batch scripts even today (e.g. to clean up rundll’s 8) ). In Linux though, because so many people use the command line, there’s a plethora of useful tools which you can see have been designed to be efficient. Take mogrify, for example. A powerful, simple and quick image resize/modify tool. Can’t match that with a GUI. I could probably find a similar command-line tool in Windows, but I’d have to install it, make sure the executable is in the “path” so that I can access it from anywhere … and all this while dealing with long directory names in an environment that does a half-assed job of supporting them.
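
        To make the mogrify example concrete (assuming ImageMagick is installed; note that mogrify modifies files in place, so work on copies):

        ```shell
        # Resize every JPEG in the current directory to 50%, in place:
        mogrify -resize 50% *.jpg

        # Convert a batch of PNGs to JPEG (writes new .jpg files alongside):
        mogrify -format jpg *.png
        ```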

        Back to Linux: my example on openbox is just that: an example. There are tons more. To install Firefox I could use sudo apt… firefox, but also sudo … firefox-3.6. If I pick Firefox, do I miss out on something? Which version will be installed? And, most importantly, how do I find out the answers to those questions?
        I have no problem supporting separate packages for max flexibility, but the way it works today is just not efficient. Maybe my expectations are unreasonable, or maybe I am missing the point, or it’s simply that I don’t know enough about Linux to “see” what to do.
        What amazes me is that so many people put so much effort in writing tools that are so handy, and then they “fail” to make those tools accessible to everybody in a simple/handy way.

        I do follow your comparison with browsers and Flash plugins, and it is somewhat relevant, but you see how practical it is to install a missing plugin? You get a clear message (“Hey, you need this plugin to view this box”) and a very clear action (“Click here to install it”).
        The equivalent would be a “firefox-read-me” or “firefox-main” or whatever that tells me exactly what I get if I install each of those packages, without having to scavenge through wikis. Maybe I’m asking too much, but it seems a minimal effort compared to the building of the app itself.

        Reply
        1. Nugnuts

          Hmm … perhaps I don’t quite understand what you’re after. Command-line package managers, like APT, generally do tell you what you will get should you install a given package. A simple ‘apt-cache show firefox’, for example, will tell you which version of firefox that package would install, in addition to a bunch of other information, like a description of the software in the package. The aptitude utility would also be able to provide these details.

          So I guess that’s one way to find answers. But perhaps you’re also asking about how to know how to find answers? Forgive/ignore the following if that’s not what you’re after. To install something on an APT-based system, I guess you would have to at least be familiar with the fact that you’ll be using the apt-get command (or perhaps aptitude). With generally any command line program, you can always bring up its manual/help information with a call to man. Doing a ‘man apt-get’ gives a bunch of information about how to use apt-get, and also suggests looking into the apt-cache command (in the “See Also” section near the bottom of the man page). So one could/should then also do a ‘man apt-cache’ and would see how to use that. As for knowing that one should invoke man to begin with, well, running ‘help’ at the command line suggests doing exactly that. The output of help also recommends the ‘info’ command, which is another way to get helpful information. I think typing help at the command line is generally analogous to a help menu on whatever GUI application. (Of course, even with just text, you can pull off some semi-fancy ‘GUI-like’ functionality such as pull-down menus and whatnot, facilitated by libraries like ncurses–aptitude, for example, has such menus.)
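
          In other words, the discovery trail described above looks something like this at the prompt (assuming bash on an APT-based system):

          ```shell
          help                      # bash built-in; its output points to 'man' and 'info'
          man apt-get               # usage details; "See Also" mentions apt-cache
          man apt-cache
          apt-cache show firefox    # version, description, dependencies of that package
          apt-cache search browser  # keyword search when you don't know the package name
          ```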

          So I would say the information you’re after is all there; it basically just needs to be read. One could certainly argue that reading through a bunch of man pages and help files is clumsy and not as intuitive as using a GUI and seeing various menu options and what-have-you. But then, that’s just a necessary difference between two interfaces. The command line (basically) only has text available, so that is just the way it works.

          But maybe I’m not actually addressing your issue?

          Reply
          1. Walter

            Nugnuts,

            let me give you a practical example.
            I started from CLI, and added openbox. I think a dock would be good, so I search and find several. I pick cairo and awn.
            Aptitude tells me that there are 10 cairo pkgs. Which one do I install? cairo-dock seems a good guess. Do I also need cairo-core? And what about cairo-dock-plug-ins? And cairo-dock-data? This last one seems quite important too.
            With awn things are a bit easier: “only” 5 packages, but the concept is the same. And we’re talking about a relatively tiny utility.
            A ‘sudo aptitude install openoffice’ shows 563 packages (!!!), just to give you a perspective.
            I have no issue with setting things up so that users could cherry-pick their packages, but I think/assume, that the vast majority of users want “just” to install a dock, or Openoffice.
            Why does this need to become a research study before being able to find out what to really install and what not?
            And let’s not go into the dependencies, libraries …
            Perhaps my many years of Windows have rewired my brain and I’m a lost cause 😉 But what’s wrong with having a “cairo-full-install” or an “openoffice-full-install” item (all with the same ‘postfix’), or, if you want, the equivalent of a “setup” command?

            Reply
            1. Nugnuts

              I don’t think there’s anything wrong with such packages, but I’m not entirely sure they do not already exist. For the “‘openoffice-full-install’ item”, for instance, are you asking for something distinct from the openoffice.org metapackage, detailed here: http://packages.ubuntu.com/karmic-updates/openoffice.org ? (That big blurb of text describing the package and recommending other packages is what an ‘aptitude show openoffice.org’ displays for you, right?)

              You should be able to automatically pull in and install all those suggested packages when installing just the openoffice.org package by using the --with-recommends option mentioned above.

              Reply
  2. ajlec2000

    These days, 75% of my enjoyment of computing is learning how it all works. I couldn’t do this without using the terminal. Even when I’m working in a GUI program, there is a terminal emulator open as well.

    Reply
  3. Ferrenrock

    I think both sides are missing the point here. It’s extremely counterintuitive to use only one of these two systems on a computer: do you really expect someone to view online videos, or edit images, using only a command line? Even if you add an image buffer, you are still losing out on the aesthetics of the webpage.

    In contrast, using anything beyond the command line for *nix administrative tasks strikes me as excessive. Whilst many Ubuntu and Mac users may dismiss this as an anachronistic, pedantic method of working with computers, the fact is that the command line gives you a very raw, honest view of exactly what you are executing, sending, viewing, and changing. There are no secrets, and everything that has been done by both you and the computer is put in plain text in front of your eyes, not behind some widget toolkit and a thousand windows and desktop backgrounds.

    Reply
    1. anonymous coward

      I actually do use command line tools to batch-generate/manipulate images. Think reproducible vector graphics for LaTeX/the web/fun. Or maybe POV-Ray or RenderMan stuff…

      And about aesthetics of webpages… aren’t we already past the point where people can force some layout on me when the only thing I’m interested in is the actual information on that very site?

      I agree with one point, though: raw output to stdout/stderr is a good thing. The problem is that sometimes the GUI tries to outsmart such output and “interprets” it in a wrong/insufficient way. Or maybe the CLI and GUI versions aren’t compatible…

      Reply
    1. Peter

      The shell which provides the command line is an incredibly powerful scripting and, dare I say it, even programming tool. Sure, you could replace all those funky little scripts that u*ix-type operating systems seem to rely on with some pre-compiled binaries, but it would all be a whole lot less tweakable and a whole lot more like certain well-known commercial operating systems. The shell also provides a built-in scripting/programming environment on every u*ix-type system that any user can come to grips with, provided they are willing to read the documentation and do some learning.

      Reply
  4. CorkyAgain

    I’m a CLI/TUI fan like you, but I have to disagree with one of the commenters in that forum thread.

    It IS possible to program without ever touching the command line. Many people who use IDEs never do.

    Personally, I abhor IDEs — unless you consider vim an IDE. 😉

    Reply
    1. anonymous coward

      It might be true that IDEs make programming more accessible to more people. Whether that’s a good thing or not, I don’t know. But I’d rather trust a program/executable from someone who is able to write Makefiles him/herself and knows what each command-line parameter of the compiler does. Such people usually RTFM, and that’s what gives me confidence. Or hope, at the very least. 😉

      Reply
  5. Chris Webstar

    I wouldn’t dare call someone who doesn’t use the console a “coder”. Ever. The very idea is nothing short of funny.

    That matter settled, there are uses for everything. I can’t live without the console, but neither would I consider going console-only. It’s a matter of using the right tool for the job at hand. If your jobs don’t include console tasks, then you’re golden not using it, but telling the console to go and die in a corner just because YOU don’t need it is absurd.

    Reply
  6. JakeT

    I tend to agree that this sentiment of ‘when are we going to get away from the command line’ is kind of silly.

    Administering Linux is best done via CLI. Administering Windows is best done through the Control Panel, registry editor and a bewildering array of hidden gui panels. I have no idea how to administer OS X, but the point is that each OS has its BEST way, and on Linux that’s the command line.

    We just need to get better at teaching people how to use it.

    Reply
    1. mulenmar

      And people need to realize that “Linux command line” != (“DOS”, “Ancient”, “Archaic”, “Incomprehensible by definition”)

      Reply
      1. Chris Webstar

        Indeed, especially the first part. DOS is an ancient relic of times past. The Linux console, in whatever shell flavour, is a powerful tool. Big emphasis on “powerful”. On Linux, there is nothing you cannot do on the console.

        Reply
        1. mulenmar

          Actually, FreeDOS is supporting stuff that MS-DOS never did. So it’s not ENTIRELY a relic.

          As for the “nothing you cannot do”, AMEN!! You just have to configure it right.

          That reminds me, I need to look up how to configure Midnight Commander to open graphics files with that framebuffer-based viewer…

          Reply
