Thinking things through: dd over USB1.1

I’m still learning about dd. It took me a while to discover it, but now I use it on a daily basis (well, maybe not quite 🙄) to clone systems, back up entire drives, take visual snapshots of the data on a disk, or scramble the contents of a floppy or USB drive.

A couple of days ago I got a 60GB hand-me-down hard drive that I didn’t want to look at, so I arbitrarily plugged it into an always-on machine and put it to work with time dd if=/dev/urandom of=/dev/sda.

And as promised, it dutifully started dumping random information into the drive, churning away for 10 minutes, 20 minutes, an hour. …

By this point I was starting to itch to use the drive. But it was time to take care of some real-life issues, so I let it run while I left the house.

When I came home at the end of the day I expected to see a report, but it was still running. I didn’t have time to fiddle with it, so again, I let it run, this time overnight.

In the morning, it was still going. And at the end of the day yesterday, it was still going. …

And this morning I decided to arbitrarily cut things short. I suspect that over USB1.1 I probably should have employed different flags or block sizes, to avoid a two-day session writing out random nonsense to a hard drive.
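For example, something like this might have helped, though I haven’t timed it; dd defaults to 512-byte blocks, so a bigger block size (the 4M here is only a guess on my part, not a tested value) would at least mean fewer, larger transfers over that slow bus:

time dd if=/dev/urandom of=/dev/sda bs=4M   # bs=4M is a guess, not a tuned value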

But I definitely blame myself for this little inconvenience — after all, I should have thought about what was happening: the transfer of 60GB of information over USB1.1.

I’m no stranger to slow speeds over USB1.1, and so the little alarm bells should have gone off after only a few minutes. Even USB2.0 would take a while, for something of that size.

In any case, I have learned my lesson. If you want to blank a drive with dd, for goodness sake don’t do it over a slow USB connection. 😯

16 thoughts on “Thinking things through: dd over USB1.1”

  1. totalizator

    Agreed. Yesterday I made a partition backup of my AMD K6-3 533 Vaio laptop via USB 1.1, and despite there being only ~1.5 GB to write after compression (FSArchiver), it took a few hours to complete.

  2. Reacocard

    FWIW, if you do a “kill -USR1 <pid>” on the dd process, it’ll print out statistics while it’s still running, so you can get an idea of how far along it is and how fast it’s running. 🙂

    USB 1.1 is horrifically slow (12 Mbit/s iirc), but that wouldn’t be slow enough to account for it taking as long as you describe. 60GB at 12 Mbit/s would be only 11.4 hours. Maybe it’s not USB 1.1 but 1.0 and thus 6x slower? Or maybe urandom is just not fast enough to keep up with 12 Mbit/s. :/
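    For instance (assuming only one dd process is running):

    kill -USR1 $(pgrep -x dd)   # pgrep -x matches the process name exactly

    On Linux, dd catches the signal, prints its records in/out and throughput so far to stderr, and keeps going.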

    1. K.Mandla Post author

      Yes, I should have checked its progress before cutting it off, but I wasn’t in a hurry and the machine runs 24/7, so I let it go until I just felt finished with it.

      Both lsusb and dmesg suggest it’s a 12M port, so I suppose it could be just a slow interaction between /dev/urandom and the drive itself. I did notice that it tended to write in bursts, pausing and (I assume) caching the information to be written, then sending pulses to the drive all at once.

      htop marked the process as “uninterruptible” during those write phases. I wonder if that cache-and-pulse writing had anything to do with the delay? 😐
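      For anyone curious, this is roughly what I looked at (12M in lsusb -t means full-speed USB 1.1):

      lsusb -t            # shows the negotiated speed per device
      dmesg | grep -i usb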

    1. K.Mandla Post author

      Where do people buy these mystical magnets that everyone always suggests for killing a drive? And for goodness sake, what do they keep them in that doesn’t cause them to erase everything in the entire house? 😆

    1. mulenmar

      +1 to this. Here’s the command I use:

      # nice -n -15 dd if=/dev/zero of=/dev/sdX

      sdX, of course, being the drive I’m blanking out.

      1. Zach

        If you zero a drive, an image of that drive can be compressed a bit more than an image of a disk filled with random data. IIRC
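        A quick way to see the difference (the sizes here are arbitrary):

        dd if=/dev/zero bs=1M count=100 | gzip | wc -c     # shrinks to almost nothing
        dd if=/dev/urandom bs=1M count=100 | gzip | wc -c  # barely compresses at all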

  3. Mister Shiney

    I had to laugh when I read this, because for the past several days I’ve been trying to move a bunch of data from an old HDD I had in an external enclosure over to a new external HDD. No matter what I did, the enclosure was recognized at the 12Mbps speed (USB 1.1). It took over 30 hours to move the data. 🙂

    I spent a lot of time reviewing oodles of bug reports about USB issues in Ubuntu and even went as far as upgrading to Lucid (I usually like to lag a few versions since stability is more important to me than being on the bleeding edge). I even removed the drive from the enclosure and tried it in several others — generally with the result that the drive was recognized as high speed, reset by the system, and disconnected.

    It was not until later that I realized two things: jumper settings are very important in USB enclosures, and the first enclosure was more forgiving of jumper settings — also, one of the pins on the drive was broken off into the ribbon connector on the first enclosure. That last explained why the drive only worked in that one enclosure. Anyway, live and learn — after 30+ hrs I copied the data off the drive and scuttled it.

  4. Wavefunction

    There were a few posts on linux.com and other locations where people checked the throughput of /dev/zero, /dev/urandom, and various options while running dd. (dd is pretty much -the- secure deletion tool. If you want it shredded, use dd.) In short, /dev/urandom generated data at about 1/10th the speed of /dev/zero, even when accessing internal drives. If I recall correctly, the fastest /dev/urandom ever managed was something like 7.5 MB/s.
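    It’s easy to test without touching a real disk (the counts here are arbitrary):

    dd if=/dev/zero of=/dev/null bs=1M count=1000   # each run prints a MB/s figure
    dd if=/dev/urandom of=/dev/null bs=1M count=100

    Comparing the two rates shows how much the random source, rather than the drive, can be the bottleneck.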

    And, as anecdotal evidence, I ran /dev/urandom against a 1TB (1000GB) internal hard drive and it took about 38 hours to complete on the fastest machine in the house. (8GB RAM, 2.8GHz six-core processor, etc.)

