I wanted to test that mischievous hard drive, to avoid problems when I move stuff onto it.
The plan was to fill it with dummy files, so if there was an issue at the 40GB point, I would know up front. Which would have been better than finding out a week later.
To that end, the weak-sauce tip for the day is this one: creating a series of 2GB files made of random gibberish, just to take up space.
for i in {1..20} ; do time dd if=/dev/urandom of=test-$i.file bs=268435456 count=8 ; done
The results, after a considerable amount of time (/dev/zero is much faster), will be twenty files, each 2GB in size, filled with gunk. Good gunk, that is.
A little tip there: The block size multiplied by the count gives you the size of the file. So what?
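Spelled out, the arithmetic for the command above looks like this (pure shell math, no disk involved):

```shell
# block size * count = total file size
bs=268435456   # 256 MiB per block
count=8
echo $(( bs * count ))   # 2147483648 bytes, i.e. 2 GiB
```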
So simply setting the block size to one gigabyte (or gibibyte, since I seem to be drawing flak on the issue these days 🙂 ) makes dd buffer that entire block in memory, which can cause out-of-memory errors on a machine with 512MB of RAM or less. It did for me.
The size of the file in my case wasn't really important, I guess, but I did get a quick primer on performing this stunt on low-memory machines: reduce the block size, magnify the count, get the same results.
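A sketch of that trade-off, scaled down to 16 MiB so it finishes quickly (the file names are just for illustration):

```shell
# Same 16 MiB of random data, two ways:
# one big 16 MiB block (dd buffers all 16 MiB in memory at once)...
dd if=/dev/urandom of=big-bs.file bs=16777216 count=1 2>/dev/null
# ...or sixteen 1 MiB blocks (dd only ever buffers 1 MiB at a time)
dd if=/dev/urandom of=small-bs.file bs=1048576 count=16 2>/dev/null
wc -c big-bs.file small-bs.file   # both come out to 16777216 bytes
```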
Oh, and the hard drive? It’s fine. I filled it all the way to the brim, and Arch didn’t complain. Good to know.
Is there any particular reason you use random gibberish? I wanted to fill a disk the other day and simply used dd to create multiple files from /dev/zero.
No real reason, I guess. At the time my fear was that only zeroes wouldn’t trigger an error, if there was some sort of limit to the drive. Now that seems silly though. 😳
Doesn’t filling files with zeros make them sparse? It would be pointless to test a disk with sparse files.
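For what it's worth, a plain dd from /dev/zero does write real blocks; you only get a sparse file if you ask for one (with conv=sparse, or by seeking past the end without writing). A quick way to see the difference, assuming GNU dd and du:

```shell
# dense.file: 4 MiB of actual zero blocks written to disk
dd if=/dev/zero of=dense.file bs=1048576 count=4 2>/dev/null
# sparse.file: same apparent size, but no data blocks written at all
dd if=/dev/zero of=sparse.file bs=1048576 count=0 seek=4 2>/dev/null
ls -l dense.file sparse.file   # both report 4194304 bytes
du -k dense.file sparse.file   # but sparse.file occupies ~0 on disk
```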
It would probably be more appropriate to use a smaller block size and a larger count, since all the larger block size achieves is wasting huge amounts of RAM during the process.