
I'm trying to check the write speed of different devices that I have using the following:

dd bs=1M count=256 if=/dev/zero of=/path/to/device oflag=dsync

I want an accurate reading of write speed and I was wondering if I should have any considerable speed difference by using a file that isn't just zeros, or if using /dev/zero is a reasonable way to test write speed.
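For context, this is roughly how I've been comparing the two sources side by side (a sketch; `TARGET` is a placeholder, point it at a file on the device you actually want to test):

```shell
#!/bin/sh
# Sketch: compare write throughput of compressible (zeros) vs
# incompressible (random) data. TARGET is a placeholder path --
# replace it with a file on the device under test.
TARGET=/tmp/dd-test.img

echo "zeros:"
dd if=/dev/zero of="$TARGET" bs=1M count=16 oflag=dsync 2>&1 | tail -n 1

echo "random:"
dd if=/dev/urandom of="$TARGET" bs=1M count=16 oflag=dsync 2>&1 | tail -n 1

rm -f "$TARGET"
```

If the two rates differ a lot, something between dd and the platters (compression, caching) is skewing the zero-based number.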

3 Answers

Here's a test of /dev/zero's throughput on my system:

$ dd if=/dev/zero of=/dev/null bs=1M count=1000000
1000000+0 records in
1000000+0 records out
1048576000000 bytes (1,0 TB) copied, 65,2162 s, 16,1 GB/s

The only bottleneck here is the CPU and memory bandwidth. On my system /dev/zero can generate about 16.1 GB/s of zeros, so it is definitely fast enough for your purpose.

Not always. I was recently testing a disk in exactly the same way, and /dev/zero tricked me into thinking I had the performance I needed, because the external disk was using NTFS disk compression: a stream of zeros compresses extremely well, so the writes looked much faster than they really were. I then tried /dev/urandom to avoid the compression, but that tricked me the other way: generating random data is CPU-bound, so the writes looked slower than the disk actually is. To avoid both traps, write a random file to a tmpfs location first, then copy that file to the destination disk:

dd if=/dev/urandom of=/tmp/temp-random.img bs=1G count=1 iflag=fullblock oflag=dsync
dd if=/tmp/temp-random.img of=/path/to/device/temp-random.img bs=1G count=1 iflag=fullblock oflag=dsync

Note that this assumes /tmp is mounted as tmpfs; if that's not the case, mount a temporary filesystem and use that instead:

sudo mkdir /mnt/tmp
sudo mount -t tmpfs tmpfs /mnt/tmp/
dd if=/dev/urandom of=/mnt/tmp/temp-random.img bs=1G count=1 iflag=fullblock oflag=dsync
dd if=/mnt/tmp/temp-random.img of=/path/to/device/temp-random.img bs=1G count=1 iflag=fullblock oflag=dsync
sudo umount /mnt/tmp/
sudo rmdir /mnt/tmp

Also make sure the test file is larger than the disk's cache (with a 1 GiB block size, set count=N well above the cache size in GiB).
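As a minimal sketch of sizing the test: the cache figure below is an assumption, check the drive's datasheet (or `sudo hdparm -I /dev/sdX` on Linux) for the real number.

```shell
#!/bin/sh
# Sketch: choose count so the test file is much larger than the drive's
# write cache. CACHE_MIB is an assumed value -- look up the real cache
# size for your drive.
CACHE_MIB=64   # assumed on-disk cache size in MiB
BS_MIB=1       # block size used for the test, in MiB

# Write at least 10x the cache size so cached writes cannot dominate
COUNT=$((CACHE_MIB * 10 / BS_MIB))
echo "dd if=/dev/urandom of=/path/to/device/temp-random.img bs=${BS_MIB}M count=${COUNT} oflag=dsync"
```

The 10x factor is a rule of thumb, not a hard requirement; the point is that the cached portion of the write must be a small fraction of the total.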

/dev/zero is a stable source, but not a realistic one. It gives you roughly the best-case rate, while real-world data is rarely that uniform or that compressible. To check how much this matters, run a couple of tests using /dev/zero and /dev/urandom and compare the rates. Another point you must take into account is the bs parameter. If you don't tune this value, the results can again be misleading. The optimal size depends on a number of factors (e.g. the OS, the architecture, the characteristics of the device you're writing to, ...). I've just run this test against my SSD; about ten seconds of writing each:

dd if=/dev/zero of=./output_file
2968447+0 records in
2968447+0 records out
1519844864 bytes (1,5 GB, 1,4 GiB) copied, 8,52739 s, 178 MB/s
dd if=/dev/zero of=./output_file bs=4096
942211+0 records in
942211+0 records out
3859296256 bytes (3,9 GB, 3,6 GiB) copied, 17,3544 s, 222 MB/s

So, 4096 looks like a good block size on my hardware, but I needed to run a long battery of tests to determine it. And is the larger the bs, the better? Not always.
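That battery of tests can be scripted. A minimal sketch, sweeping a few block sizes over the same total amount of data so the runs are comparable (`OUT` is a placeholder; /tmp is used here purely for illustration, point it at the device under test):

```shell
#!/bin/sh
# Sketch: sweep a few block sizes over a fixed total and compare the
# rates dd reports. OUT is a placeholder -- replace with a path on the
# device you want to benchmark.
OUT=/tmp/bs-test.img
TOTAL=4194304   # 4 MiB per run, kept constant so runs are comparable

for bs in 512 4096 65536 1048576; do
    printf 'bs=%-8s ' "$bs"
    dd if=/dev/zero of="$OUT" bs="$bs" count=$((TOTAL / bs)) oflag=dsync 2>&1 | tail -n 1
done
rm -f "$OUT"
```

For a real benchmark you would use a much larger TOTAL (bigger than the disk cache, as noted in the other answer) and a random-data source rather than /dev/zero.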
