M BUZZ CRAZE NEWS

Creating a large file in less time

By John Parsons

I want to create a large file (~10 GB) filled with zeros and random values. I have tried using:

dd if=/dev/urandom of=10Gfile bs=5G count=10

It creates a file of about 2 GB and exits with exit status 0. I fail to understand why.

I also tried creating file using:

head -c 10G </dev/urandom >myfile

It takes about 28-30 minutes to create. But I want it created faster. Does anyone have a solution?

Also, I wish to create multiple files with the same (pseudo-)random pattern for comparison. Does anyone know a way to do that?


5 Answers

How about using fallocate? This tool preallocates space for a file (if the filesystem supports the feature). For example, to allocate 5 GB to a file called 'example':

fallocate -l 5G example

This is much faster than filling a file with dd, because the filesystem simply reserves the blocks instead of writing data into them.
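As a quick sanity check, here is a hedged sketch using a smaller 100M size so it is cheap to try (`demo.img` is a placeholder name):

```shell
# Preallocate 100 MiB; requires a filesystem with fallocate support
# (e.g. ext4, xfs) -- on filesystems without it the command fails.
fallocate -l 100M demo.img

# The file immediately reports its full apparent size.
stat -c '%s' demo.img   # 104857600 bytes (100 * 1024 * 1024)
```

Note that fallocate's `M`/`G` suffixes are binary (MiB/GiB), so `-l 5G` means 5 GiB, not 5 * 10^9 bytes.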


You can use dd to create a file consisting solely of zeros. Example:

dd if=/dev/zero of=zeros.img count=1 bs=1 seek=$((10 * 1024 * 1024 * 1024 - 1))

This is very fast because only one byte is actually written to the physical disk; everything before it is a hole, making the file sparse. However, some file systems do not support sparse files.
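A smaller 1 GiB version of the same trick (`sparse.img` is an illustrative name) shows the sparseness directly: the apparent size and the allocated size differ wildly.

```shell
# Seek to just before the 1 GiB mark, then write a single byte; the
# region before it is a hole that occupies no disk blocks (on
# filesystems that support sparse files, e.g. ext4/xfs).
dd if=/dev/zero of=sparse.img count=1 bs=1 seek=$((1024 * 1024 * 1024 - 1))

ls -l sparse.img   # apparent size: 1073741824 bytes
du -k sparse.img   # allocated size: a few KiB at most
```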

If you want to create a file containing pseudo-random contents, run:

dd if=/dev/urandom of=random.img count=1024 bs=10M

I suggest 10M as the buffer size (bs): it is large enough for dd to work efficiently without requesting an oversized single buffer. It should be pretty fast, but speed still depends on your disk throughput and CPU, since /dev/urandom is CPU-bound.
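None of the dd recipes address the last part of the question (several files with the same pseudo-random pattern), because /dev/urandom cannot be seeded. One common workaround, sketched here with an assumed passphrase `myseed` and a 1M size as placeholders, is to generate a repeatable stream by encrypting zeros with a stream cipher:

```shell
# AES-256-CTR over /dev/zero yields a deterministic pseudo-random
# stream for a given passphrase. Requires OpenSSL 1.1.1+ for -pbkdf2;
# stderr is silenced because head closes the pipe early.
openssl enc -aes-256-ctr -pass pass:myseed -nosalt -pbkdf2 \
    < /dev/zero 2>/dev/null | head -c 1M > pattern1.bin
openssl enc -aes-256-ctr -pass pass:myseed -nosalt -pbkdf2 \
    < /dev/zero 2>/dev/null | head -c 1M > pattern2.bin

cmp pattern1.bin pattern2.bin && echo "files are identical"
```

As a bonus, this is usually much faster than /dev/urandom for bulk data, since AES is hardware-accelerated on most modern CPUs.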

Using dd, this should create a 10 GB file filled with random data:

dd if=/dev/urandom of=test1 bs=1M count=10240

count is the number of blocks; with bs=1M, 10240 blocks give 10 GiB.

Source: stackoverflow - How to create a file with a given size in Linux?


Answering the first part of your question:

Trying to read or write a 5 GB buffer in one call is not a good idea: on Linux, a single read() or write() transfers at most about 2 GB, so dd gets short reads, and without iflag=fullblock it counts each short read as a full block, leaving the file far smaller than bs × count. A huge block size gives no performance benefit in any case; around 1M is a sensible maximum.
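Concretely, the cap is a kernel fact documented in the read(2) man page, not something dd controls:

```shell
# Maximum byte count the Linux kernel will service in one read()/write()
# call, per read(2); anything larger becomes a short transfer.
MAX_RW_COUNT=$((0x7ffff000))
echo "$MAX_RW_COUNT"   # 2147479552 bytes, i.e. just under 2 GiB
```

That is why a bs=5G request degrades to roughly 2 GB transfers; passing iflag=fullblock makes dd retry short reads instead of counting them as whole blocks.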

This question was asked 5 years ago; I just stumbled across it and wanted to add my findings.

If you simply use

dd if=/dev/urandom of=random.img count=1024 bs=10M

it will already be significantly faster, as explained by xiaodongjie. But you can make it even faster by using eatmydata:

eatmydata dd if=/dev/urandom of=random.img count=1024 bs=10M

What eatmydata does is disable fsync() and related sync calls (via an LD_PRELOAD library), making disk writes faster at the cost of durability if the machine crashes mid-write.

You can read more about it in the eatmydata project documentation.

