I am looking for a command to create multiple (thousands of) files containing at least 1KB of random data.
For example,
Name      Size
file1.01  2K
file2.02  3K
file3.03  5K
etc.
How can I create many files like this?
Since you don't have any other requirements, something like this should work:
#! /bin/bash
for n in {1..1000}; do
    # $RANDOM is 0..32767, so each file is between 1024 and 33791 bytes
    dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=1 count=$(( RANDOM + 1024 ))
done
(this needs bash at least for {1..1000}).
It needs bash for numerous reasons, including $((…)) and $RANDOM. Even $(…) might not exist in every shell.
These features are not specific to bash, nor did they originate in bash: {1..1000} comes from zsh; for n in ...; do ...; done and variable expansion come from the Bourne shell; $(...), $((...)) and $RANDOM come from ksh. The features that are not POSIX are {1..1000}, $RANDOM and /dev/urandom.
"%04d" in which case bash or zsh can do {0001..1000} with no printf
A variation with seq, xargs, dd and shuf:
seq -w 1 10 | xargs -n1 -I% sh -c 'dd if=/dev/urandom of=file.% bs=$(shuf -i1-10 -n1) count=1024'
Explanation, as requested in the comments:
seq -w 1 10 prints a sequence of numbers from 01 to 10
xargs -n1 -I% runs the command sh -c 'dd ... % ...' once per sequence number, replacing each % with that number
dd if=/dev/urandom of=file.% bs=$(shuf ...) count=1024 creates each file, fed from /dev/urandom, as 1024 blocks with a block size of...
shuf -i1-10 -n1 ...a random value from 1 to 10, so each file ends up between 1024 and 10240 bytes
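A variation on the same pipeline: if you would rather have sizes in whole kibibytes (as in the question's examples), you could keep bs=1024 fixed and randomize count instead:
seq -w 1 10 | xargs -n1 -I% sh -c 'dd if=/dev/urandom of=file.% bs=1024 count=$(shuf -i 2-10 -n1)'
This creates ten files of 2K to 10K each.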
This uses a single pipeline and seems fairly fast, but has the limitation that all of the files are the same size.
dd if=/dev/urandom bs=1024 count=10240 | split -a 4 -b 1k - file.
Explanation: Use dd to create 10240*1024 bytes of data; split that into 10240 separate files of 1k each (names run from 'file.aaaa' upward; the -a 4 suffix space extends through 'file.zzzz', far more than the 10240 files produced)
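To sanity-check the result (assuming GNU coreutils for stat -c):
ls file.* | wc -l     # expect 10240
stat -c %s file.aaaa  # expect 1024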
This will create 15 files, each containing 1MB of random data:
for i in {001..015}; do head -c 1M </dev/urandom >randfile$i; done
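If you want varying sizes with the same one-liner, $RANDOM can feed head -c; a sketch, bash-only, giving sizes from 1K to 1024K in 1K steps:
for i in {001..015}; do head -c "$(( (RANDOM % 1024 + 1) * 1024 ))" </dev/urandom >randfile$i; done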
You can do something like this:
#!/bin/bash
filecount=0
while [ $filecount -lt 10000 ] ; do
    filesize=$RANDOM               # 0..32767
    filesize=$(($filesize+1024))   # ensure every file is at least 1KB
    # note: base64 output is printable text, not raw random bytes
    base64 /dev/urandom |
        head -c "$filesize" > /tmp/file${filecount}.$RANDOM
    ((filecount++))
done
Similar to lcd047's answer (it works with bash, even Git Bash on Windows), with the addition that the names are also randomized and the files are 1MB in size (which might be more efficient disk-wise):
#! /bin/bash
s=$(hostname).$(pwd).${RANDOM}   # per-run salt for the hashed names
for n in {1..1000}; do
    x=$( printf %04d "$n" )
    # hash the counter, a timestamp and the salt into a random-looking name
    dd if=/dev/urandom of=$(echo "${x}.$(date +%s).${s}" | sha256sum -z | awk '{print $1}')$( printf %03d "$n" ).bin bs=1 count=1048576
done
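One practical tweak if speed matters: bs=1 count=1048576 makes dd issue a read and a write per byte, which is very slow. Writing the same megabyte as a single block is much faster (a sketch, assuming GNU dd's M suffix; "$name" is a hypothetical placeholder for the generated file name above):
dd if=/dev/urandom of="$name.bin" bs=1M count=1   # same 1MB, one block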