
I have a server running CentOS 7.6 with four SSDs in RAID-0, mounted at /scratch/.

I have the Linux program stress-1.0.4-16, and I just learned that stress-ng exists.

Is there a way to tell stress to perform I/O operations against a specific set of disks, such as my 4-disk RAID-0? Or does it only work on the root file system, e.g. /tmp? If the latter, that's a problem: I've run systemctl enable tmp.mount, so /tmp is now in memory rather than on disk, which would negate stress's disk I/O function.

2 Answers


You just need to cd to a directory on the disk you want to stress. The -v option confirms that stress creates its scratch files in the current directory.

# stress -v -d 1 
stress: info: [29736] dispatching hogs: 0 cpu, 0 io, 0 vm, 1 hdd
stress: dbug: [29736] using backoff sleep of 3000us
stress: dbug: [29736] --> hoghdd worker 1 [29737] forked
stress: dbug: [29737] seeding 1048575 byte buffer with random data
stress: dbug: [29737] opened ./stress.yPWMGk for writing 1073741824 bytes
stress: dbug: [29737] unlinking ./stress.yPWMGk
stress: dbug: [29737] fast writing to ./stress.yPWMGk

I also suggest using fio instead if you need a more accurate tool.
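Since you mentioned stress-ng: it can target a path directly, so no cd is needed. A sketch, assuming the stress-ng build on your system supports the --temp-path option (older packages may not; check stress-ng --help first):

```shell
# Run 4 hdd stressor workers, writing 1 GiB each, for 10 minutes.
# --temp-path points the workers at the RAID-0 mount from the question.
stress-ng --hdd 4 --hdd-bytes 1G --temp-path /scratch --timeout 10m --metrics-brief
```

With four workers on a 4-disk stripe you get concurrent writers, which exercises the array harder than a single stress hdd hog.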


Stress test using fio:

RedHat/CentOS:

sudo yum -y install fio

Debian/Ubuntu:

sudo apt install fio

A 10-minute benchmark:

$ fio --name=test --ioengine=sync --rw=randwrite --bs=4k --numjobs=1 --size=1G --runtime=10m --time_based

test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=sync, iodepth=1
fio-3.12
Starting 1 process
test: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [w(1)][100.0%][eta 00m:00s]                           
test: (groupid=0, jobs=1): err= 0: pid=924807: Wed Aug  6 11:04:48 2025
  write: IOPS=44.3k, BW=173MiB/s (181MB/s)(102GiB/603625msec); 0 zone resets
    clat (nsec): min=1742, max=1431.1k, avg=3418.70, stdev=1889.93
     lat (nsec): min=1815, max=1431.2k, avg=3530.50, stdev=1937.75
    clat percentiles (nsec):
     |  1.00th=[ 2192],  5.00th=[ 2768], 10.00th=[ 2832], 20.00th=[ 2896],
     | 30.00th=[ 2960], 40.00th=[ 2992], 50.00th=[ 3024], 60.00th=[ 3088],
     | 70.00th=[ 3152], 80.00th=[ 3344], 90.00th=[ 4512], 95.00th=[ 5536],
     | 99.00th=[ 9408], 99.50th=[10048], 99.90th=[13376], 99.95th=[14400],
     | 99.99th=[19072]
   bw (  KiB/s): min=13536, max=1168848, per=100.00%, avg=623553.31, stdev=302011.17, samples=343
   iops        : min= 3384, max=292212, avg=155888.34, stdev=75502.79, samples=343
  lat (usec)   : 2=0.10%, 4=83.58%, 10=15.79%, 20=0.52%, 50=0.01%
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%
  cpu          : usr=5.27%, sys=21.47%, ctx=39684, majf=0, minf=203
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,26738689,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=102GiB (110GB), run=603625-603625msec

Disk stats (read/write):
    md2: ios=0/527583, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/533334, aggrmerge=0/5732, aggrticks=0/2386555, aggrin_queue=2356744, aggrutil=69.81%
  sdb: ios=0/533561, merge=0/5796, ticks=0/1995883, in_queue=1971388, util=53.01%
  sda: ios=0/533108, merge=0/5669, ticks=0/2777227, in_queue=2742100, util=69.81%

In this case fio writes random data into the file test.0.0 in the current working directory. The filesystem here uses two SSDs in RAID-1 (md2); ideally both drives should show the same performance characteristics.
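To point fio at the RAID-0 array from the question without changing directories, you can use its --directory option. A sketch; the job name and sizing below are illustrative, not tuned:

```shell
# Random-write benchmark against /scratch. --directory places the test
# files on the target filesystem; numjobs=4 spreads writers across the
# 4-disk stripe; --direct=1 bypasses the page cache for honest numbers.
fio --name=scratch-test --directory=/scratch \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=32 --numjobs=4 --size=1G \
    --runtime=10m --time_based --group_reporting
```

The per-device lines in the "Disk stats" section of the output will then show whether all four members of the stripe are being hit evenly.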
