
iotune qemu/KVM total_iops total_bytes limits

I have created some qemu/KVM virtual machines on SSDs.

The host has six SSDs: one is used for the OS, and each of the other SSDs holds two VM guests.

I'm using libvirt's iotune settings to limit the I/O each guest can generate on these drives:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' io='threads'/>
  <source file='/var/lib/libvirt/images/sdd/pz/heavy/virtual-machine-1/os.img'/>
  <target dev='vda' bus='virtio'/>
  <iotune>
    <total_iops_sec>3000</total_iops_sec>
    <total_bytes_sec>125829120</total_bytes_sec> <!-- 120 MB/s -->
  </iotune>
</disk>
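For reference, the same limits can be inspected or changed at runtime with virsh, without editing the XML and restarting the guest. A sketch (the domain name here is illustrative; the values match the XML above):

```shell
# Show the current iotune settings for the vda disk of a domain
virsh blkdeviotune virtual-machine-1 vda

# Apply the same caps at runtime
virsh blkdeviotune virtual-machine-1 vda \
    --total-iops-sec 3000 \
    --total-bytes-sec 125829120
```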

When I SSH into one of the VMs and run fio, these limits appear to work: I can't get beyond 120 MB/s of throughput or 3000 IOPS, regardless of which fio options I use (--iodepth, --bs, --rwmixread, etc.).
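Roughly the sort of fio invocation I've been testing with, run inside a guest (the test file path, size, and mix are illustrative, not a fixed benchmark):

```shell
# Mixed random read/write load against the throttled virtio disk;
# --direct=1 bypasses the guest page cache so the limits are actually hit
fio --name=iotune-test \
    --filename=/tmp/fio.test --size=1G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 \
    --runtime=60 --time_based
```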

However, under load, when I run iostat -xm 2 on the host, I occasionally see the writes per second (w/s) or tps jump above these limits.

With two VMs per drive and these limits enforced, each drive should peak at 6000 IOPS or 240 MB/s of combined reads and writes. The following output shows this isn't the case:

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdd            6515.00         0.00        23.10          0         46


Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdd               0.00     4.50    0.00 7742.50     0.00    58.38    15.44     1.47    0.21    0.00    0.21   0.08  63.10
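To make the expected ceiling explicit, a quick arithmetic check of what two throttled guests on one drive should add up to:

```shell
# Per-disk ceiling with two guests, each capped at 3000 IOPS / 125829120 bytes/s
echo $((2 * 3000))                 # 6000 IOPS
echo $((2 * 125829120))            # 251658240 bytes/s, i.e. 240 MB/s
```

The 7742.5 w/s above is well over that 6000 IOPS ceiling, even though the MB/s figure stays under it.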

This isn't a one-off, either. I have four servers running identical virtual machine setups, and I'm seeing this happen fairly frequently across all of the SSDs on all of the servers:

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.00    0.00 9588.50     0.00    33.59     7.17     1.47    0.16    0.00    0.16   0.06  52.85
sdd               0.00     0.00    0.00 8528.00     0.00    66.15    15.89     1.36    0.16    0.00    0.16   0.07  59.30

Is this a failing of my configuration, of the qemu/KVM implementation, or of iostat's interpretation of these disks' performance?
