Hi all,
I am querying Couchbase over the memcached protocol via a moxi proxy.
My environment is 24 CPU cores, 196 GB of memory, and NVMe SSD disks.
During compaction, iostat -dx 1 reports the following:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme1n1 0.00 0.00 216.00 28496.00 3236.00 113984.00 8.17 302.07 1.75 0.85 1.76 0.01 21.20
sda 0.00 6.00 0.00 37.00 0.00 172.00 9.30 0.06 1.51 0.00 1.51 0.05 0.20
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme1n1 0.00 0.00 11.00 47315.00 48.00 189260.00 8.00 17794.98 284.04 69.00 284.09 0.02 99.90
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme1n1 0.00 0.00 4.00 63488.00 32.00 253952.00 8.00 12103.99 246.37 225.50 246.37 0.02 100.00
sda 0.00 0.00 0.00 3.00 0.00 16.00 10.67 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme1n1 0.00 0.00 7.00 98304.00 32.00 393216.00 8.00 13535.54 128.20 143.43 128.20 0.01 100.00
sda 0.00 12.00 1.00 8.00 8.00 84.00 20.44 0.07 2.89 20.00 0.75 7.78 7.00
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme1n1 0.00 0.00 1546.00 35234.00 82552.00 140936.00 12.15 4589.60 179.12 0.53 186.95 0.02 76.90
sda 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.01 52.00 0.00 52.00 8.00 0.80
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
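To put the worst snapshot in concrete terms, here is a quick back-of-the-envelope calculation using the nvme1n1 figures from the 100%-util sample above:

```python
# Figures copied from the worst iostat snapshot above (device nvme1n1).
w_per_s = 98304        # w/s:   write requests per second
wkb_per_s = 393216.0   # wkB/s: kilobytes written per second

write_mib_per_s = wkb_per_s / 1024    # KiB/s -> MiB/s
avg_write_kib = wkb_per_s / w_per_s   # average size of each write request

print(f"compaction writes: {write_mib_per_s:.0f} MiB/s, avg request {avg_write_kib:.0f} KiB")
# compaction writes: 384 MiB/s, avg request 4 KiB
```

So compaction is pushing roughly 384 MiB/s of 4 KiB writes, which saturates the device and starves foreground reads (r_await climbs to hundreds of milliseconds in the same samples).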
Because IO utilization hits 100%, some read and write requests take a long time to complete, up to 3 seconds, which is unacceptable to our users.
I want to know how to limit IO usage during compaction.
Any help?
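For reference, one workaround I am considering while waiting for a better answer is throttling the Couchbase data-service process at the block layer with the cgroup v1 blkio controller. This is only a sketch: it assumes cgroup v1 is mounted at /sys/fs/cgroup/blkio, that nvme1n1 has device number 259:1 (check with ls -l /dev/nvme1n1), and it caps all writes from the process, not just compaction, so front-end flush throughput is limited too:

```shell
# Create a cgroup for the Couchbase data service (requires root).
mkdir -p /sys/fs/cgroup/blkio/couchbase

# Cap writes to nvme1n1 (assumed major:minor 259:1) at 200 MB/s.
# The value is in bytes per second: 200 * 1024 * 1024 = 209715200.
echo "259:1 209715200" > /sys/fs/cgroup/blkio/couchbase/blkio.throttle.write_bps_device

# Move the memcached process (PID is an example placeholder) into the cgroup;
# child threads inherit the limit.
echo "$(pgrep -o memcached)" > /sys/fs/cgroup/blkio/couchbase/cgroup.procs
```

This trades longer compaction time for bounded write bandwidth; I would still prefer a Couchbase-level setting (e.g. restricting auto-compaction to an off-peak time window) if one exists that limits the compaction IO rate directly.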