Increasing drain rate from the disk queue?
Is there any way to increase the drain rate from the disk queue? For example: I've got several million items in the write queue, and I see it draining at about 25k items a second. From the sar data, utilization of the disk array looks pretty low, around 15%. So why isn't it draining any faster?
Average:   DEV        tps       rd_sec/s  wr_sec/s   avgrq-sz  avgqu-sz  await  svctm  %util
Average:   dev8-0     14.98     0.00      294.75     19.67     0.00      0.29   0.04   0.07
Average:   dev8-16    2795.85   12.23     323854.35  115.84    1.99      0.71   0.05   14.35
Average:   dev253-0   35.91     0.00      287.31     8.00      0.02      0.48   0.01   0.05
Average:   dev253-1   0.00      0.00      0.00       0.00      0.00      0.00   0.00   0.00
Average:   dev253-2   40465.81  12.23     323714.55  8.00      131.67    3.25   0.00   14.39
Average:   dev253-3   0.93      0.00      7.44       8.00      0.00      0.32   0.11   0.01
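For context, sar reports rd_sec/s and wr_sec/s in 512-byte sectors (per the sysstat documentation), so the table above can be converted into MB/s to see the actual write bandwidth behind those ~14% utilization figures. A quick sketch (device names and values taken from the table above):

```python
# Convert sar's wr_sec/s column (512-byte sectors) into MB/s, so the
# write bandwidth behind the ~14% %util figures is visible directly.

SECTOR_BYTES = 512  # sar counts sectors of 512 bytes

def sectors_to_mb_per_sec(wr_sec_per_s):
    """Translate a sar wr_sec/s reading into MB/s."""
    return wr_sec_per_s * SECTOR_BYTES / (1024 * 1024)

# wr_sec/s values from the sar output above
for dev, wr in [("dev8-16", 323854.35), ("dev253-2", 323714.55)]:
    print(f"{dev}: {sectors_to_mb_per_sec(wr):.1f} MB/s")
```

Both devices come out to roughly 158 MB/s of sustained writes, which suggests the array is doing real work even though %util is low.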
On a related note, how do you size RAM to account for the disk write queue when in-memory documents are being updated (not new documents), since updates go onto the write queue first?
Assume each document is 1 KB. If I have 100k updates coming in per second and the drain rate is 25k/s, those 75k updates per second need to be buffered. From the additional memory usage statistics, it does not look like I'm using 75k * 1 KB of additional memory -- it's much less. From my rough calculation, each updated document in the write queue costs about 100 bytes, so obviously the full 1 KB record is not stored in the write queue; perhaps a pointer to the actual data in cache plus some housekeeping structures? What's the right way to calculate this? (The Couchbase sizing guide did not seem to account for disk IOPS, drain rate, etc. when lots of updates are happening.)
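To make the back-of-envelope concrete, here is a minimal sketch of the calculation I'm doing. The ~100 bytes per queued item is my own observation, not a documented figure, and the assumption that the queue holds only a reference (the document body staying in cache) is a guess:

```python
# Rough sizing sketch for write-queue memory growth during an update burst.
# Assumptions (not from Couchbase docs): updated documents remain resident in
# the managed cache, and each write-queue entry adds only a small fixed
# overhead (key/pointer + housekeeping), estimated at ~100 bytes as observed.

DOC_SIZE_BYTES = 1024          # 1 KB documents
INCOMING_PER_SEC = 100_000     # update rate
DRAIN_PER_SEC = 25_000         # observed drain rate
QUEUE_ENTRY_OVERHEAD = 100     # estimated bytes per queued item (observed)

backlog_growth_per_sec = INCOMING_PER_SEC - DRAIN_PER_SEC  # 75k items/s

def queue_memory_mb(seconds, overhead=QUEUE_ENTRY_OVERHEAD):
    """Extra RAM consumed by the write queue after `seconds` of sustained load."""
    items = backlog_growth_per_sec * seconds
    return items * overhead / (1024 * 1024)

# If the queue instead held a full copy of each document, memory would grow
# roughly 10x faster (1024 bytes vs ~100 bytes per item):
full_copy_mb = backlog_growth_per_sec * 60 * DOC_SIZE_BYTES / (1024 * 1024)

print(f"queue overhead after 60 s: {queue_memory_mb(60):.1f} MB")
print(f"full-copy equivalent:      {full_copy_mb:.1f} MB")
```

Under these assumptions, a minute of sustained 100k/s updates costs about 430 MB of queue overhead rather than the ~4.4 GB that full copies would require, which matches why the observed memory growth is much smaller than 75k * 1 KB per second.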