What machine setup do I need to achieve 100K updates/s?

I was testing on a 2-node cluster; each node has dual E5-2630 v3 CPUs, 96 GB RAM, and 240 GB SSDs in RAID 10, running CentOS 6.5.
The client machine is almost the same, except for the OS (Windows Server 2012 R2).
I'm using the .NET client SDK version 2.5.4.
The max throughput is 40K updates/s with upsert, with CPU usage peaking at 15-20%. The network is 1 GbE.
The bucket is configured with a password and 1 replica.
I tried both Parallel and UpsertAsync over a massive List; it turns out Parallel gets me the most throughput (a rough sketch of the loop follows the class below).
The class used for the test:

public class Transaction
{
    /// <summary>
    /// Transaction ID
    /// </summary>
    public string id { get; set; } = string.Empty;
    /// <summary>
    /// price
    /// </summary>
    public double p { get; set; } = 0;
    /// <summary>
    /// volume
    /// </summary>
    public long v { get; set; } = 0;
    /// <summary>
    /// buy side user
    /// </summary>
    public string bu { get; set; } = string.Empty;
    /// <summary>
    /// sell side user
    /// </summary>
    public string su { get; set; } = string.Empty;
    /// <summary>
    /// buy side order id (ticket number)
    /// </summary>
    public string bt { get; set; } = string.Empty;
    /// <summary>
    /// sell side order id (ticket number)
    /// </summary>
    public string st { get; set; } = string.Empty;
    public DateTime dt { get; set; } = DateTime.Now;
}
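
Roughly how I schedule the work with Parallel (a simplified sketch; the key format is illustrative, bucket is the opened IBucket, and transactions is the List mentioned above):

// Parallel lives in System.Threading.Tasks
Parallel.ForEach(transactions, t =>
{
    // One synchronous upsert per item; Parallel.ForEach fans the
    // calls out across thread-pool threads.
    var result = bucket.Upsert($"Transaction_{t.id}", t);
    if (!result.Success)
    {
        // log/retry the failed operation here
    }
});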

Am I doing something wrong?
Please advise.

How large are your documents? At 1 GbE and 100K updates/s, line speed allows about 1,250 bytes per update, minus the various protocol overheads.
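
Spelling out that arithmetic:

    1 Gbit/s ÷ 8 = 125,000,000 bytes/s
    125,000,000 bytes/s ÷ 100,000 updates/s = 1,250 bytes/update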

How many client nodes are you using, and what batch size? Generally speaking, the bottleneck becomes making sure you have sufficient operations in flight for the given network latency (the bandwidth-delay product).
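
One way to keep a steady number of operations in flight is to gate the async upserts with a semaphore. This is only a sketch: bucket and items stand in for the opened IBucket and your own list, and 256 is an arbitrary starting point to tune, not an SDK setting:

var inFlight = new SemaphoreSlim(256);  // cap on concurrent operations
var tasks = items.Select(async item =>
{
    await inFlight.WaitAsync();
    try
    {
        // UpsertAsync completes once the server acknowledges the write
        await bucket.UpsertAsync(item.id, item);
    }
    finally
    {
        inFlight.Release();  // free a slot for the next operation
    }
});
await Task.WhenAll(tasks);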

As an aside, your example code seems quite mangled, so I couldn't really follow it. Could you try reformatting? Adding a 4-space indent will render it as code:

    My code here;

Reformatted, and thanks.

It's a clean start (I flush the bucket every time to redo the test).
The document size is that of the Transaction class (it should be below 300 bytes).

Could you please share an example of how you are scheduling the work with the SDK, along with your SDK config?

Async is usually more performant than Parallel due to how work is scheduled on the thread pool. For example, a simple way to schedule async work could look like this:

// Build one upsert Task per key; Select is lazy, so the operations
// are issued as Task.WhenAll enumerates the sequence.
var tasks = Enumerable.Range(1, 10000)
    .Select(x => bucket.UpsertAsync($"key-{x}", new {}));
await Task.WhenAll(tasks);

PS: we also have a load-test tool built on top of the .NET SDK (MeepMeep) that might help exercise the client.

I couldn't get MeepMeep to compile on Windows…
but thanks anyway.

Finally, I've edited the project and compiled it with VS2015. I changed two things in MeepMeep:

  1. Replaced the key with Transaction_{Guid}.
  2. Replaced the document content with my Transaction class.

This is the result:

[Completed workload: ExecuteStore (Add) of 1000000 JSON doc(s) with doc size: 112.]
[Workload size: 1000000]
[Total docsize: 112000000]
[Total operations: 1000000]
[Failed operations: 0]
[Total time (ms): 34540.0996]
[Avg operations per/second: 28952]

This is the result I get from a dual-core E5-2690 v3 (bandwidth usage is around 60 Mbps):
[Completed workload: ExecuteStore (Add) of 1000000 JSON doc(s) with doc size: 112.]
[Workload size: 1000000]
[Total docsize: 112000000]
[Total operations: 1000000]
[Failed operations: 0]
[Total time (ms): 29503.0853]
[Avg operations per/second: 33895]

I also found that if I use LINQ with UpsertAsync, CPU usage on my dual-core E5-2690 v3 shoots up to 80% (averaging 40-80%), so I guess I will stick with the connection-pool and Parallel options, since the CPU stays stable within 15-20%.
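
For reference, the connection-pool settings I'm tweaking look like this (a sketch against the 2.x SDK; the node address, bucket name, password, and pool sizes are placeholders, not recommendations):

// requires the Couchbase and Couchbase.Configuration.Client namespaces
var config = new ClientConfiguration
{
    Servers = new List<Uri> { new Uri("http://node1:8091") },
    PoolConfiguration = new PoolConfiguration
    {
        MinSize = 5,   // connections opened up front per node
        MaxSize = 10   // upper bound on connections per node
    }
};
var cluster = new Cluster(config);
var bucket = cluster.OpenBucket("default", "password");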

I think the only advantage of the MUX (multiplexing) async method is bandwidth; I found it uses less bandwidth.

Edit: with the E5-2690 v3 I got my max TPS to around 70-80K using the connection pool and Parallel; bandwidth usage is around 200-233 Mbps.