To understand some of moxi's configurable command-line flags (concurrency, downstream_conn_max, downstream_timeout, wait_queue_timeout, etc.), it can be helpful to follow a request through moxi...
The normal flow of data for moxi is as follows:
A client connects
A client creates a connection (an upstream conn) to moxi. moxi's -c command-line parameter controls the maximum number of connections.
In this -c parameter, moxi inherits the same behavior as memcached, and will stop accept()'ing client connections until existing connections are closed. When the count of existing connections drops below the -c defined level, moxi will accept() more client connections.
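The accept-gating policy described above can be modeled as a simple counter check. This is an illustrative Python sketch of the policy, not moxi's actual C implementation, and the class and method names are invented for the example:

```python
class ConnectionGate:
    """Model of memcached/moxi-style -c gating: stop accepting new
    client connections while the open-connection count is at the limit."""
    def __init__(self, max_conns):
        self.max_conns = max_conns
        self.open_conns = 0

    def can_accept(self):
        return self.open_conns < self.max_conns

    def on_accept(self):
        assert self.can_accept()
        self.open_conns += 1

    def on_close(self):
        self.open_conns -= 1

gate = ConnectionGate(max_conns=2)
gate.on_accept()
gate.on_accept()
blocked = gate.can_accept()   # at the limit: accept() is paused
gate.on_close()
resumed = gate.can_accept()   # below the limit again: accepting resumes
```

The key point is that moxi never drops connections at the limit; it simply stops calling accept() until the count falls back under the -c value.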
The client makes a request and the wait queue
Next, the client makes a request, such as a simple single-key command (like set, add, append, or a single-key get).
At this point, moxi places the upstream conn onto the tail of a
wait queue. moxi's wait_queue_timeout parameter
controls how long an upstream conn should stay on the wait queue
before moxi times it out and responds to the client with a
SERVER_ERROR response.
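The wait-queue timeout behavior can be sketched as popping expired entries from the head of the queue. This is an illustrative Python model (the function and entry names are invented; moxi's internal data structures differ):

```python
from collections import deque

def expire_wait_queue(queue, now_ms, wait_queue_timeout_ms):
    """Pop upstream conns from the head of the wait queue that have
    waited wait_queue_timeout_ms or longer; moxi would answer each of
    these clients with a timeout error.  Entries are illustrative
    (conn_name, enqueued_at_ms) pairs."""
    timed_out = []
    while queue and now_ms - queue[0][1] >= wait_queue_timeout_ms:
        timed_out.append(queue.popleft()[0])
    return timed_out

# conn-a enqueued at t=0ms, conn-b at t=150ms.
q = deque([("conn-a", 0), ("conn-b", 150)])
expired = expire_wait_queue(q, now_ms=200, wait_queue_timeout_ms=100)
# conn-a has waited 200ms and times out; conn-b has waited only 50ms.
```

Because conns are appended at the tail and expired from the head, the oldest waiter is always the first to either be serviced or time out.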
The concurrency parameter
Next, there's a configurable max limit to how many upstream conn
requests moxi will process concurrently off the head of the wait
queue. This configurable limit is called
concurrency. (This was formerly known,
perhaps confusingly, as downstream_max. For backwards
compatibility, the concurrency and
downstream_max configuration flags are treated
the same.)
The concurrency configuration is per-thread and per-bucket. That
is, the moxi process-level concurrency is actually
concurrency X num-worker-threads X num-buckets.
If you run moxi yourself (client-side moxi), the default
concurrency configuration value is 4. In contrast, the membase
product spawns server-side moxi configured with a
downstream_max value of 1024. That
means server-side moxi will concurrently process 1024 upstream
conn requests from the head of the wait queue.
Taking the concurrency value of 1024 as an example, if you have 4 worker threads (the default, controlled by moxi's -t parameter) and 1 bucket (what most folks start out with, such as the "default" bucket), you'll have a limit of 1024 x 4 x 1 or 4096 concurrently processed client requests in that single moxi process.
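The process-level limit described above is just the product of the three settings. A quick check of the arithmetic, with an illustrative helper name:

```python
def process_level_concurrency(concurrency, worker_threads, buckets):
    """moxi's effective process-wide concurrency: the per-thread,
    per-bucket 'concurrency' setting multiplied out."""
    return concurrency * worker_threads * buckets

# Client-side moxi defaults: concurrency=4, -t 4 worker threads, 1 bucket.
client_side = process_level_concurrency(4, 4, 1)

# Server-side moxi spawned by membase: concurrency (downstream_max) = 1024.
server_side = process_level_concurrency(1024, 4, 1)
```

So a default client-side moxi processes at most 16 requests concurrently off the wait queue, while the server-side configuration allows 4096.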
The rationale behind the concurrency increase to 1024 for the server-side moxi configuration lies in the evolving design of moxi. Originally, the wait queue was moxi's only internal queue. As more later-stage queues were added over moxi's history, we found that getting requests off the wait queue sooner and onto the later-stage queues was a better approach. We'll discuss these later-stage queues below.
Next, let's discuss how client requests are matched to downstream connections.
The concurrently processed client requests (taken from the head of the wait queue) now need to be matched up with downstream connections to membase/memcached. If the client's request comes with a key (like a SET, DELETE, ADD, INCR, or single-key GET), the request's key is hashed to find the right downstream server "host:port:bucket" info, such as "memcache1:11211:default". If the client's request was a broadcast-style command (like FLUSH_ALL or a multi-key GET), moxi knows the downstream connections that it needs to acquire.
The downstream conn pool
Next, there's a lookup using those
host:port:bucket identifiers into a
downstream conn pool in order to acquire or reserve the
appropriate downstream conns. There's a downstream conn pool per
thread. Each downstream conn pool is just a hashmap keyed by
host:port:bucket, whose values are
linked lists of available downstream conns. The max length of any
downstream conn linked list is controlled by moxi's
downstream_conn_max configuration parameter.
By default, for client-side moxi, the
downstream_conn_max value is 0, which means
unlimited length. In the server-side moxi spawned by
membase, downstream_conn_max is
configured to be 4.
So, if you've set a
downstream_conn_max of 4,
have 4 worker threads, and have 1 bucket, you should see
moxi create a maximum of 4 X 4 X 1 or 16 connections to any
one downstream server.
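The per-thread pool structure described above can be sketched as a dict of available-conn lists capped at downstream_conn_max. This is an illustrative Python model, not moxi's C code; the class, method, and conn names are invented:

```python
class DownstreamConnPool:
    """Per-thread pool: a dict keyed by "host:port:bucket", each value a
    list of available downstream conns.  Creation is capped at
    downstream_conn_max per key; 0 means unlimited."""
    def __init__(self, downstream_conn_max=0):
        self.downstream_conn_max = downstream_conn_max
        self.avail = {}    # "host:port:bucket" -> available conns
        self.created = {}  # conns created per key, released or not

    def acquire(self, key):
        """Reserve an available conn, or create one if under the cap.
        Returns None when downstream_conn_max is reached (the caller
        must then wait on the downstream conn queue)."""
        conns = self.avail.setdefault(key, [])
        if conns:
            return conns.pop()
        if (self.downstream_conn_max == 0
                or self.created.get(key, 0) < self.downstream_conn_max):
            self.created[key] = self.created.get(key, 0) + 1
            return f"conn-{self.created[key]}"  # stand-in for connect()
        return None

    def release(self, key, conn):
        self.avail[key].append(conn)

pool = DownstreamConnPool(downstream_conn_max=1)
c1 = pool.acquire("memcache1:11211:default")   # created fresh
c2 = pool.acquire("memcache1:11211:default")   # cap reached -> None
pool.release("memcache1:11211:default", c1)
c3 = pool.acquire("memcache1:11211:default")   # reuses the released conn
```

The None return in the sketch corresponds to the case covered next: a request that must either create a new connection or wait on the downstream conn queue.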
Connecting to a downstream server
If there isn't a downstream conn available, and
downstream_conn_max hasn't been reached, moxi
creates a downstream conn as needed, performing a
connect() and SASL auth if required.
The connect_timeout and auth_timeout parameters
The connect() and SASL auth steps have
their own configurable timeout parameters, called
connect_timeout and
auth_timeout, and these are in
milliseconds. For client-side moxi, these have default
values of 0, meaning no connect or auth timeout. For the
server-side moxi spawned by membase, the
connect_timeout is configured to be 400
(milliseconds) and the auth_timeout is
configured to be 100 (milliseconds).
The downstream conn queue
If downstream_conn_max was reached, then
the request must wait until a downstream conn becomes
available, so the request is placed on a
per-host:port:bucket queue, which is
called a downstream conn queue. As downstream conns are
released back into the downstream conn pool, they will be
assigned to any requests that are waiting on the downstream
conn queue.
The downstream_conn_queue_timeout parameter
There is another configurable timeout, downstream_conn_queue_timeout, that defines how long a request should stay on the downstream conn queue in milliseconds before timing out. By default, client-side moxi has downstream_conn_queue_timeout of 0 (meaning no timeout). Server-side moxi is configured with a downstream_conn_queue_timeout of 200 milliseconds.
A downstream connection is reserved
Finally, at this point, downstream conn's are matched up for the client's request. If you've configured moxi to track timing histogram statistics, moxi will now get the official start time of the request. moxi now starts asynchronously sending request message bytes to the downstream conn and asynchronously awaits responses.
To turn on timing histogram statistics, use the "time_stats=1" configuration flag. By default, time_stats is 0 or off.
Next, if you've configured a
downstream_timeout, moxi starts a timer for the
request where moxi can limit the time it will spend processing a
request at this point. If the timer fires, moxi will return a
"SERVER_ERROR proxy downstream timeout" back to the client.
For client-side moxi, the
default value is off or 0 milliseconds. In the server-side moxi
processes spawned by Membase, the
downstream_timeout value is configured to 5000
(milliseconds). If moxi sees the timer fire, it will close any
downstream connections that were assigned to the request. Because
moxi simply closes downstream connections on
timeout, a very short downstream_timeout
is not recommended, in order to avoid churn from repeated
connection creation, timeout, closing, and reconnecting.
Responses are received
When all responses are received from the downstream servers for a request (or the downstream conn had an error), moxi asynchronously sends those responses to the client's upstream conn. If you've configured moxi to track timing histogram statistics, moxi now tracks the official end time of the request. The downstream conn is now released back to the per-thread downstream conn pool, and another waiting client request (if any) is taken off the downstream conn queue and assigned to use that downstream conn.
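The release-and-handoff step above can be sketched as follows: a released conn goes straight to the oldest waiter on that key's downstream conn queue, if any, rather than back into the pool. An illustrative Python model with invented names:

```python
from collections import deque

def release_downstream_conn(pool, conn_queue, key, conn):
    """On release, hand the downstream conn straight to the oldest
    request waiting on that key's downstream conn queue; otherwise
    return it to the per-thread pool.  Sketch only."""
    waiters = conn_queue.get(key)
    if waiters:
        request = waiters.popleft()
        return (request, conn)   # the waiting request resumes with this conn
    pool.setdefault(key, []).append(conn)
    return None

pool = {}
conn_queue = {"memcache1:11211:default": deque(["req-7"])}
handoff = release_downstream_conn(pool, conn_queue,
                                  "memcache1:11211:default", "conn-1")
# req-7 gets conn-1 immediately; the pool stays empty.
```

Handing the conn directly to a waiter avoids a round trip through the pool and keeps the downstream conn queue draining in FIFO order.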
At step 6 above (connecting to a downstream server), there's a case where a
connect() attempt might fail. moxi can be configured to count up the number of
connect() failures for a downstream server,
and will also track the time of the last failing
connect() attempt. With this
connect() failure counting, moxi
can be configured to blacklist a server if too many
connect() failures are seen, as
defined by the connect_max_errors configuration parameter. When more
than connect_max_errors number of
connect() failures are seen, moxi can be configured to temporarily stop making
connect() attempts to that server (or
backoff) for a configured amount of time. The backoff time is
defined via the connect_retry_interval configuration, in
milliseconds.
In client-side moxi, connect_max_errors and connect_retry_interval are by default 0, so this backoff/blacklisting behavior is off by default.
In the server-side moxi processes spawned by membase, the connect_max_errors is configured to be 5 and the connect_retry_interval is 30000 (that is, 30 seconds).
If you use the connect_max_errors parameter, it should be set greater than zero, along with a greater-than-zero connect_retry_interval.
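The blacklisting/backoff policy above can be sketched as a single predicate. This is an illustrative Python model of the decision, with invented names; moxi's actual bookkeeping lives in its C internals:

```python
def should_blacklist(error_count, last_error_ms, now_ms,
                     connect_max_errors, connect_retry_interval):
    """Decide whether moxi should skip connect() attempts to a server:
    once more than connect_max_errors failures are seen, back off until
    connect_retry_interval ms have passed since the last failure.
    A value of 0 disables the feature, as in client-side moxi."""
    if connect_max_errors <= 0:
        return False
    return (error_count > connect_max_errors
            and now_ms - last_error_ms < connect_retry_interval)

# Server-side membase settings: connect_max_errors=5,
# connect_retry_interval=30000 ms.  Sixth failure at t=0.
in_backoff = should_blacklist(6, last_error_ms=0, now_ms=10_000,
                              connect_max_errors=5,
                              connect_retry_interval=30_000)
retry_again = should_blacklist(6, last_error_ms=0, now_ms=31_000,
                               connect_max_errors=5,
                               connect_retry_interval=30_000)
```

At 10 seconds after the last failure the server is still blacklisted; at 31 seconds the 30-second retry interval has elapsed and connect() attempts resume.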