In the dynamic configuration world, configuration, whether it is a first-time configuration or a re-configuration, uses the work queues and the same code paths. That is, first-time configuration is just an edge case of re-configuration.
Also, to keep locking to a minimum, each thread has its own copy of the configuration data structures and strings. Each copy of the configuration is associated with a version number (called "config_ver") to allow for fast comparisons. That is, a thread can quickly test that the configuration hasn't changed just by comparing numbers.
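For example, a thread-local snapshot might carry the version it was built from, reducing the "did anything change?" test to a single integer comparison. This is a minimal sketch with hypothetical field names, not moxi's actual structures:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical sketch: each thread's private snapshot records the
     * config_ver it was built from. */
    struct config_snapshot {
        uint32_t config_ver;  /* version this snapshot was built from */
        char    *servers;     /* thread-private copy of config strings */
    };

    /* Cheap test: compare two integers instead of deep-comparing the
     * configuration data structures and strings. */
    static bool snapshot_is_current(const struct config_snapshot *snap,
                                    uint32_t latest_ver) {
        return snap->config_ver == latest_ver;
    }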
In the dynamic configuration world, it all starts with libconflate.
Libconflate has its own, dedicated thread for making HTTP/REST calls to retrieve a cluster configuration and processing JSON responses. When libconflate receives a proper REST response, it invokes moxi callbacks (so the dynamic re-configuration callbacks happen on libconflate's dedicated thread).
Those moxi callbacks are implemented in the agent_config.c file in moxi. The agent_config code next sends a synchronous work task (with the latest, successfully parsed config info) to moxi's main thread.
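The shape of that hand-off looks roughly like the following sketch. The names and the work-queue API here are illustrative stand-ins, not moxi's actual signatures from agent_config.c:

    #include <stdio.h>

    /* Illustrative stand-in for moxi's work-queue machinery. */
    typedef void (*work_fn)(void *data);

    static void work_send_sync(work_fn fn, void *data) {
        /* In moxi, this would enqueue a task onto the main thread's
         * work queue and block until the main thread completes it;
         * here we just run it inline so the sketch is self-contained. */
        fn(data);
    }

    /* Runs on the main thread, off its work queue. */
    static void on_new_config(void *data) {
        printf("main thread applying config: %s\n", (const char *)data);
    }

    /* Runs on libconflate's dedicated thread, after a successful REST
     * response has been parsed: hand the result to the main thread and
     * wait, so re-configurations are serialized. */
    static void conflate_config_callback(char *parsed_config) {
        work_send_sync(on_new_config, parsed_config);
    }

    int main(void) {
        conflate_config_callback("{\"buckets\": []}");
        return 0;
    }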
Note that there may be more than one proxy config in a single "re-configuration" work task. The reason is that each JSON configuration message, for simplicity, includes all buckets, even if most of the buckets haven't changed. For example, if there are 55 buckets, the JSON message will have 55 bucket configuration details in it.
Later, if another bucket is added, for example, the next JSON reconfiguration message will have 56 buckets in it (as opposed to just sending a delta).
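So a re-configuration message always carries the full bucket list, along the lines of this purely illustrative example (the field names and layout are made up for illustration, not the exact wire format):

    {
      "buckets": [
        { "name": "default",  "nodes": [ "10.1.1.10:11210", "10.1.1.11:11210" ] },
        { "name": "profiles", "nodes": [ "10.1.1.10:11210" ] }
      ]
    }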
This approach is wasteful of network bandwidth (during the infrequent re-configuration episodes), but allows for code simplicity. With this simplicity, the first-time pathway, the re-configuration pathway, and the restart-handling pathway are all the same, since everything that moxi needs to operate appears in a single JSON message.
The main thread tracks a list of active proxies. Each active proxy has a name, such as "default". In a multi-tenant deployment, this proxy name is the same as a bucket name. During the re-configuration work task, the main thread walks through its active proxies list and updates each proxy data structure appropriately. Also, new proxies are created and proxies that are no longer listed are deleted, as necessary.
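In outline, that walk amounts to a create/update pass followed by a delete pass, roughly like this sketch (structure and field names are illustrative, error handling is omitted, and locking is elided for brevity; see below):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct proxy {
        char         *name;        /* e.g. a bucket name like "default" */
        uint32_t      config_ver;  /* bumped on every (re-)configuration */
        struct proxy *next;
    };

    static int listed(const char *name, const char **names, int n) {
        for (int i = 0; i < n; i++) {
            if (strcmp(name, names[i]) == 0) return 1;
        }
        return 0;
    }

    static void on_reconfig(struct proxy **active,
                            const char **names, int n, uint32_t new_ver) {
        /* Pass 1: create proxies for newly listed buckets, and bump
         * config_ver on the existing ones. */
        for (int i = 0; i < n; i++) {
            struct proxy *p = *active;
            while (p != NULL && strcmp(p->name, names[i]) != 0) {
                p = p->next;
            }
            if (p == NULL) {  /* new bucket: create a proxy for it */
                p = calloc(1, sizeof(*p));
                p->name = strdup(names[i]);
                p->next = *active;
                *active = p;
            }
            p->config_ver = new_ver;  /* workers will notice the change */
        }

        /* Pass 2: delete proxies that are no longer listed. */
        for (struct proxy **pp = active; *pp != NULL; ) {
            if (!listed((*pp)->name, names, n)) {
                struct proxy *dead = *pp;
                *pp = dead->next;
                free(dead->name);
                free(dead);
            } else {
                pp = &(*pp)->next;
            }
        }
    }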
While the main thread is responsible for proxy data structures (and the linked list of proxy structures), each worker thread has its own copy or snapshot of this information (plus more thread-specific data), called a proxy_td data structure. This is short for "proxy thread data", and is often abbreviated as "ptd" in the code.
A proxy keeps track of its proxy_td's (one proxy_td per worker thread).
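Pictorially, the relationship looks something like this (an illustrative sketch, not a copy of the actual declarations in cproxy.h):

    #include <pthread.h>
    #include <stdint.h>

    struct proxy;  /* forward declaration */

    struct proxy_td {                  /* "ptd": one per worker thread */
        struct proxy *proxy;           /* parent proxy */
        uint32_t      config_ver;      /* version of this thread's snapshot */
        char         *config_copy;     /* thread-private copy of config data */
    };

    struct proxy {
        char            *name;         /* bucket name, e.g. "default" */
        pthread_mutex_t  proxy_lock;   /* guards shared, mutable fields */
        uint32_t         config_ver;   /* shared, mutable: read under lock */
        int              nthreads;     /* immutable after creation: lockless */
        struct proxy_td *ptds;         /* array of one proxy_td per worker */
    };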
A worker thread can freely mutate its proxy_td structure, but never touches the proxy_td's of other worker threads. So, a worker thread does not need locks to access its own proxy_td.
A worker thread (and the main thread, too) must use locking before accessing the shared, mutable fields on its "parent" proxy data structure. Some proxy data structure fields are inherently read-only and static, so they don't need locking. The code comments (cproxy.h) try to clearly specify which fields are immutable and lockless.
Between each request, each worker thread grabs a very short lock on its parent proxy data structure to compare config_ver numbers. If the config_ver numbers match, the worker thread knows that the proxy's configuration remains the same, and the request can proceed. If the numbers differ, the worker thread knows a re-configuration has occurred, and it removes its cached information and updates its proxy_td copy appropriately.
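Put together, the per-request check might look like this sketch (hypothetical names throughout; only the lock-briefly-and-compare pattern reflects the description above):

    #include <pthread.h>
    #include <stdint.h>

    struct proxy {
        pthread_mutex_t proxy_lock;
        uint32_t        config_ver;  /* shared, mutable: read under lock */
    };

    struct proxy_td {
        struct proxy *proxy;         /* parent proxy */
        uint32_t      config_ver;    /* version of this thread's snapshot */
    };

    /* Hypothetical helper: drop cached info and rebuild this thread's
     * snapshot from the parent proxy's current configuration. */
    static void ptd_rebuild(struct proxy_td *ptd) {
        (void)ptd;
    }

    /* Called by a worker thread between requests. */
    static void ptd_check_config(struct proxy_td *ptd) {
        pthread_mutex_lock(&ptd->proxy->proxy_lock);   /* very short lock */
        uint32_t latest = ptd->proxy->config_ver;
        pthread_mutex_unlock(&ptd->proxy->proxy_lock);

        if (ptd->config_ver != latest) {
            /* Re-configuration occurred: invalidate and re-snapshot. */
            ptd_rebuild(ptd);
            ptd->config_ver = latest;
        }
        /* Versions match: configuration unchanged, proceed with request. */
    }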