We are migrating to Membase from EHCache, and we are trying to address some performance concerns with Membase.
The application is a content website where data needs to be pulled for thousands of keys.
EHCache worked well because the cache lived on the same server - it was not distributed.
With Membase, the requests go out over the network to the Membase server to get the data, and this is introducing latency.
We have addressed most of it by doing a multi-get using asyncGetBulk(), but this still takes time when there is a large number of keys.
1. I am using 20 net.spy.memcached.MemcachedClient instances per server, and the application runs on two servers, so 40 MemcachedClient instances in total. Is there a guideline for deciding how many clients to use? If yes, can you please explain it?
2. What is the advisable number of keys that a single asyncGetBulk() call can effectively handle?
3. Are there any other tuning strategies I can apply to speed up the cache gets?
Server: Membase 1.6.5
Client: memcached-2.6.jar (Spymemcached)
Number of Membase Servers: 2 (4GB each)
Local Moxi: None
Data buckets: One (default)
Constructor used to create clients:
MemcachedClient(ConnectionFactory cf, List<InetSocketAddress> addrs)
The Multi-Get method:
Future<Map<String, Object>> f = getCache().asyncGetBulk(keys, serializingTranscoder);
Map<String, Object> data = f.get(2, TimeUnit.SECONDS);
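To make question 2 concrete: one mitigation I am experimenting with is splitting the key set into fixed-size batches and issuing one asyncGetBulk() per batch, so no single call carries thousands of keys. Below is a minimal sketch of just the partitioning step (the batch size of 500 is an arbitrary placeholder, not a recommendation, and KeyBatcher is a hypothetical helper name):

```java
import java.util.ArrayList;
import java.util.List;

public class KeyBatcher {
    // Split a list of cache keys into fixed-size batches so that each
    // asyncGetBulk() call operates on a bounded number of keys.
    // Uses subList views, so no key strings are copied.
    static List<List<String>> partition(List<String> keys, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            batches.add(keys.subList(i, Math.min(i + batchSize, keys.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 1200; i++) keys.add("key:" + i);

        // 500 is a placeholder batch size; the right value is part of question 2.
        List<List<String>> batches = partition(keys, 500);
        System.out.println(batches.size());        // 3 batches: 500 + 500 + 200
        System.out.println(batches.get(2).size()); // last batch has 200 keys

        // Each batch would then be passed to
        // getCache().asyncGetBulk(batch, serializingTranscoder)
        // and the resulting Futures collected before calling get().
    }
}
```

Is batching along these lines advisable with spymemcached, or does the client already pipeline large bulk requests efficiently enough that this only adds overhead?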
Please let me know if I am missing any information you need to answer these questions.