Bucket stats using Java SDK 3.1

I am trying to get bucket stats from the Couchbase server using the Java API. Currently I am using curl to get them:

curl -u [admin]:[password]
http://[localhost]:8091/pools/default/buckets/[bucket-name]

But I want to use the Java API to do the same.

Hi Gourav,

There’s a RawManager API for making HTTP requests to the Manager service (port 8091 / 18091).

CAVEAT: As of 3.1 it’s an “uncommitted” API, meaning it can change without notice in a patch version.

Here’s some sample code that uses RawManager to fetch the URL you’re interested in:

import com.couchbase.client.core.error.BucketNotFoundException;
import com.couchbase.client.core.service.ServiceType;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.json.JsonObject;
import com.couchbase.client.java.manager.raw.RawManager;
import com.couchbase.client.java.manager.raw.RawManagerRequest;
import reactor.core.publisher.Mono;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

static Mono<JsonObject> bucketStats(Cluster cluster, String bucketName) {
  String uri = "/pools/default/buckets/" + urlEncode(bucketName);
  return RawManager.call(cluster, RawManagerRequest.get(ServiceType.MANAGER, uri))
      .map(response -> {
        if (response.httpStatus() == 404) {
          throw BucketNotFoundException.forBucket(bucketName);
        }
        return response.contentAs(JsonObject.class);
      });
}

static String urlEncode(String s) {
  try {
    return URLEncoder.encode(s, StandardCharsets.UTF_8.name())
        .replace("+", "%20"); // Make sure spaces are encoded as "%20"
    // so the result can be used in path components and with "application/x-www-form-urlencoded"
  } catch (UnsupportedEncodingException inconceivable) {
    throw new AssertionError("UTF-8 not supported", inconceivable);
  }
}
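As a quick sanity check (standalone, no Couchbase connection needed), the urlEncode helper produces path-safe output. The bucket names below are just illustrative placeholders:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UrlEncodeCheck {
  // Same helper as above: form-encode, then turn "+" into "%20"
  // so the result is also safe inside a URL path component.
  static String urlEncode(String s) {
    try {
      return URLEncoder.encode(s, StandardCharsets.UTF_8.name()).replace("+", "%20");
    } catch (UnsupportedEncodingException inconceivable) {
      throw new AssertionError("UTF-8 not supported", inconceivable);
    }
  }

  public static void main(String[] args) {
    System.out.println(urlEncode("travel sample")); // travel%20sample
    System.out.println(urlEncode("beer/sample"));   // beer%2Fsample
  }
}
```

Note that a plain `URLEncoder.encode` would emit "+" for spaces, which is only valid in query strings, not in path segments like /pools/default/buckets/[bucket-name].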

Thanks,
David


Thanks for the reply, I am able to get the response. Can you please also share the API for creating a bucket?

If you’re reading this in the future, the Couchbase Java SDK’s BucketManager API documentation might be useful. Meanwhile, take a look at the Javadoc for Cluster.buckets(), BucketManager, and BucketSettings.

Cluster cluster = ...
cluster.buckets().createBucket(
    BucketSettings.create("bucketName")
        .ramQuotaMB(100));

Thanks,
David

Hi David ,
This method always fails with this error

com.couchbase.client.core.error.CouchbaseException: {"errors":{"ramQuotaMB":"RAM quota specified is too large to be provisioned into this cluster.","maxTTL":"Max TTL is supported in enterprise edition only","compressionMode":"Compression mode is supported in enterprise edition only","replicaNumber":"Warning: you do not have enough data servers to support this number of replicas."},"summaries":{"ramSummary":{"total":8566865920,"otherBuckets":8566865920,"nodesCount":1,"perNodeMegs":100,"thisAlloc":104857600,"thisUsed":0,"free":-104857600},"hddSummary":{"total":1000240963584,"otherData":20003149897,"otherBuckets":1669374,"thisUsed":0,"free":980236144313}}}

Hi Gourav,

There are several errors/warnings here. Let’s look at each one separately.

  • RAM quota specified is too large to be provisioned into this cluster

This is just what it sounds like. The Couchbase cluster’s Data (K/V) service hasn’t been allocated enough RAM to accommodate the new bucket. You can read more about memory allocation here: Memory | Couchbase Docs

You can lower the RAM quota for the new bucket (100 MB is the minimum) or allocate more memory to the Data service. In the web admin console, go to “Settings > General > Memory Quota”. There’s also a REST API for managing memory allocation.

  • Warning: you do not have enough data servers to support this number of replicas.

BucketSettings.numReplicas defaults to 1. To avoid this warning, explicitly set numReplicas to 0 or add another Couchbase node running the Data (K/V) service.

  • Max TTL is supported in enterprise edition only
  • Compression mode is supported in enterprise edition only

These two errors are due to a bug in the SDK which was fixed in version 3.1.6. If you still see them after upgrading to 3.1.6, we definitely want to hear about it.
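Putting those fixes together, here is a sketch of a create call that should avoid the replica warning on a single-node cluster (the bucket name and quota are placeholders; pick a quota that fits your Data service's free RAM):

```java
cluster.buckets().createBucket(
    BucketSettings.create("bucketName")
        .ramQuotaMB(100)     // 100 MB is the minimum; must fit the Data service's free RAM
        .numReplicas(0));    // only one data node, so no replicas
```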

Thanks,
David


It worked with version 3.1.6 and setting numReplicas to 0.


Hi David,

I have a use case where I want to get the disk size used by a partition column (e.g. type) in a bucket. I want to see the distribution, e.g. like below:

  • type = "route"72.6%

  • type = "landmark"11.1%

  • type = "hotel"2.3%

  • type = "airport"4.5%

I don’t know how to do this, but I bet if you started a new forum thread in the N1QL category you’d get some interesting responses.

Hi David,

I need help with one more thing. I am using the Couchbase Java SDK to get data from a Couchbase bucket, but it is getting TIMED OUT. I am attaching the code below.

final QueryResult result = cluster.query("select " + type + " from " + bucket + " group by " + type);

List<Map<String, Object>> list = new ArrayList<Map<String, Object>>();

for (JsonObject row : result.rowsAsObject()) {
  Map<String, Object> map = new ObjectMapper().readValue(
      row.toString(), new TypeReference<Map<String, Object>>() {});
  list.add(map);
}

return list;

Error
com.couchbase.client.core.error.AmbiguousTimeoutException: QueryRequest, Reason: TIMEOUT
at com.couchbase.client.java.AsyncUtils.block(AsyncUtils.java:51)
at com.couchbase.client.java.Cluster.query(Cluster.java:393)
at com.couchbase.client.java.Cluster.query(Cluster.java:380)

Let me know how I can improve my code.

Hi Gourav,

I am using the Couchbase Java SDK to get data from a Couchbase bucket, but it is getting TIMED OUT.

Are you getting a timeout exception from the SDK (indicating the server is taking too long to run the query), or is your Java code just taking too long to run? If it’s a timeout exception, I would recommend starting a new thread in the N1QL forum so the experts see it and can help tune the query.

If your question is how to make the Java code run faster, I can offer some tips.

  • Create a single ObjectMapper and reuse it. A Jackson ObjectMapper is thread-safe after configuration.

  • OR… instead of using your own ObjectMapper, let the SDK convert the result rows to a Map for you:

List<Map<String, Object>> list = result.rowsAs(
    new TypeRef<Map<String, Object>>() {}
);
  • The code you shared uses the blocking Query API, and stores the entire result set in memory. If the result set is very large, it may be more efficient to use the Reactive API to process each result row as it arrives from the server. See Async and Reactive APIs | Couchbase Docs
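As a rough illustration of that last point, here is a reactive sketch that handles each row as it streams in instead of buffering the whole result (assuming a connected Cluster; handleRow is a hypothetical callback of yours):

```java
cluster.reactive()
    .query("select " + type + " from " + bucket + " group by " + type)
    .flatMapMany(ReactiveQueryResult::rowsAsObject)
    .map(JsonObject::toMap)            // the SDK converts each row to a Map for you
    .doOnNext(map -> handleRow(map))   // process one row at a time, no full buffering
    .blockLast();                      // or subscribe() to stay fully non-blocking
```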

EDIT: Sorry, I failed to notice you already shared the stack trace for the AmbiguousTimeoutException. I would definitely post in the N1QL forum to see if it’s possible to optimize the query.

Thanks,
David