Get all indexes requires `collectionName` instead of `bucketName`

The parameter naming below is very misleading:

Debugging the underlying query reveals that the user has to provide a collection name instead of a bucket name.

Hello @zoltan.zvara, assuming you are using SDK 3.0, many operations are at the collection level rather than at the bucket level.


@zoltan.zvara the underlying query maps to `SELECT idx.* FROM system:indexes AS idx WHERE keyspace_id = …`, and a keyspace can simply be a bucket name, as detailed here. So you can supply a bucket name there, which is something many of our tests do - is that not working for you?
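To illustrate why either name works, here is a minimal sketch of how such a call could build its underlying N1QL statement. The class and method names are hypothetical, not the actual SDK internals; the point is only that `system:indexes` filters on `keyspace_id`, which accepts a bucket name as well as a collection name.

```java
// Hypothetical sketch (not actual SDK code): building the N1QL statement
// behind a getAllIndexes-style call.
public class IndexQueryBuilder {

    // keyspaceId may be a bucket name or a collection name, because
    // system:indexes keys its entries on keyspace_id either way.
    static String buildGetAllIndexesStatement(String keyspaceId) {
        return "SELECT idx.* FROM system:indexes AS idx WHERE keyspace_id = \""
                + keyspaceId + "\"";
    }

    public static void main(String[] args) {
        // Passing a bucket name here is valid, even though some API
        // signatures label the parameter as a collection name.
        System.out.println(buildGetAllIndexesStatement("travel-sample"));
    }
}
```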


@graham.pople @AV25242 it works; I was just pointing out that in many places the SDK 3.0 documentation and API state that a bucket name is required, while a collection name is actually expected.

The other day, while upgrading DCP, I noticed that the Java DCP Client can be configured with a scope and collection, but the collection must be provided as “scope.collection”, not as the collection name itself, unlike in the SDK 3.0 API. The DCP client also states that a “scope” can be opened as a stream, but I suspect that would not work, because otherwise there is no way to operate on a scope in the SDK: one also has to specify a collection.
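A tiny illustrative helper shows the difference in formats. This is not part of the DCP client or the SDK; it is only a sketch of parsing the DCP-style “scope.collection” string, as opposed to the plain collection name the SDK 3.0 API takes.

```java
// Illustrative helper (hypothetical, not DCP client code): splitting the
// "scope.collection" string the Java DCP Client expects into its parts.
public class CollectionSpec {
    final String scope;
    final String collection;

    CollectionSpec(String scope, String collection) {
        this.scope = scope;
        this.collection = collection;
    }

    // Parses "scope.collection"; a bare collection name (no dot) is rejected,
    // which is exactly the surprise described above.
    static CollectionSpec parse(String qualified) {
        int dot = qualified.indexOf('.');
        if (dot < 0) {
            throw new IllegalArgumentException(
                    "Expected \"scope.collection\", got: " + qualified);
        }
        return new CollectionSpec(
                qualified.substring(0, dot), qualified.substring(dot + 1));
    }

    public static void main(String[] args) {
        CollectionSpec spec = CollectionSpec.parse("inventory.airline");
        System.out.println(spec.scope + " / " + spec.collection);
    }
}
```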

Anyway, I could continue; I’m just pointing out that the documentation and API need a great deal of polish, by fixing the naming and adding examples for clarification.

I hope I will be able to contribute to that with the upcoming Spark Connector; I would be happy to help.
