Gluster PVC issue

We are trying to store Couchbase data on a GlusterFS FUSE mount, but it does not seem to work as of now. However, as per the community docs, Gluster storage is listed as supported. Could anybody from Couchbase help us resolve this issue?

Hello team, any response is much appreciated.

Hi @vishnu.ram,

Would you please provide some more detail about what you’ve tried to do, what you expect to happen, and what’s not working (error messages, etc.)?

We are using Gluster storage for volume provisioning. Our YAML includes a volume claim template, so when Couchbase comes up it allocates storage for the data volume. The problem is that the volume for the data directory is mounted as fuse.glusterfs, and the console does not reflect the actual allocated storage; it shows the size of the / partition instead (a verification sketch follows the points below).

Points to check:
Is FUSE unsupported, or just not recommended?
How well does gluster-block work with Couchbase?
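
For reference, a quick way to check what was actually provisioned; the storage class and PVC names below are placeholders for illustration:

```
# List storage classes and confirm the Gluster provisioner backs the one in use
kubectl get storageclass

# Inspect the PVC bound to the Couchbase data volume and its backing PV
kubectl get pvc data-cb-example-0000 -o wide
kubectl describe pv "$(kubectl get pvc data-cb-example-0000 \
  -o jsonpath='{.spec.volumeName}')"
```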

Hello team,

If it’s not supported, I will go for another option.

Hi

This is a known issue with how the UI calculates and displays the available storage, so although the numbers look wrong, it will still work! I suspect the fix will appear in the Couchbase Server release after 6.5.0. I will post a link once I find it for tracking purposes.

Regards,
Si

No. Even in the backend, couchbase-cli server-info only lists the HDD volumes, not the fuse.glusterfs mounts. When I check with EBS or other block devices, it works well. It seems the fuse.glusterfs networked filesystem is incompatible up to release 6.0 (as per our testing).
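
For reference, the comparison we ran looks roughly like this (the Pod name, cluster address, and credentials are placeholders):

```
# Ask Couchbase what storage it sees; the gluster mount is missing here
kubectl exec -it cb-example-0000 -- couchbase-cli server-info \
  -c localhost:8091 -u Administrator -p password

# Compare against what is actually mounted at the data path
kubectl exec -it cb-example-0000 -- df -hT /mnt/data
```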

Hi CB team,

Only gluster-block is supported with Couchbase, not GlusterFS. Please update the document to say gluster-block, for better clarity for users.

CB team,
Can you give an update on this?

Hi Vishnu,

Thanks for your patience. After looking into this, we’ve concluded that the current documentation is correct: GlusterFS is supported, but gluster-block is not. The reason is that gluster-block cannot be used to reliably recover Pods if Kubernetes decides to schedule them on a different node, since “gluster-block volumes can only be mounted by one node/client at a time” (unlike GlusterFS).

While it’s true that gluster-block currently seems more compatible with Couchbase itself when it comes to displaying available storage, the same will be true for GlusterFS once we address these issues in Couchbase 6.5. In our testing we’ve verified that actual usage of GlusterFS does in fact work with the Couchbase Pods, aside from this one display issue, which has no impact on the use of the cluster itself.
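
For context, one way to see the access mode a given claim was granted (the PVC name is a placeholder): gluster-block claims report ReadWriteOnce, while GlusterFS claims can report ReadWriteMany, which is what lets a rescheduled Pod remount the volume on another node.

```
# Print the access modes granted to the data volume's claim
kubectl get pvc data-cb-example-0000 -o jsonpath='{.spec.accessModes}{"\n"}'
```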

I don’t see any release for version 6.5. If possible, can I get a download link?
Please note that up to 6.0.1 it is not working: we have tested thoroughly, and the Gluster FUSE filesystem does not show up in couchbase-cli server-info; it picks up the host storage instead.
When testing with block storage, it shows the right info. I am also aware that block storage only supports ReadWriteOnce.

The info in the Couchbase UI is in fact incorrect, and we are aware of this, but you can check that your actual GlusterFS disks are being used to store the data: do a kubectl describe on one of the Couchbase Pods and look for the volumeMount for the ‘/mnt/data’ path. If it is there, your disks are allocated correctly; the UI is probably showing you allocations from a different path, which is the issue in 6.0.1.
There isn’t a 6.5 release yet, so you will need to do this kind of verification until then if your tests require it.
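
A minimal sketch of that check (the Pod name is a placeholder):

```
# Confirm /mnt/data is backed by a volumeMount from the Gluster-provisioned PVC
kubectl describe pod cb-example-0000 | grep -B 2 -A 2 '/mnt/data'

# Cross-check which PV each claim is bound to
kubectl get pvc -o wide
```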

Hi Tommie,
Yes, we noticed the mounts inside the containers, but the mismatch between the UI and the backend created a lot of confusion. Thanks for the clarification; we will take your post as a go-ahead for our production release.

Thanks
Vishnu

Hi @vishnu.ram,

As @tommie mentioned, this issue is release-noted under Known Issues and Limitations in the 1.1.0 release. It is targeted to be fixed in the upcoming Couchbase Server 6.5.0 release, in the Q3 timeframe.

Just as an aside, if you’re planning a production deployment, I’m very interested in talking to you to understand your deployment model, Kubernetes version/platform, etc. We can also provide some guidelines in terms of best practices and recommendations. You can contact me directly at ani@couchbase.com.

Thanks!
Anil Kumar