Why do I get a full-disk warning when the Dashboard shows 2% use of the data disk?

I get a warning “Approaching full disk warning. Usage of disk “/” on node “” is around 95%.”

However, the Dashboard shows minimal use of the data-disk.

Data Service Disk — usable free space (188 GB):
- in use by couchbase: 3.13 GB
- other data: 4.7 GB
- free: 188 GB

I have a single-node Enterprise Edition 6.0.1 build 2037 installation, with a separate 180 GB disk used for data.

ls confirms that the data is indeed on that larger disk:

```
/mnt/disks/disk-1$ sudo ls couchbase/var/lib/couchbase/data
@2i @analytics @fts @indexes fromds metricsBucket
```

In fact, df shows that none of the partitions is especially full.

Why am I getting this warning?

```
Filesystem      Size  Used Avail Use% Mounted on
udev            7.1G     0  7.1G   0% /dev
tmpfs           1.5G  912K  1.5G   1% /run
/dev/sda1       9.6G  3.3G  6.3G  35% /
tmpfs           7.1G     0  7.1G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.1G     0  7.1G   0% /sys/fs/cgroup
/dev/loop0       59M   59M     0 100% /snap/google-cloud-sdk/79
/dev/loop2       58M   58M     0 100% /snap/google-cloud-sdk/78
/dev/loop1       90M   90M     0 100% /snap/core/6818
/dev/loop3       59M   59M     0 100% /snap/google-cloud-sdk/80
/dev/loop4       90M   90M     0 100% /snap/core/6673
/dev/sda15      105M  3.6M  101M   4% /boot/efi
/dev/sdb        196G  6.8G  190G   4% /mnt/disks/disk-1
tmpfs           1.5G     0  1.5G   0% /run/user/1008
```
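One thing worth noting in that df output: the snap loop devices report 100% use, because squashfs images are read-only and therefore always "full". If a disk-usage monitor naively scans every mounted filesystem, those mounts will always trip a high-usage threshold. A quick sketch (assuming GNU df and awk) to list every mount at or above a threshold:

```shell
# Print any mounted filesystem whose Use% is at or above THRESHOLD.
# Sketch only; assumes GNU coreutils df and awk.
THRESHOLD=90
df -h | awk -v t="$THRESHOLD" 'NR > 1 { p = $5; sub(/%/, "", p); if (p + 0 >= t) print $1, $5, $6 }'
```

On the system above this would flag only the /snap/* loop devices, which supports the idea that the warning is not about the data disk.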

There is free space on every mounted filesystem, so the question remains unresolved.

One other side note: the “Query Temp Disk Path” is /opt/couchbase/var/lib/couchbase/tmp, which is on my base system disk, not on the extra 185 GB disk. I was unable to change that, however: the Settings page says “total quota (14910MB) exceeds the maximum allowed quota (13505MB) on node ‘ns_1@’” and will not let me reduce or increase the memory, so the Save button stays disabled.

Have you found a solution for this? I am having the same issue. Appears to be plenty of space but getting this error. Now I am getting write failures for the same reason.

For anyone else experiencing this:

I finally resolved this by executing a series of Linux commands over SSH.

**Summary of commands:**

ssh -i "your.own.pem" ec2-user@ec2-1-222-111-99.compute-1.amazonaws.com

  1. sudo growpart /dev/xvda 1 (grow partition 1 to fill the disk)
  2. lsblk (confirm the partition now shows the new size)
  3. mount | grep xvda1
  4. lsblk
  5. df -h (the filesystem still reports the old size at this point)
  6. mount | grep xvda1
  7. sudo resize2fs /dev/xvda1 (grow the ext4 filesystem to fill the partition)
  8. df -h (confirm the new size)
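Taken together, only two of those commands actually change anything: growpart extends the partition table entry, and resize2fs extends the ext4 filesystem inside it; the lsblk, mount, and df calls are verification steps. A dry-run sketch of the effective sequence (the device /dev/xvda and partition number 1 are taken from the list above; adjust them for your disk before setting DRY_RUN=false):

```shell
#!/bin/sh
# Dry-run sketch of the resize sequence: grow the partition table entry,
# then grow the ext4 filesystem inside it. Set DRY_RUN=false to execute.
DISK=/dev/xvda
PART=1
DRY_RUN=true

run() {
  if [ "$DRY_RUN" = true ]; then
    echo "would run: $*"   # preview the command without executing it
  else
    "$@"
  fi
}

run sudo growpart "$DISK" "$PART"     # extend partition 1 to fill the disk
run sudo resize2fs "${DISK}${PART}"   # extend the ext4 filesystem to the partition
run df -h                             # confirm the filesystem's new size
```

Note that resize2fs only works for ext2/3/4; on an XFS root you would use xfs_growfs instead.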