I am using the code below. It connects just fine to our shared dev server, but I am trying to connect to a Couchbase instance on my local machine. The local connection does not work, even though the same username and password work to connect via cbq.
Error:
Can not open bucket { CouchbaseError: Authentication failed. You may have provided an invalid username/password combination
message: 'Authentication failed. You may have provided an invalid username/password combination',
code: 2 }
I have tried multiple things:
- changing the server string to 'couchbase://localhost', 'http://localhost:8091', and many more variations
- passing a blank password as the second argument to cluster.openBucket()
- using curl to get the saslPassword and passing that to cluster.openBucket() (I get this error: Error: You must not specify a bucket password when using cluster-level authentication.)
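A minimal sketch of the shape of that connection code (not the exact code; the connection string, bucket name, and password are placeholders):

const couchbase = require('couchbase');

// Placeholder connection string; I have tried several variations here
const cluster = new couchbase.Cluster('couchbase://localhost');

// Bucket-level style: bucket name plus a password on openBucket, which is
// what works against our shared dev server
const bucket = cluster.openBucket('ab', 'password', function (err) {
  if (err) {
    console.error('Can not open bucket', err);
    return;
  }
  console.log('bucket opened');
});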
Hey @roy.a.rico,
Could you let us know what server version you are using for your application? Our authentication behaviour changed around the 5.x releases: if you are using an earlier version you will need to use bucket-level authentication, whereas on 5.x+ you would use cluster-level authentication.
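For reference, a minimal sketch of the bucket-level style used by pre-5.0 servers (bucket name and password are placeholders):

const couchbase = require('couchbase');

const cluster = new couchbase.Cluster('couchbase://localhost');

// Pre-5.0: authenticate per bucket, passing the bucket's own password
const bucket = cluster.openBucket('myBucket', 'myBucketPassword', function (err) {
  if (err) console.error(err);
});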
Cheers, Brett
Hey @roy.a.rico,
It looks like you are using a server with cluster-level authentication, but are attempting to use bucket-level authentication from the SDK. You will need to set up a user on your cluster (or use one of the predefined users, such as Administrator) and pass it in your cluster.authenticate call. Users are then granted access to individual buckets through the UI, and a password is no longer needed on the cluster.openBucket call.
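And a minimal sketch of the cluster-level style for 5.x+ (user name, password, and bucket are placeholders):

const couchbase = require('couchbase');

const cluster = new couchbase.Cluster('couchbase://localhost');

// 5.x+: authenticate once at the cluster level with an RBAC user...
cluster.authenticate('myUser', 'myPassword');

// ...then open buckets with no password argument
const bucket = cluster.openBucket('myBucket', function (err) {
  if (err) console.error(err);
});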
Cheers, Brett
Thanks for the reply, and sorry for my slow response. I do have a user created for this bucket and am still having issues. I also ran SDK doctor on my local machine. Any more ideas?
$ ./sdk-doctor-macos diagnose couchbase://localhost/ab -u ab -p test1234
|====================================================================|
| ___ ___ _ __ ___ ___ ___ _____ ___ ___ |
| / __| \| |/ /__| \ / _ \ / __|_ _/ _ \| _ \ |
| \__ \ |) | ' <___| |) | (_) | (__ | || (_) | / |
| |___/___/|_|\_\ |___/ \___/ \___| |_| \___/|_|_\ |
| |
|====================================================================|
Note: Diagnostics can only provide accurate results when your cluster
is in a stable state. Active rebalancing and other cluster configuration
changes can cause the output of the doctor to be inconsistent or in the
worst cases, completely incorrect.
13:05:01.540 INFO ▶ Parsing connection string `couchbase://localhost/ab`
13:05:16.734 INFO ▶ Connection string identifies the following CCCP endpoints:
13:05:16.734 INFO ▶ 1. localhost:11210
13:05:16.734 INFO ▶ Connection string identifies the following HTTP endpoints:
13:05:16.734 INFO ▶ 1. localhost:8091
13:05:16.734 INFO ▶ Connection string specifies bucket `ab`
13:05:16.734 WARN ▶ Your connection string specifies only a single host. You should consider adding additional static nodes from your cluster to this list to improve your applications fault-tolerance
13:05:16.734 INFO ▶ Performing DNS lookup for host `localhost`
13:05:16.734 WARN ▶ Bootstrap host `localhost` has more than one single DNS entry associated. While this is not neccessarily an error, it has been known to cause difficult-to-diagnose problems in the future when routing is changed or the cluster layout is updated.
13:05:16.735 INFO ▶ Bootstrap host `localhost` has IPv6 addresses associated. This is only supported in Couchbase Server 5.5 or later, and must be specifically enabled on the cluster.
13:05:16.735 INFO ▶ Attempting to connect to cluster via CCCP
13:05:16.735 INFO ▶ Attempting to fetch config via cccp from `localhost:11210`
13:05:16.737 INFO ▶ Identified the following nodes:
13:05:16.737 INFO ▶ [0] localhost
13:05:16.737 INFO ▶ moxi: 11211, indexScan: 9101, indexStreamMaint: 9105
13:05:16.737 INFO ▶ indexHttps: 19102, kvSSL: 11207, n1ql: 8093
13:05:16.737 INFO ▶ mgmtSSL: 18091, indexAdmin: 9100, capiSSL: 18092
13:05:16.737 INFO ▶ projector: 9999, indexHttp: 9102, indexStreamCatchup: 9104
13:05:16.737 INFO ▶ kv: 11210, n1qlSSL: 18093, capi: 8092
13:05:16.737 INFO ▶ mgmt: 8091, fts: 8094, ftsSSL: 18094
13:05:16.737 INFO ▶ indexStreamInit: 9103
13:05:16.737 INFO ▶ Fetching config from `localhost:8091`
13:05:16.739 INFO ▶ Failed to retreive cluster information (status code: 401)
13:05:16.740 INFO ▶ Successfully connected to KV service at `localhost:11210`
13:05:16.743 INFO ▶ Successfully connected to MGMT service at `localhost:8091`
13:05:16.765 INFO ▶ Successfully connected to CAPI service at `localhost:8092`
13:05:16.766 INFO ▶ Successfully connected to N1QL service at `localhost:8093`
13:05:16.767 INFO ▶ Successfully connected to FTS service at `localhost:8094`
13:05:16.768 INFO ▶ Memd Nop Pinged `localhost:11210` 10 times, 0 errors, 0ms min, 0ms max, 0ms mean
13:05:16.768 INFO ▶ Diagnostics completed
Summary:
[WARN] Your connection string specifies only a single host. You should consider adding additional static nodes from your cluster to this list to improve your applications fault-tolerance
[WARN] Bootstrap host `localhost` has more than one single DNS entry associated. While this is not neccessarily an error, it has been known to cause difficult-to-diagnose problems in the future when routing is changed or the cluster layout is updated.
Found multiple issues, see listing above.
I realize this thread is dated; however, I am having a similar issue connecting to a cluster.
The configuration is identical to our dev-test buckets in a separate cluster, which has authenticated just fine for months. We are attempting to release our production app and have formed a production 3-node cluster; it is with this new production cluster that we are receiving the error. Below are the particulars:
Couchbase Cluster Configuration
Couchbase 3-node cluster (6.0.4)
Amazon Linux 2
4 cores
32 GB RAM
300 GB hard drive
N1QL Query, Search, and Index services are enabled
Bucket name: "xyz"
AWS Lambda deployment configuration
Node.js 10.x
Identical code base to the existing, working stack that successfully connects to the dev-test Couchbase cluster.
- Baseline test of the new production Lambda serverless API deployment connecting to the known, working dev bucket environment: success! Returns data as expected.
- Baseline query against the test bucket: success! Returns data as expected.
- Identical query against the new production Couchbase cluster: fail! Returns the following (the failing call is reconstructed as a sketch below):
Error: An FTS error occured: {"error":"rest_auth: cbauth.AuthWebCreds, err: Authentication failure","request":{"ctl":{"timeout":75000},"fields":["*"],"indexName":"screened_request","query":{"conjuncts":[{"end":"2020-03-06T05:00:00.000Z","field":"completed","inclusive_end":false,"inclusive_start":true,"start":"2020-03-01T05:00:00.000Z"}]},"size":0},"status":"fail"}
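For reference, the failing call is roughly the following FTS query, reconstructed from the request body in the error above (host and credentials are placeholders):

const couchbase = require('couchbase');
const SearchQuery = couchbase.SearchQuery;

const cluster = new couchbase.Cluster('couchbase://node1.example.com');
cluster.authenticate('appUser', 'appPassword');
const bucket = cluster.openBucket('xyz');

// Date-range search against the screened_request index, per the error body
const query = SearchQuery.new('screened_request',
  SearchQuery.conjuncts(
    SearchQuery.dateRange()
      .start('2020-03-01T05:00:00.000Z', true)   // inclusive_start: true
      .end('2020-03-06T05:00:00.000Z', false)    // inclusive_end: false
      .field('completed')));

bucket.query(query, function (err, rows) {
  if (err) {
    console.error('FTS query failed:', err);
    return;
  }
  console.log(rows);
});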
UPDATE:
I aimed my request at another endpoint which exclusively runs an N1QL query and not FTS. I get a more detailed error, but what it's reporting is a lie:
{ Error: User does not have credentials to run SELECT queries on the xyz bucket. Add role query_select on xyz to allow the query to run.
at /..../node_modules/couchbase/lib/bucket.js:906:17
requestID: 'bc6a646c-8907-4f3b-8d31-aba282f277bd',
code: 13014,
otherErrors: [] }
'{"requestID":"bc6a646c-8907-4f3b-8d31-aba282f277bd","code":13014,"otherErrors":[]}'
Is it possible for you to run the Node.js SDK with an environment variable set: LCB_LOGLEVEL=5? This will print out a large quantity of debug information that will be helpful in diagnosing the specific issue here. Additionally, would you be able to tell me whether your Lambda functions exhibit this issue at all times, or only occasionally, possibly after your application has been frozen by Lambda?
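For example, LCB_LOGLEVEL=5 node app.js from a shell. Where prefixing the command isn't possible, setting the variable in-process before the SDK connects should also work; a sketch, assuming libcouchbase reads the variable when the connection is created (on Lambda itself, a function-level environment variable is the cleaner route; host, credentials, and bucket are placeholders):

// Enable verbose libcouchbase logging before any connection is created
process.env.LCB_LOGLEVEL = '5';

const couchbase = require('couchbase');
const cluster = new couchbase.Cluster('couchbase://node1.example.com');
cluster.authenticate('appUser', 'appPassword');
const bucket = cluster.openBucket('xyz', function (err) {
  // the verbose debug log is written to stderr during bootstrap and queries
  if (err) console.error(err);
});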
Cheers, Brett
All the Lambda functions, including these production ones, run perfectly when connecting to our dev-test cluster; I established that baseline first. The issue appeared only after we updated the connection config to point at the production cluster. I can even run all 3 AGW Lambdas locally with Serverless Offline: they always return results from dev-test, never from prod.
I have pasted the installation script used to create this cluster. Perhaps you will note something in it that may explain the issue. I like to manually configure the rest myself.
#!/bin/bash
echo 'Running startup script...'
adminUsername=administrator
adminPassword=***
version=6.0.4
stackName=couchbase-services
#######################################################
############## Install Couchbase Server ###############
#######################################################
echo "Installing Couchbase Server..."
wget https://packages.couchbase.com/releases/6.0.4/couchbase-server-enterprise-6.0.4-amzn2.x86_64.rpm
rpm --install couchbase-server-enterprise-6.0.4-amzn2.x86_64.rpm
#######################################################
############ Turn Off Transparent Hugepages ###########
#######################################################
echo "Turning off transparent hugepages..."
echo "#!/bin/bash
### BEGIN INIT INFO
# Provides: disable-thp
# Required-Start: \$local_fs
# Required-Stop:
# X-Start-Before: couchbase-server
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Disable THP
# Description: disables Transparent Huge Pages (THP) on boot
### END INIT INFO
echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag
" > /etc/init.d/disable-thp
chmod 755 /etc/init.d/disable-thp
service disable-thp start
chkconfig disable-thp on
#######################################################
################# Set Swappiness to 0 #################
#######################################################
echo "Setting swappiness to 0..."
sysctl vm.swappiness=0
echo "
# Required for Couchbase
vm.swappiness = 0
" >> /etc/sysctl.conf
#######################################################
##################### Format Disk #####################
#######################################################
echo "Formatting disk..."
DEVICE=/dev/sdk
MOUNTPOINT=/mnt/datadisk
echo "Creating the filesystem."
mkfs -t ext4 ${DEVICE}
echo "Updating fstab"
LINE="${DEVICE} ${MOUNTPOINT} ext4 defaults,nofail 0 2"
echo -e "${LINE}" >> /etc/fstab
echo "Mounting the disk"
mkdir $MOUNTPOINT
mount -a
echo "Changing permissions"
chown couchbase $MOUNTPOINT
chgrp couchbase $MOUNTPOINT
#######################################################
################# Configure Couchbase #################
#######################################################
echo "Configuring Couchbase..."
yum -y update
yum -y install jq
@brett19, I set the env var you instructed. Below is the output (much info). These are the results returned from a Serverless Offline call on my local port 5087. I will sls deploy, run again in the AGW Lambda, and supply those results.
19ms [I5c05bae9] {30829/307} [TRACE] (confmon - L:265) Attempting to retrieve cluster map via FILE
22ms [I5c05bae9] {30829/307} [INFO] (confmon - L:154) Setting new configuration. Received via FILE
22ms [I5c05bae9] {30829/307} [TRACE] (bc_file - L:203) Not writing configuration originating from PHONY or FILE to cache
23ms [I5c05bae9] {30829/307} [INFO] (bootstrap - L:90) Selected network configuration: "(null)"
23ms [I5c05bae9] {30829/307} [TRACE] (http-io - L:249) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> POST http://ec2-54-221-188-124.compute-1.amazonaws.com:8093/query/service. Body=637 bytes
23ms [I5c05bae9] {30829/307} [DEBUG] (lcbio_mgr - L:448) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (HE=0x10299de80) Creating new connection because none are available in the pool
23ms [I5c05bae9] {30829/307} [TRACE] (lcbio_mgr - L:359) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (HE=0x10299de80) New pool entry: I=0x10299e660
23ms [I5c05bae9] {30829/307} [INFO] (connection - L:474) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (SOCK=0000000078a557c4) Starting. Timeout=75000000us
93ms [I5c05bae9] {30829/307} [TRACE] (connection - L:367) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (SOCK=0000000078a557c4) Received completion handler. Status=0. errno=0 [Undefined error: 0]
93ms [I5c05bae9] {30829/307} [INFO] (connection - L:147) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (SOCK=0000000078a557c4) Connected established
93ms [I5c05bae9] {30829/307} [INFO] (connection - L:108) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (SOCK=0000000078a557c4) Couldn't set TCP_NODELAY
93ms [I5c05bae9] {30829/307} [DEBUG] (connection - L:106) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (SOCK=0000000078a557c4) Successfully set TCP_KEEPALIVE
93ms [I5c05bae9] {30829/307} [DEBUG] (lcbio_mgr - L:318) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (HE=0x10299de80) Received result for I=0x10299e660,C=0x0; E=0x0
93ms [I5c05bae9] {30829/307} [DEBUG] (lcbio_mgr - L:278) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (HE=0x10299de80) Assigning R=0x10299e530 SOCKET=0x10299e830
93ms [I5c05bae9] {30829/307} [DEBUG] (ioctx - L:101) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (CTX=0x102a2bd80,unknown) Pairing with SOCK=0000000078a557c4
150ms [I5c05bae9] {30829/307} [TRACE] (n1ql - L:384) (NR=0x10299cef0) Will retry request. code: 13014, msg: User does not have credentials to run SELECT queries on the dp bucket. Add role query_select on dp to allow the query to run.
150ms [I5c05bae9] {30829/307} [TRACE] (http-io - L:249) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> POST http://ec2-18-207-241-164.compute-1.amazonaws.com:8093/query/service. Body=637 bytes
150ms [I5c05bae9] {30829/307} [DEBUG] (lcbio_mgr - L:448) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (HE=0x10299f7b0) Creating new connection because none are available in the pool
150ms [I5c05bae9] {30829/307} [TRACE] (lcbio_mgr - L:359) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (HE=0x10299f7b0) New pool entry: I=0x10299ff90
150ms [I5c05bae9] {30829/307} [INFO] (connection - L:474) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (SOCK=0000000037409e79) Starting. Timeout=75000000us
174ms [I5c05bae9] {30829/307} [DEBUG] (ioctx - L:151) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (CTX=0x102a2bd80,mgmt/capi) Destroying context. Pending Writes=0, Entered=true, Socket Refcount=1
174ms [I5c05bae9] {30829/307} [DEBUG] (lcbio_mgr - L:498) <ec2-54-221-188-124.compute-1.amazonaws.com:8093> (HE=0x10299de80) Placing socket back into the pool. I=0x10299e660,C=0x10299e830
211ms [I5c05bae9] {30829/307} [TRACE] (connection - L:367) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (SOCK=0000000037409e79) Received completion handler. Status=0. errno=0 [Undefined error: 0]
211ms [I5c05bae9] {30829/307} [INFO] (connection - L:147) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (SOCK=0000000037409e79) Connected established
211ms [I5c05bae9] {30829/307} [INFO] (connection - L:108) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (SOCK=0000000037409e79) Couldn't set TCP_NODELAY
211ms [I5c05bae9] {30829/307} [DEBUG] (connection - L:106) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (SOCK=0000000037409e79) Successfully set TCP_KEEPALIVE
211ms [I5c05bae9] {30829/307} [DEBUG] (lcbio_mgr - L:318) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (HE=0x10299f7b0) Received result for I=0x10299ff90,C=0x0; E=0x0
211ms [I5c05bae9] {30829/307} [DEBUG] (lcbio_mgr - L:278) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (HE=0x10299f7b0) Assigning R=0x10299fe60 SOCKET=0x1029a0160
211ms [I5c05bae9] {30829/307} [DEBUG] (ioctx - L:101) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (CTX=0x1029a01c0,unknown) Pairing with SOCK=0000000037409e79
286ms [I5c05bae9] {30829/307} [TRACE] (n1ql - L:384) (NR=0x10299cef0) Will retry request. code: 13014, msg: User does not have credentials to run SELECT queries on the dp bucket. Add role query_select on dp to allow the query to run.
{ Error: User does not have credentials to run SELECT queries on the dp bucket. Add role query_select on dp to allow the query to run.
at /Users/Mac/Documents/Development/projects/denied-persons-api/node_modules/couchbase/lib/bucket.js:906:17
requestID: 'dfeea5b1-fbe0-4b86-972b-7f2ea34fadf5',
code: 13014,
otherErrors: [] }
290ms [I5c05bae9] {30829/307} [DEBUG] (ioctx - L:151) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (CTX=0x1029a01c0,mgmt/capi) Destroying context. Pending Writes=0, Entered=true, Socket Refcount=1
290ms [I5c05bae9] {30829/307} [DEBUG] (lcbio_mgr - L:498) <ec2-18-207-241-164.compute-1.amazonaws.com:8093> (HE=0x10299f7b0) Placing socket back into the pool. I=0x10299ff90,C=0x1029a0160
'{"requestID":"dfeea5b1-fbe0-4b86-972b-7f2ea34fadf5","code":13014,"otherErrors":[]}'
Not sure if it helps me. We do hold a license, if that entitles us to an expedited response. I wouldn't even ask, but we have fallen several days behind our production release schedule because of this issue.
While you are waiting on a response from @brett19: looking at the log, this doesn't seem to be an authentication issue; it looks like an authorization issue.
I do see that you have given Full Admin access to a user account. Is your application (AWS Lambda) running under the sudo_lambda user account? Decoding the error, it looks like the user doesn't have the necessary RBAC role:
User does not have credentials to run SELECT queries on the dp bucket. Add role query_select on dp to allow the query to run
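If the role really were missing, it could be granted from the SDK as well as through the UI. A minimal sketch using the Node SDK 2.x cluster manager (the host, admin credentials, and user name are placeholders, and the upsertUser settings shape is my reading of the 2.x API):

const couchbase = require('couchbase');

const cluster = new couchbase.Cluster('couchbase://node1.example.com');
cluster.authenticate('Administrator', 'adminPassword');

// Grant the query_select role (plus any others needed) on the xyz bucket
const manager = cluster.manager('Administrator', 'adminPassword');
manager.upsertUser('appUser', {
  password: 'appPassword',
  roles: [
    { role: 'query_select', bucket_name: 'xyz' },
    { role: 'fts_searcher', bucket_name: 'xyz' },
  ],
}, function (err) {
  if (err) {
    console.error('Failed to update user:', err);
    return;
  }
  console.log('User roles updated');
});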
I gave the user full admin to rule out user permissions as an issue. The fact that the user account has full admin proves the error message is lying about what is really wrong. It is likely a server configuration issue not curable from within Couchbase. That is, I suspect it is a machine-level setting inappropriately configured in some way.
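One way to double-check what the server itself resolves for the app's credentials is the management API's /whoami endpoint, which reports the identity and roles of the authenticated caller. A sketch (host and credentials are placeholders):

const http = require('http');

// Ask the cluster which identity and roles it resolves for these credentials;
// the response lists every RBAC role the server believes the user holds.
http.get({
  host: 'node1.example.com',   // placeholder: any node's management address
  port: 8091,
  path: '/whoami',
  auth: 'appUser:appPassword', // the same credentials the Lambda uses
}, function (res) {
  let body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () { console.log(res.statusCode, body); });
});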
I tried doing exactly what you were doing, but my client was cbq (the Couchbase CLI) rather than a Node.js application. I set up a user account with admin access just like you had explained. Now, using cbq, I tried connecting to localhost:8093 and executing a query.
I got the same error as you did (however, I also got a message that I was connected). When I instead passed the credentials while connecting to the query service,
localhost:8093 -u=admin -p=password
and then executed the query, I was able to get results.
Apparently, the credentials are somehow not being passed in when the query is executed.
I had the same experience: the connection succeeded, but executing the query failed, just like yours.
From cbq, I passed in the user and password to establish the connection, and then the query worked.
I would assume the connection to the query service is missing the credentials or ignoring them, or maybe I am missing something here. Can you use Postman or another HTTP inspector to see what's passed in the body when the query service is called?
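Along those lines, you can take the SDK out of the picture entirely and POST the statement straight to the query service with explicit Basic auth, which is essentially what cbq does once -u/-p are supplied. A sketch (host, credentials, and statement are placeholders):

const http = require('http');

const body = JSON.stringify({ statement: 'SELECT * FROM `dp` LIMIT 1;' });

const req = http.request({
  host: 'node1.example.com',   // placeholder query-node address
  port: 8093,
  path: '/query/service',
  method: 'POST',
  auth: 'appUser:appPassword', // Basic auth, as cbq sends with -u/-p
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body),
  },
}, function (res) {
  let data = '';
  res.on('data', function (chunk) { data += chunk; });
  res.on('end', function () { console.log(res.statusCode, data); });
});

req.write(body);
req.end();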
I forgot to come back and answer my own question and provide my findings for the benefit of others who may also be experiencing the same issue I did.
My cluster configuration was built from a script which referenced each node by its private IP address. Since each node was reachable externally by its public IP address, and they were all within the same VPC (different Availability Zones, but the same VPC), I did not see why this would be an issue.
I updated the cluster by removing each node one by one, rebalancing, and then adding each back by its PUBLIC DNS name. This solved the issue.
SHRUG
I would really like to revert to the private-to-private interconnectivity if there is a way to accomplish this intelligently, because AWS claims throughput between private IP addresses is never subject to throttling, and that performance is dramatically improved under large-volume loads.
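For completeness, one avenue that might allow reverting to private addressing (an untested assumption on my part, not something verified on this cluster) is Couchbase Server's alternate-address support: register the nodes by their private IPs, publish the public DNS names as alternate addresses (e.g. via couchbase-cli setting-alternate-address in 6.x), and have clients outside the VPC opt in through the network connection-string option, which requires a client whose bundled libcouchbase is 2.9+. A sketch (host names and credentials are placeholders):

const couchbase = require('couchbase');

// Clients outside the VPC ask for the advertised (public) alternate addresses;
// clients inside the VPC can omit the option (or use network=default) and stay
// on the private IPs the nodes were registered with.
const cluster = new couchbase.Cluster(
  'couchbase://node1.example.com?network=external');
cluster.authenticate('appUser', 'appPassword');
const bucket = cluster.openBucket('dp');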