Couchbase Golang SDK Poor Performance Compared to UI, CBQ

Hello Community,

I’m new to Couchbase Server; in fact, my Couchbase server was installed 2 days ago.
My setup:

Couchbase Server Edition: Community
Statement: SELECT col1.* FROM poscore.master.brands col1 LIMIT 10

During simple testing, when using the UI or CBQ to run the statement, performance was excellent at 0.16 ms. However, when using the latest (v2.6.3) Golang SDK, performance is around 300 times slower (around 45 ms).

Even weirder, if you hit the endpoint 5 times within 2 seconds, the results are as follows:

  • 1st hit: 6 ms
  • 2nd hit: 46 ms
  • 3rd hit: 50 ms
  • 4th hit: 52 ms
  • 5th hit: 48 ms

The 6th hit and onward yield the same result.

Now, if you pause for 3 seconds after the 5th hit and then do another 5 hits, the first hit is always performant.

If a Golang Couchbase transaction is used, those query latencies roughly triple (to around 150–190 ms). The transaction path I measured looks roughly like the sketch below.
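
This is only a sketch, not my exact code (which I could not post), and it assumes an already-connected cluster:

// Run the same statement inside a transaction (gocb v2 transactions API).
_, err := cluster.Transactions().Run(func(ctx *gocb.TransactionAttemptContext) error {
	qr, err := ctx.Query("SELECT col1.* FROM poscore.master.brands col1 LIMIT 10", nil)
	if err != nil {
		return err
	}
	for qr.Next() {
		var row interface{}
		if err := qr.Row(&row); err != nil {
			return err
		}
	}
	return nil
}, nil)
if err != nil {
	log.Fatal(err)
}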

For comparison, GORM with MySQL and the official Go MongoDB driver only take 4–5 milliseconds.

I can’t wrap my head around this issue; it seems the Golang SDK incurs a heavy performance penalty. Am I missing something? Please help.

Are your client and server installed on the same machine or are you going over a network?

This might help show you where the extra time is being taken: Slow Operations Logging | Couchbase Docs

I get a consistent 1.5 ms with both the client and server on my MacBook Pro.
And the KV API is about 15x faster.

% cat m.go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/couchbase/gocb/v2"
)

func main() {
	connectionString := "localhost"
	bucketName := "travel-sample"
	username := "Administrator"
	password := "password"

	cluster, err := gocb.Connect("couchbase://"+connectionString, gocb.ClusterOptions{
		Authenticator: gocb.PasswordAuthenticator{
			Username: username,
			Password: password,
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	bucket := cluster.Bucket(bucketName)
	if err = bucket.WaitUntilReady(5*time.Second, nil); err != nil {
		log.Fatal(err)
	}

	inventoryScope := bucket.Scope("inventory")
	for i := 1; i < 100; i++ {
		// Time the Query call itself; rows are drained afterwards.
		start := time.Now()
		queryResult, err := inventoryScope.Query(
			"SELECT * FROM airline LIMIT 10",
			&gocb.QueryOptions{Adhoc: true},
		)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(time.Since(start))

		for queryResult.Next() {
			var result interface{}
			if err := queryResult.Row(&result); err != nil {
				log.Fatal(err)
			}
			//fmt.Println(result)
		}
		// Streaming errors only surface via Err() after iteration.
		if err := queryResult.Err(); err != nil {
			log.Fatal(err)
		}
	}
}
% go run m.go
21.700664ms
1.588754ms
3.103601ms
1.610726ms
1.41382ms
1.373763ms
1.454838ms
1.335919ms
1.337685ms
1.364094ms
1.419223ms
1.388663ms
1.411005ms
1.369731ms
1.297689ms
1.323006ms
1.319284ms
1.321261ms
1.448235ms
1.519019ms
1.462236ms
1.653454ms
2.054914ms
2.051859ms
2.47773ms
1.816781ms
1.604774ms
1.536251ms
1.485672ms
1.431302ms
1.376858ms
1.423803ms
2.636438ms
1.712302ms
1.551904ms
1.475938ms
1.536391ms
1.537221ms
1.61502ms
1.578345ms
1.646147ms
1.495581ms
% cat kv.go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/couchbase/gocb/v2"
)

type User struct {
	Name      string   `json:"name"`
	Email     string   `json:"email"`
	Interests []string `json:"interests"`
}

func main() {
	connectionString := "localhost"
	bucketName := "travel-sample"
	username := "Administrator"
	password := "password"

	cluster, err := gocb.Connect("couchbase://"+connectionString, gocb.ClusterOptions{
		Authenticator: gocb.PasswordAuthenticator{
			Username: username,
			Password: password,
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	bucket := cluster.Bucket(bucketName)
	if err = bucket.WaitUntilReady(5*time.Second, nil); err != nil {
		log.Fatal(err)
	}

	col := bucket.Scope("tenant_agent_00").Collection("users")

	// Seed a document so the timed Gets below always hit an existing key.
	_, err = col.Upsert("u:jade",
		User{
			Name:      "Jade",
			Email:     "jade@test-email.com",
			Interests: []string{"Swimming", "Rowing"},
		}, nil)
	if err != nil {
		log.Fatal(err)
	}

	for i := 1; i < 100; i++ {
		start := time.Now()
		getResult, err := col.Get("u:jade", nil)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(time.Since(start))

		var inUser User
		if err = getResult.Content(&inUser); err != nil {
			log.Fatal(err)
		}
	}
}
% go run kv.go
438.664µs
113.07µs
118.375µs
99.645µs
106.893µs
108.606µs
104.222µs
99.098µs
101.487µs
108.586µs
105.012µs
100.001µs
88.396µs
92.102µs
93.935µs
89.627µs
97.634µs
87.601µs
97.598µs
92.886µs
99.578µs
94.411µs
105.713µs
85.135µs
90.514µs
94.57µs
86.087µs
82.294µs
96.531µs
83.369µs
84.096µs
102.906µs
98.934µs
74.943µs
71.068µs
94.671µs
104.428µs
93.858µs
98.387µs
128.158µs

We are dealing with a similar issue. Can you try downgrading gocbcore to version 10.2.1? That helped in our case. We need some more time to investigate and will open an enterprise support ticket then. But in the meantime, maybe it helps with your problem too.

go get -u github.com/couchbase/gocbcore/v10@v10.2.1
go mod vendor
go mod tidy
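
To confirm which gocbcore version actually ends up in the build after the downgrade, standard Go tooling can be used, for example:

go list -m github.com/couchbase/gocbcore/v10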

Hi @pfaessler @miko, could you please confirm whether you are using prepared statements or not? (i.e. is Adhoc: true set in the query options?) The downgrade to gocbcore v10.2.1 fixing the regression for you is interesting (and not supported!), as GOCBC-1393: Invalidate n1ql cache entries on prepared failures · couchbase/gocbcore@41ffbfe · GitHub went into v10.2.2, so if prepared statements are being used then this could be related.
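
For clarity, the option in question looks like this (a sketch; scope stands for any scope object from the snippets in this thread):

res, err := scope.Query("SELECT * FROM airline LIMIT 10", &gocb.QueryOptions{
	// Adhoc: true  -> the statement is sent as-is on every execution.
	// Adhoc: false -> (the default) the SDK prepares the statement and reuses the cached plan.
	Adhoc: false,
})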

Are your client and server installed on the same machine or are you going over a network?

The client runs on Windows 10 22H2 (bare metal), while Couchbase (and the other DBs) runs inside WSL with Ubuntu 22, so there is no added network latency.

This might help show you where the extra time is being taken: Slow Operations Logging | Couchbase Docs

	tracerOpts := &gocb.ThresholdLoggingOptions{
		Interval:        5 * time.Second,
		SampleSize:      10,
		QueryThreshold:  10 * time.Millisecond,
		SearchThreshold: 10 * time.Millisecond,
	}

Where is this logging output directed/written to? Since there is no “path” option, I assumed the logging output would be shown on the console, but nothing appears there.

I get consistently 1.5ms. With both the client and server on my MacBook Pro.
And the kv api is about 15x faster.

My KV operations show great performance and consistency, unlike Query. Sorry, I missed mentioning that in the first post.

We are dealing with a similar issue. Can you try downgrade gocbcore to version 10.2.1?

I downgraded to gocbcore 10.2.1 with 2.6.1 per the suggestion, but saw no performance improvement.

could you please confirm if you are using prepared statements or not? (i.e. is Adhoc: true set in the query options)

I followed Golang’s getting-started guide for the test, with Adhoc set to true. No complex code has been added yet, since the motivation here is to test Couchbase performance.

Note: sorry for not being able to post the complete code here; I get a 403 error whenever I add code or an image.

Threshold logging output is logged at INFO level using the logger registered with the SDK. Logging | Couchbase Docs - tl;dr: gocb.SetLogger(gocb.DefaultStdioLogger()) will log to the console at DEBUG level.
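
Wiring both together might look like this (a minimal sketch, reusing the connection details from the snippets above and the tracer options you posted):

// Register a console logger; threshold output is emitted through it.
gocb.SetLogger(gocb.DefaultStdioLogger())

cluster, err := gocb.Connect("couchbase://"+connectionString, gocb.ClusterOptions{
	Authenticator: gocb.PasswordAuthenticator{Username: username, Password: password},
	// Use a threshold tracer so slow operations get reported periodically.
	Tracer: gocb.NewThresholdLoggingTracer(&gocb.ThresholdLoggingOptions{
		Interval:        5 * time.Second,
		SampleSize:      10,
		QueryThreshold:  10 * time.Millisecond,
		SearchThreshold: 10 * time.Millisecond,
	}),
})
if err != nil {
	log.Fatal(err)
}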

Thanks, @chvck.

Threshold Log (Without Transaction) is as follows:

{
  "query": {
    "total_count": 5,
    "top_requests": [
      {
        "operation_name": "query",
        "total_duration_us": 58921,
        "total_dispatch_duration_us": 58210,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 58210,
        "operation_id": "b784c4a5-08d5-425c-9a83-8b2323174f2c"
      },
      {
        "operation_name": "query",
        "total_duration_us": 55733,
        "total_dispatch_duration_us": 55043,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 55043,
        "operation_id": "d8c3c996-fe67-409d-9217-ff083107eb3d"
      },
      {
        "operation_name": "query",
        "total_duration_us": 48680,
        "total_dispatch_duration_us": 48137,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 48137,
        "operation_id": "cf4d5737-c51b-4ff1-a242-f75e41bd661e"
      },
      {
        "operation_name": "query",
        "total_duration_us": 48069,
        "total_dispatch_duration_us": 47509,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 47509,
        "operation_id": "29b4eb68-457c-42f9-b0ed-f0403ed3a19d"
      },
      {
        "operation_name": "query",
        "total_duration_us": 47237,
        "total_dispatch_duration_us": 47000,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 47000,
        "operation_id": "f655f74f-086f-4a79-a36d-5d24652221d6"
      }
    ]
  }
}

Threshold Log (WITH Transaction) is as follows:

{
  "query": {
    "total_count": 10,
    "top_requests": [
      {
        "operation_name": "query",
        "total_duration_us": 61242,
        "total_dispatch_duration_us": 61242,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 61242,
        "operation_id": "4c6c5aaa-665c-46c3-9204-3cdd888651f8"
      },
      {
        "operation_name": "query",
        "total_duration_us": 58327,
        "total_dispatch_duration_us": 57661,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 57661,
        "operation_id": "cba136a1-64dc-40db-8c0a-000cd8d694a0"
      },
      {
        "operation_name": "query",
        "total_duration_us": 58162,
        "total_dispatch_duration_us": 57597,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 57597,
        "operation_id": "64de2b9f-4a20-4265-8df6-819e6e944c68"
      },
      {
        "operation_name": "query",
        "total_duration_us": 52566,
        "total_dispatch_duration_us": 51572,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 51572,
        "operation_id": "67d24d4b-c337-4ec2-ae2e-cb2f198014e1"
      },
      {
        "operation_name": "query",
        "total_duration_us": 52052,
        "total_dispatch_duration_us": 52052,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 52052,
        "operation_id": "93478967-d2e3-4147-8e92-a8fee7e7178f"
      },
      {
        "operation_name": "query",
        "total_duration_us": 51317,
        "total_dispatch_duration_us": 51317,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 51317,
        "operation_id": "66ae69af-4dcb-4504-8d96-b79a39a42acd"
      },
      {
        "operation_name": "query",
        "total_duration_us": 51297,
        "total_dispatch_duration_us": 51297,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 51297,
        "operation_id": "0c9ba0a8-2400-431c-9315-f6b8f4bd55e7"
      },
      {
        "operation_name": "query",
        "total_duration_us": 51218,
        "total_dispatch_duration_us": 51218,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 51218,
        "operation_id": "9157b8a4-80e2-47cc-8467-d09c83af1db1"
      },
      {
        "operation_name": "query",
        "total_duration_us": 51059,
        "total_dispatch_duration_us": 51059,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 51059,
        "operation_id": "fbad927c-6fb4-4304-a6c1-51b485a191ef"
      },
      {
        "operation_name": "query",
        "total_duration_us": 51051,
        "total_dispatch_duration_us": 51051,
        "last_remote_socket": "127.0.0.1:8093",
        "last_dispatch_duration_us": 51051,
        "operation_id": "28e21ba8-7d91-4c04-a801-da5950ea2d4f"
      }
    ]
  }
}

Although the threshold log shows the query with a transaction taking ~50 ms, when measured from the gRPC client (request to response) it took 150 ms to complete.

Sorry about this, I’m looking into why you might be getting an error, but could you perhaps post a gist/GitHub link to the code? It might also make sense at this point to file an issue on the GitHub repo for the Go SDK: GitHub - couchbase/gocb: The Couchbase Go SDK

I updated your trust level, can you try reposting the code here?

I just figured out what the problem was: apparently WSL or Ubuntu (not yet determined which) was causing the performance issue. I moved Couchbase Server out of the WSL container onto bare-metal Windows. Since then, performance has been excellent, taking only < 5 ms to respond.

Still, my other DBs run just fine inside the WSL Ubuntu container, but that is not a Golang SDK issue.

My issue is resolved.

If you log the metrics metadata of query results (to debug where the issue is), it gives elapsedTime (the time from when the request is received to its completion by the query service) and executionTime (the time from when the request starts executing to its completion by the query service). These are query-service-side measurements.
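
A rough sketch of reading those metrics back in the Go SDK (assuming the inventoryScope from the m.go snippet above, with Metrics: true set in the options):

res, err := inventoryScope.Query("SELECT * FROM airline LIMIT 10",
	&gocb.QueryOptions{Adhoc: true, Metrics: true})
if err != nil {
	log.Fatal(err)
}
// Rows must be fully consumed before MetaData() becomes available.
for res.Next() {
	var row interface{}
	if err := res.Row(&row); err != nil {
		log.Fatal(err)
	}
}
meta, err := res.MetaData()
if err != nil {
	log.Fatal(err)
}
fmt.Println("elapsedTime:", meta.Metrics.ElapsedTime)     // request received -> completion
fmt.Println("executionTime:", meta.Metrics.ExecutionTime) // execution start -> completion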

If you know the document keys, always use the KV API to avoid an index scan and two hops of data transfer. A small example follows.
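
For example (a sketch; col is the collection from the kv.go snippet above, and the second key is made up for illustration):

// Fetching known keys directly over KV skips the query service and index scan.
for _, key := range []string{"u:jade", "u:john"} {
	res, err := col.Get(key, nil)
	if err != nil {
		log.Fatal(err)
	}
	var doc interface{}
	if err := res.Content(&doc); err != nil {
		log.Fatal(err)
	}
}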

The CE (Community Edition) query service is limited to a maximum of 4 cores (irrespective of how many cores the host has).

Here is a snippet of our code:

n1qlQuery := `SELECT content.metadataId FROM seo.internal.queue USE INDEX (idxSeoProcessJobs) WHERE content.date > "" ORDER BY content.date DESC LIMIT 2000`
timeout := 30

rows, err := s…Query(n1qlQuery, &gocb.QueryOptions{
	Timeout:         time.Duration(timeout) * time.Second,
	Metrics:         true,
	ScanConsistency: queryConsistency,
	NamedParameters: params,
})

With gocbcore 10.2.1 it works; with 10.2.2 and newer we get “EOF” returned for about 1% of the queries. Adhoc true/false makes no difference. The parameters are empty, and I know they are not needed here, but they should work anyway, I think.
