Remote cluster creation issue for replication with Elasticsearch 5.6.16

Hello again,

I am having serious trouble figuring this out. I've spent way too much time on it already. Here is the gist.

{"couchdb":"Welcome","version":"v4.5.1-108-g45731d9","couchbase":"6.0.0-1693-community"}

ES version 5.6.16

connector version 4.1

I am trying to set up a simple remote cluster reference, just as I did with the plugin and previous versions of ES/CB.

My prior ES version was 5.4 and CB version was 5.1, and I was using the correct associated plugin for those versions. It worked great.

couchbase.password: ******
couchbase.username: Administrator
couchbase.port: 9091
couchbase.maxConcurrentRequests: 1024
http.cors.enabled: true
http.cors.allow-origin : /http://localhost(:[0-9]+)?/
couchbase.ignoreFailures: true
couchbase.keyFilter: org.elasticsearch.transport.couchbase.capi.RegexKeyFilter
couchbase.keyFilter.type: exclude
couchbase.keyFilter.keyFiltersRegex.Histoty: ^Histo.*$

Now with the connector version I cannot figure out how to set up the remote cluster reference port. I've tried 9300, and I've tried adding the settings above to the new YAML and/or changing the TOML file.

Here is what is listening:

systemd-r 529 systemd-resolve 13u IPv4 14196 0t0 TCP 127.0.0.53:53 (LISTEN)
cupsd 606 root 6u IPv6 26156 0t0 TCP [::1]:631 (LISTEN)
cupsd 606 root 7u IPv4 26157 0t0 TCP 127.0.0.1:631 (LISTEN)
node 608 kibana 10u IPv4 29903 0t0 TCP 127.0.0.1:5601 (LISTEN)
beam.smp 920 couchbase 7u IPv4 28828 0t0 TCP *:21100 (LISTEN)
epmd 1306 couchbase 3u IPv4 27131 0t0 TCP *:4369 (LISTEN)
epmd 1306 couchbase 4u IPv6 27132 0t0 TCP *:4369 (LISTEN)
beam.smp 1414 couchbase 25u IPv4 31756 0t0 TCP *:21101 (LISTEN)
beam.smp 1414 couchbase 35u IPv4 34136 0t0 TCP *:8091 (LISTEN)
beam.smp 1681 couchbase 8u IPv4 32806 0t0 TCP *:21102 (LISTEN)
beam.smp 1681 couchbase 19u IPv4 34129 0t0 TCP *:8092 (LISTEN)
projector 1917 couchbase 6u IPv6 31487 0t0 TCP *:9999 (LISTEN)
indexer 1933 couchbase 6u IPv6 34215 0t0 TCP *:9101 (LISTEN)
indexer 1933 couchbase 10u IPv6 32007 0t0 TCP *:9100 (LISTEN)
indexer 1933 couchbase 11u IPv6 31515 0t0 TCP *:9102 (LISTEN)
indexer 1933 couchbase 35u IPv6 32025 0t0 TCP *:9105 (LISTEN)
cbq-engin 1951 couchbase 7u IPv6 31494 0t0 TCP *:8093 (LISTEN)
goxdcr 1968 couchbase 9u IPv4 34542 0t0 TCP 127.0.0.1:9998 (LISTEN)
cbft 1983 couchbase 5u IPv6 32938 0t0 TCP *:8094 (LISTEN)
memcached 1995 couchbase 10u IPv4 34187 0t0 TCP *:11210 (LISTEN)
memcached 1995 couchbase 11u IPv6 34188 0t0 TCP *:11210 (LISTEN)
memcached 1995 couchbase 12u IPv4 34191 0t0 TCP *:11209 (LISTEN)
memcached 1995 couchbase 13u IPv6 34192 0t0 TCP *:11209 (LISTEN)
expressvp 2075 root 6u IPv4 33516 0t0 TCP 127.0.0.1:2015 (LISTEN)
expressvp 3241 adtracktiv 5u IPv4 36848 0t0 TCP 127.0.0.1:36785 (LISTEN)
code 17666 adtracktiv 59u IPv4 314061 0t0 TCP 127.0.0.1:32888 (LISTEN)
java 22056 elasticsearch 152u IPv6 408569 0t0 TCP [::1]:9300 (LISTEN)
java 22056 elasticsearch 154u IPv6 410625 0t0 TCP 127.0.0.1:9300 (LISTEN)
java 22056 elasticsearch 175u IPv6 406428 0t0 TCP [::1]:9200 (LISTEN)
java 22056 elasticsearch 180u IPv6 408908 0t0 TCP 127.0.0.1:9200 (LISTEN)
java 22956 adtracktiv 151u IPv6 422450 0t0 TCP *:31415 (LISTEN)

And this is my TOML:

# Introduction to TOML config file format:
#   https://npf.io/2014/08/intro-to-toml/

# The work of replicating a Couchbase bucket may be distributed between
# several connector worker processes, known collectively as a "group".
#
# The connector name uniquely identifies a group of connector workers.
# All workers in the group must use the same name. The group name
# is required even if there is only one worker.
#
# TIP: Changing the name invalidates the replication status checkpoint.
# To preserve the checkpoint, run the `cbes-checkpoint-backup` command
# before renaming, and `cbes-checkpoint-restore` afterwards.
[group]
  name = 'example-group'

# The simplest way to deploy the connector is "static" mode. In this mode,
# each worker process is independent of the others, and must be manually
# configured to know which partitions it is responsible for.
[group.static]
  # Each member in a static group must have a unique member number.
  memberNumber = 1 # A value from 1 to 'totalMembers', inclusive.

  # The total number of workers in the static group.
  #
  # TIP: To safely add or remove workers from a static group, first stop all
  # running workers, then reconfigure them with the new 'totalMembers' value,
  # and finally start all the workers again.
  totalMembers = 1

# Performance metrics may be written to a log file and/or exposed via HTTP.
[metrics]
  httpPort = 31415 # Positive = fixed, 0 = ephemeral, -1 = disable HTTP server
  logInterval = '1m' # Set to '0m' to disable metrics logging

# Required if secure connections are enabled (SSL/TLS).
[truststore]
  # Use the Java keytool command to create this file and import the root
  # certificate from the Security section of the Couchbase admin console
  # and/or the CA certificate used by Elasticsearch.
  path = 'path/to/truststore'
  pathToPassword = 'secrets/truststore-password.toml'

[couchbase]
  hosts = ['localhost', '127.0.0.1:9300']
  network = 'auto'
  bucket = 'bankable'

  # By default the connector stores replication checkpoint documents
  # in the source bucket. If you'd rather store those documents in a
  # separate existing bucket, put that bucket's name here.
  metadataBucket = ''

  # At a minimum, the Couchbase user must have the "Data DCP Reader"
  # role for the source bucket, and "Data Reader" & "Data Writer"
  # roles for the bucket where replication checkpoints are stored.
  username = 'Administrator'

  # Path to a separate TOML file with a single 'password' key.
  # The base for a relative path is the connector installation directory.
  pathToPassword = 'secrets/couchbase-password.toml'

  # Optionally use SSL/TLS to encrypt the connection between Couchbase and
  # the connector. Requires Couchbase Enterprise Edition, and requires the
  # [truststore] section to be configured.
  secureConnection = false

# Options for the Couchbase Database Change Protocol (DCP).
[couchbase.dcp]
  compression = true
  flowControlBuffer = '128mb'
  persistencePollingInterval = '100ms'

[elasticsearch]
  hosts = ['localhost', '127.0.0.1:9300', '127.0.0.1:9091']
  username = 'elastic'

  # Path to a separate TOML file with a single 'password' key.
  # The base for a relative path is the connector installation directory.
  pathToPassword = 'secrets/elasticsearch-password.toml'

  # Optionally use SSL/TLS to encrypt the connection between Elasticsearch and
  # the connector. Requires the [truststore] section to be configured. See also:
  # https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html
  secureConnection = false

[elasticsearch.bulkRequestLimits]
  bytes = '10mb'
  actions = 1000
  timeout = '1m'
  concurrentRequests = 2

[elasticsearch.docStructure]
  # The Elasticsearch document may optionally contain Couchbase metadata
  # (cas, revision, expiry, etc). If present, this will be a top-level field
  # of the Elasticsearch document. If you have configured the connector
  # to put document content at the top level, choose a field name that will
  # not conflict with any document fields. To omit the metadata field,
  # use an empty string.
  metadataFieldName = 'meta'

  # false means the ES document root will have a 'doc' field whose value is
  # the Couchbase document.
  # true means the ES document will be identical to the Couchbase document,
  # with the possible addition of the metadata field.
  documentContentAtTopLevel = false

  # false means ignore Couchbase counter documents.
  # true means replicate them as Object nodes like {"value":<counter>}
  wrapCounters = false

# The values in this section may be overridden by specific type definitions.
[elasticsearch.typeDefaults]
  # Write matching documents to this index.
  # Empty string means "no default".
  index = ''

  # Send matching documents through this pipeline.
  # Empty string means "no pipeline".
  pipeline = ''

  # Assign this type to matching documents.
  # For ES 5.x remove leading underscore!
  typeName = 'doc'

  # If true, ignore matching documents entirely (implies 'ignoreDeletes').
  ignore = false

  # If true, never delete matching documents from Elasticsearch.
  ignoreDeletes = false

# Sample document type definitions for the travel-sample bucket.
# Replace these to match your own data model.
#
# At a minimum, each type must specify a 'regex' (Java regular expression)
# or 'prefix' property. The type config matches any Couchbase document
# whose ID matches the regex or prefix.
#
# The order of type definitions is significant. If a document ID matches
# more than one type, the assigned type will be the one that occurs first
# in this configuration. If none match, the document will not be indexed.
[[elasticsearch.type]]
  prefix = '_sync:' # Couchbase Sync Gateway metadata
  ignore = true

[[elasticsearch.type]]
  prefix = 'airline_'
  index = 'airlines'
  pipeline = ''

[[elasticsearch.type]]
  # Regex just for example. Matches prefixes "airport_", "seaport_", etc.
  regex = '.*port_.*'
  index = 'airports'

[[elasticsearch.type]]
  # Index can be inferred from document ID by including a capturing group
  # named "index". This example matches IDs that start with one or more
  # characters followed by "::". It directs "user::alice" to index "user",
  # and "foo::bar::123" to index "foo".
  regex = '(?<index>.+?)::.*'

# In the travel-sample data model, a route is the child of an airline.
# Each route document has an `airlineid` field that holds the parent ID.
# The following type definition ensures each route document is stored
# in the same index and shard as its parent airline document.
# For more information about Elasticsearch parent/join fields, see
# https://www.elastic.co/guide/en/elasticsearch/reference/current/parent-join.html
[[elasticsearch.type]]
  prefix = 'route_'
  index = 'airlines'
  routing = '/airlineid' # JSON pointer to the parent ID field.
  ignoreDeletes = true # Must always be true if `routing` is specified.

[[elasticsearch.type]]
  prefix = '' # Empty prefix matches any document ID.
  index = 'etc'

# When Elasticsearch rejects a document (usually due to a type mapping error)
# a rejection log entry document is written to Elasticsearch. The log entry's
# document ID is the ID of the rejected Couchbase document. The log entry's
# content has the following fields:
#   "index"  - (string) name of the index the connector tried to write to
#   "type"   - (string) document type name used for the write attempt
#   "action" - (string) failed action type ("INDEX" or "DELETE")
#   "error"  - (string) error message received from Elasticsearch
[elasticsearch.rejectionLog]
  index = 'cbes-rejects'
  typeName = 'doc' # For ES 5.x remove leading underscore!

"Please help me, Obi-Wan Kenobi. You're my only hope."

Hi Cain,

Version 4.x of the connector doesn’t pretend to be a remote Couchbase cluster like the 3.x plugin did. Instead it’s completely standalone. In the connector config just plug in the hosts and credentials for Couchbase and Elasticsearch and then fire it up.

[couchbase]
hosts = ['localhost', '127.0.0.1:9300']

That 9300 looks suspicious… isn’t that an Elasticsearch port?

If Couchbase and Elasticsearch are both using default ports, you should be able to omit the port numbers in both lists of hosts.
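For example, here's a minimal sketch of the relevant parts of the config (the bucket name is borrowed from your TOML; everything else assumes default ports, so no port numbers at all):

```toml
[couchbase]
  # The cluster manager port defaults to 8091; the hostname alone is enough.
  hosts = ['localhost']
  bucket = 'bankable'
  username = 'Administrator'
  pathToPassword = 'secrets/couchbase-password.toml'

[elasticsearch]
  # The HTTP port defaults to 9200. Don't use 9300 -- that's the internal
  # transport port, which the connector doesn't speak.
  hosts = ['localhost']
```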

Hi David,

I left the credentials and hosts at their defaults. The connector is started via the cbes terminal command. Both CB and ES are running, but when I head into the CB web portal and try to create a replication to my ES index/bucket, I get errors. In past versions using the plugin, I would curl 127.0.0.1:9091 when adding the remote cluster, then just add the replication to ES. I made sure I created the index prior to starting the replication, but it just says "Can't find bucket". I don't think the default port, 8091, is correct; that's the Couchbase port.

Side note: interestingly, in the past my CB bucket name and ES index/bucket were both named the same, and it worked. Seems like this is no longer feasible?
I am PUTting the index via Kibana to create it.
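For the record, this is roughly what I run in the Kibana Dev Tools console to create it (the index name is mine; the settings are just a single-node example):

```
PUT /bankable
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}
```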

Appreciate your help. This seems like a really silly error, and I'm probably missing something very simple.

Also, not sure if this is helpful or not, but here are the metrics from localhost:31415:

{
version: "4.0.0",
gauges: {
cbes.backfill: {
value: 1024
},
cbes.backfillEstTimeLeft: {
value: "0s"
},
cbes.backfillRemaining: {
value: 0
},
cbes.connectorVersion: {
value: "4.1.0"
},
cbes.couchbaseVersion: {
value: "6.0.0"
},
cbes.elasticsearchVersion: {
value: "5.6.16"
},
cbes.esWaitMs: {
value: 0
},
cbes.groupMembership: {
value: "1/1"
},
cbes.writeQueue: {
value: 0
},
jvm.attr.name: {
value: "31047@adtracktivDev"
},
jvm.attr.uptime: {
value: 17735
},
jvm.attr.vendor: {
value: "Private Build OpenJDK 64-Bit Server VM 25.222-b10 (1.8)"
},
jvm.buffer.direct.capacity: {
value: 8192
},
jvm.buffer.direct.count: {
value: 2
},
jvm.buffer.direct.used: {
value: 8193
},
jvm.buffer.mapped.capacity: {
value: 0
},
jvm.buffer.mapped.count: {
value: 0
},
jvm.buffer.mapped.used: {
value: 0
},
jvm.gc.PS-MarkSweep.count: {
value: 2
},
jvm.gc.PS-MarkSweep.time: {
value: 119
},
jvm.gc.PS-Scavenge.count: {
value: 6
},
jvm.gc.PS-Scavenge.time: {
value: 48
},
jvm.mem.heap.committed: {
value: 315097088
},
jvm.mem.heap.init: {
value: 262144000
},
jvm.mem.heap.max: {
value: 3715629056
},
jvm.mem.heap.usage: {
value: 0.01687294158138912
},
jvm.mem.heap.used: {
value: 62693592
},
jvm.mem.non-heap.committed: {
value: 61161472
},
jvm.mem.non-heap.init: {
value: 2555904
},
jvm.mem.non-heap.max: {
value: -1
},
jvm.mem.non-heap.usage: {
value: -57896896
},
jvm.mem.non-heap.used: {
value: 57896960
},
jvm.mem.pools.Code-Cache.committed: {
value: 14155776
},
jvm.mem.pools.Code-Cache.init: {
value: 2555904
},
jvm.mem.pools.Code-Cache.max: {
value: 251658240
},
jvm.mem.pools.Code-Cache.usage: {
value: 0.049417877197265626
},
jvm.mem.pools.Code-Cache.used: {
value: 12437184
},
jvm.mem.pools.Compressed-Class-Space.committed: {
value: 5283840
},
jvm.mem.pools.Compressed-Class-Space.init: {
value: 0
},
jvm.mem.pools.Compressed-Class-Space.max: {
value: 1073741824
},
jvm.mem.pools.Compressed-Class-Space.usage: {
value: 0.004606142640113831
},
jvm.mem.pools.Compressed-Class-Space.used: {
value: 4945808
},
jvm.mem.pools.Metaspace.committed: {
value: 41721856
},
jvm.mem.pools.Metaspace.init: {
value: 0
},
jvm.mem.pools.Metaspace.max: {
value: -1
},
jvm.mem.pools.Metaspace.usage: {
value: 0.9712522856126056
},
jvm.mem.pools.Metaspace.used: {
value: 40522448
},
jvm.mem.pools.PS-Eden-Space.committed: {
value: 125829120
},
jvm.mem.pools.PS-Eden-Space.init: {
value: 66060288
},
jvm.mem.pools.PS-Eden-Space.max: {
value: 1365770240
},
jvm.mem.pools.PS-Eden-Space.usage: {
value: 0.029587635472273874
},
jvm.mem.pools.PS-Eden-Space.used: {
value: 40409912
},
jvm.mem.pools.PS-Eden-Space.used-after-gc: {
value: 0
},
jvm.mem.pools.PS-Old-Gen.committed: {
value: 176685056
},
jvm.mem.pools.PS-Old-Gen.init: {
value: 175112192
},
jvm.mem.pools.PS-Old-Gen.max: {
value: 2786590720
},
jvm.mem.pools.PS-Old-Gen.usage: {
value: 0.005969209572333608
},
jvm.mem.pools.PS-Old-Gen.used: {
value: 16633744
},
jvm.mem.pools.PS-Old-Gen.used-after-gc: {
value: 16617360
},
jvm.mem.pools.PS-Survivor-Space.committed: {
value: 12582912
},
jvm.mem.pools.PS-Survivor-Space.init: {
value: 10485760
},
jvm.mem.pools.PS-Survivor-Space.max: {
value: 12582912
},
jvm.mem.pools.PS-Survivor-Space.usage: {
value: 0.4490165710449219
},
jvm.mem.pools.PS-Survivor-Space.used: {
value: 5649936
},
jvm.mem.pools.PS-Survivor-Space.used-after-gc: {
value: 5649936
},
jvm.mem.total.committed: {
value: 376258560
},
jvm.mem.total.init: {
value: 264699904
},
jvm.mem.total.max: {
value: 3715629055
},
jvm.mem.total.used: {
value: 120605040
},
jvm.thread.blocked.count: {
value: 0
},
jvm.thread.count: {
value: 39
},
jvm.thread.daemon.count: {
value: 27
},
jvm.thread.deadlock.count: {
value: 0
},
jvm.thread.deadlocks: {
value: [ ]
},
jvm.thread.new.count: {
value: 0
},
jvm.thread.runnable.count: {
value: 14
},
jvm.thread.terminated.count: {
value: 0
},
jvm.thread.timed_waiting.count: {
value: 15
},
jvm.thread.waiting.count: {
value: 10
}
},
counters: { },
histograms: { },
meters: {
cbes.bulkRetry: {
count: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
units: "events/second"
},
cbes.docRejected: {
count: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
units: "events/second"
},
cbes.docWriteRetry: {
count: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
units: "events/second"
},
cbes.esConnFail: {
count: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
units: "events/second"
},
cbes.rejectionLogFail: {
count: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
units: "events/second"
},
cbes.saveStateFail: {
count: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
units: "events/second"
},
cbes.throughputBytes: {
count: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
units: "events/second"
}
},
timers: {
cbes.bulkIndexPerDoc: {
count: 0,
max: 0,
mean: 0,
min: 0,
p50: 0,
p75: 0,
p95: 0,
p98: 0,
p99: 0,
p999: 0,
stddev: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
duration_units: "milliseconds",
rate_units: "calls/second"
},
cbes.latency: {
count: 0,
max: 0,
mean: 0,
min: 0,
p50: 0,
p75: 0,
p95: 0,
p98: 0,
p99: 0,
p999: 0,
stddev: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
duration_units: "milliseconds",
rate_units: "calls/second"
},
cbes.newIndexReq: {
count: 0,
max: 0,
mean: 0,
min: 0,
p50: 0,
p75: 0,
p95: 0,
p98: 0,
p99: 0,
p999: 0,
stddev: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
duration_units: "milliseconds",
rate_units: "calls/second"
},
cbes.retryDelay: {
count: 0,
max: 0,
mean: 0,
min: 0,
p50: 0,
p75: 0,
p95: 0,
p98: 0,
p99: 0,
p999: 0,
stddev: 0,
m15_rate: 0,
m1_rate: 0,
m5_rate: 0,
mean_rate: 0,
duration_units: "milliseconds",
rate_units: "calls/second"
}
}
}

And here are the terminal logs after I run the connector:

09:51:25.623 [main] INFO  c.c.c.e.ElasticsearchConnector - Couchbase Elasticsearch Connector version 4.1.0
Reading connector configuration from /home/adtracktiv/couchbase-elasticsearch-connector-4.1.0/config/default-connector.toml
09:51:26.034 [main] INFO  c.c.c.e.ElasticsearchConnector - Read configuration: ConnectorConfig{couchbase=CouchbaseConfig{hosts=[localhost], network=NetworkResolution{name='auto'}, username=Administrator, bucket=bankable, metadataBucket=bankable, secureConnection=false, dcp=DcpConfig{compression=ENABLED, persistencePollingInterval=100ms, flowControlBuffer=128mb}}, elasticsearch=ElasticsearchConfig{hosts=[http://localhost:9200], username=elastic, secureConnection=false, bulkRequest=BulkRequestConfig{maxActions=1000, maxBytes=10mb, concurrentRequests=2, timeout=1m}, docStructure=DocStructureConfig{metadataFieldName=meta, documentContentAtTopLevel=false, wrapCounters=false}, types=[TypeConfig{type=doc, ignore=true, ignoreDeletes=false, matcher=prefix='_sync:'}, TypeConfig{index=airlines, type=doc, ignore=false, ignoreDeletes=false, matcher=prefix='airline_'}, TypeConfig{index=airports, type=doc, ignore=false, ignoreDeletes=false, matcher=regex='.*port_.*'}, TypeConfig{type=doc, ignore=false, ignoreDeletes=false, matcher=regex='(?<index>.+?)::.*'}, TypeConfig{index=airlines, type=doc, routing=/airlineid, ignore=false, ignoreDeletes=true, matcher=prefix='route_'}, TypeConfig{index=etc, type=doc, ignore=false, ignoreDeletes=false, matcher=prefix=''}], rejectLog=RejectLogConfig{index=cbes-rejects, typeName=doc}}, metrics=MetricsConfig{logInterval=1m, httpPort=31415}, group=GroupConfig{name=example-group, staticMembership=1/1}, trustStore=TrustStoreConfig{path=path/to/truststore}}
09:51:26.523 [main] INFO  c.c.c.u.HttpServer - HTTP server listening at http://localhost:31415
09:51:26.523 [main] INFO  c.c.c.e.ElasticsearchConnector - Metrics available at http://localhost:31415/metrics?pretty
09:51:26.813 [main] INFO  c.c.c.e.ElasticsearchConnector - Elasticsearch version 5.6.16
09:51:27.687 [main] INFO  c.c.c.e.ElasticsearchConnector - Couchbase Server version 6.0.0
09:51:27.881 [main] INFO  c.c.c.d.Client - Environment Configuration Used: ClientEnvironment{clusterAt=[HostAndPort{host='localhost', port=8091}], connectionNameGenerator=DefaultConnectionNameGenerator, bucket='bankable', dcpControl=DcpControl{{connection_buffer_size=134217728, set_noop_interval=20, enable_noop=true}}, eventLoopGroup=NioEventLoopGroup, eventLoopGroupIsPrivate=true, poolBuffers=true, bufferAckWatermark=60, connectTimeout=10000, bootstrapTimeout=5000, sslEnabled=false, sslKeystoreFile='null', sslKeystorePassword=false, sslKeystore=null}
09:51:27.884 [main] INFO  c.c.c.d.Client - Connecting to seed nodes and bootstrapping bucket bankable.
09:51:27.908 [nioEventLoopGroup-5-1] INFO  c.c.c.d.t.n.ConfigHandler - Selected network: auto -> <default>
09:51:27.934 [main] INFO  c.c.c.e.ElasticsearchConnector - Bucket has 1024 partitions. Membership = 1/1
09:51:27.992 [nioEventLoopGroup-5-2] INFO  c.c.c.d.c.DcpChannel - Connected to Node /127.0.0.1:11210
09:51:28.398 [main] INFO  c.c.c.d.CheckpointService - Initializing checkpoint service with backfill target seqnos: [1696, 1896, 1612, 147

“Version 4.x of the connector doesn’t pretend to be a remote Couchbase cluster like the 3.x plugin did. Instead it’s completely standalone.”

Does this mean I do not need to set up the replication via the CB web portal's XDCR tab?

That’s correct.

Have you looked in your Elasticsearch index? The fact that cbes.backfill is 1024 and cbes.backfillRemaining is 0 makes me think the connector might already have copied over those 1024 documents from Couchbase to Elasticsearch.

If not, take another look at the Getting Started documentation for 4.x. It includes all the steps necessary to set up the 4.x connector.

OK, so I found the issue. I didn't understand that the connector now replicates the documents directly, without any need for an XDCR replication in the web portal. I also wasn't seeing any data in my created ES index because I hadn't changed this setting in the connector's TOML config:

[[elasticsearch.type]]
  prefix = '' # Empty prefix matches any document ID.
  index = 'bankable' # changed from 'etc' to my index name

So all the documents were being thrown into the newly generated 'etc' index. Whew, glad that's over.
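To illustrate for anyone else who hits this: here's a rough Python sketch (my own illustration, not the connector's actual code) of the first-match-wins routing defined by the type definitions in the TOML above. Note that Python spells the named group `(?P<index>...)` where Java uses `(?<index>...)`:

```python
import re

# Type definitions are tried in order; the first match wins.
# These mirror the [[elasticsearch.type]] entries from the sample config.
TYPE_DEFS = [
    {"prefix": "_sync:", "ignore": True},          # Sync Gateway metadata
    {"prefix": "airline_", "index": "airlines"},
    {"regex": re.compile(r".*port_.*"), "index": "airports"},
    {"regex": re.compile(r"(?P<index>.+?)::.*")},  # index from capture group
    {"prefix": "", "index": "etc"},                # catch-all: matches everything
]

def route(doc_id):
    """Return the target index for a document ID, or None if ignored/unmatched."""
    for t in TYPE_DEFS:
        if "prefix" in t and doc_id.startswith(t["prefix"]):
            return None if t.get("ignore") else t["index"]
        if "regex" in t:
            m = t["regex"].fullmatch(doc_id)
            if m:
                return t.get("index") or m.group("index")
    return None  # no definition matched: document is not indexed

print(route("airline_10"))   # airlines
print(route("user::alice"))  # user
print(route("mydoc123"))     # etc  <- why everything ended up in 'etc'
```

The last definition's empty prefix matches every document ID, so anything that falls through the earlier rules lands in its index, which is exactly where my documents went.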

Thanks for the help, David.
