Whether you’re combining data from two different data sources, handling multiple purchases from the same customer, or just entered the same data in a web form twice, everyone faces the problem of duplicate data at one point or another.

In this blog post, we'll look at using views in Couchbase Server 2.0 to find matching fields among documents and retain only the non-duplicate documents. For the sake of this example, assume each document has three common user-specified fields – first_name, last_name, and postal_code. Using the Ruby client for Couchbase Server and the faker Ruby gem, you can build a simple data generator to load some sample duplicate data into Couchbase. To use Ruby as a programming language with Couchbase, download the Couchbase Ruby SDK.
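The core logic of such a generator can be sketched as follows. This is a hypothetical reconstruction, not the actual generate.rb: the real script uses the faker gem and writes to Couchbase, both omitted here so the sketch is self-contained. Every duplicate_rate-th record copies the previous record's fields, producing an intentional duplicate.

```ruby
# Hypothetical sketch of a duplicate-data generator. The names below are
# hard-coded stand-ins for faker-generated values.
FIRST_NAMES = %w[Ann Bob Carol Dave].freeze
LAST_NAMES  = %w[Smith Jones Brown Lee].freeze

def generate_records(total, duplicate_rate)
  records = []
  total.times do |i|
    if i > 0 && (i % duplicate_rate).zero?
      # Reuse the previous record's fields to create a duplicate
      records << records[i - 1].dup
    else
      records << {
        'first_name'  => FIRST_NAMES.sample,
        'last_name'   => LAST_NAMES.sample,
        'postal_code' => format('%05d', rand(100_000))
      }
    end
  end
  records
end
```

In the real script each generated record would then be stored in Couchbase under a unique key.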


Here is an execution sample:


$ ruby ./generate.rb --help

Usage: generate.rb [options]
    -h, --hostname HOSTNAME           Hostname to connect to (default:
    -u, --user USERNAME               Username to log with (default: none)
    -p, --passwd PASSWORD             Password to log with (default: none)
    -b, --bucket NAME                 Name of the bucket to connect to (default: default)
    -t, --total-records NUM           The total number of the records to generate (default: 10000)
    -d, --duplicate-rate NUM          Each NUM-th record will be duplicate (default: 30)
    -?, --help                        Show this message

$ ruby ./generate.rb -t 1000 -d 5
     1000 / 1000


Each document in Couchbase has a user-specified key, which is accessible as meta.id in the map function of the view. In Figure 1 below, there are multiple documents loaded into Couchbase Server using the data generator client above.

Step 1

Write a custom map function that emits the document ID (meta.id) of every document whose fields match a particular duplicate pattern (first_name, last_name, and postal_code in this case).

function (doc, meta) {
  emit([doc.first_name + '-' + doc.last_name + '-' + doc.postal_code], meta.id);
}

The map function defines when two documents are duplicates. According to the map function defined above, two documents are duplicates when the first name, last name, and postal code all match. We insert ‘-’ between the fields to prevent aliasing of the data when the first name, last name, and postal code are concatenated.
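A quick Ruby illustration of why the separator matters: without it, two distinct name pairs can concatenate to the same key and be falsely flagged as duplicates (the names below are made up for illustration).

```ruby
# Without a separator, distinct field values can collapse to one key:
no_sep_a = 'ann'  + 'asmith'   # "annasmith"
no_sep_b = 'anna' + 'smith'    # "annasmith" -- a false duplicate!

# With '-' between the fields, the keys stay distinct:
sep_a = ['ann', 'asmith'].join('-')   # "ann-asmith"
sep_b = ['anna', 'smith'].join('-')   # "anna-smith"
```

This assumes, of course, that ‘-’ does not appear in the fields themselves; any character guaranteed absent from the data works as a separator.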

Step 2

The reduce function looks like this:

function (keys, values, rereduce) {
  if (rereduce) {
    var res = [];
    for (var i = 0; i < values.length; i++) {
      res = res.concat(values[i]);
    }
    return res;
  } else {
    return values;
  }
}
After grouping, if there is more than one meta.id value for a key, the reduce function concatenates them into a single list of meta.id's, each referring to a duplicate document.
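To make the map/reduce behaviour concrete, here is a small in-memory Ruby simulation of what the view computes; the document IDs and field values are made up for illustration.

```ruby
# Simulate the view: group document IDs by the composite key, mirroring
# emit([first-last-postal], meta.id) in the map plus the concatenating reduce.
docs = {
  'id19' => { 'first_name' => 'Ann', 'last_name' => 'Smith', 'postal_code' => '90210' },
  'id20' => { 'first_name' => 'Ann', 'last_name' => 'Smith', 'postal_code' => '90210' },
  'id21' => { 'first_name' => 'Bob', 'last_name' => 'Jones', 'postal_code' => '10001' }
}

grouped = Hash.new { |h, k| h[k] = [] }
docs.each do |id, doc|
  key = [doc['first_name'], doc['last_name'], doc['postal_code']].join('-')
  grouped[key] << id
end
# grouped['Ann-Smith-90210'] now holds ['id19', 'id20']: a duplicate pair.
```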

Step 3

The core part of the data cleaner is written in Ruby:

require 'couchbase'

connection = Couchbase.connect(options)
ddoc = connection.design_docs[options[:design_document]]
view = ddoc.send(options[:view])

connection.run do
  view.each(:group => true) do |doc|
    dup_num = doc.value.size
    if dup_num > 1
      puts "left doc #{doc.value[0]}, "
      # delete documents from second to last
      doc.value[1..-1].each { |id| connection.delete(id) }
      puts "removed #{dup_num - 1} duplicate(s)"
    end
  end
end
Connect to Couchbase Server and query the view. The value field is an array of meta.id’s that correspond to duplicate documents (matching first name, last name, and postal code). If the array size is greater than 1, we delete all the documents except the one corresponding to the first meta.id.
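The keep-one/delete-rest split amounts to a plain array slice; the IDs here are hypothetical.

```ruby
# For one grouped key, keep the first document ID and mark the rest
# for deletion, matching the cleaner's handling of doc.value.
ids     = ['id19', 'id20', 'id21']   # hypothetical value array for one key
keep    = ids.first
to_drop = ids[1..-1]
```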

If the number of meta.id’s in the value array is greater than 1, there are duplicate documents corresponding to that key. As shown in the figure above, id19 and id20 are duplicate documents.

The output of the data cleaner script is shown in the figure below: the duplicate documents have been eliminated.
Thanks to Sergey for putting together the Ruby code.


Posted by Don Pinto, Principal Product Manager, Couchbase

Don Pinto is a Principal Product Manager at Couchbase and is currently focused on advancing the capabilities of Couchbase Server. He is extremely passionate about data technology, and in the past has authored several articles on Couchbase Server including technical blogs and white papers. Prior to joining Couchbase, Don spent several years at IBM where he maintained the role of software developer in the DB2 information management group and most recently as a program manager on the SQL Server team at Microsoft. Don holds a master's degree in computer science and a bachelor's in computer engineering from the University of Toronto, Canada.


  1. It gives me a “Reduction too large” error.

    1. When I get rid of the reduce code chunk then the error disappears. But I suppose I need that reduce code…

  2. Is it possible to do this in N1QL? It should be faster than doing it from the client.

    1. Yes, it’d certainly be possible to do something similar with N1QL in 4.0 and later. This blog was written for 2.0 originally. That said, Couchbase is deployed as a distributed system, so an N1QL procedure would perform about the same as a client would. It really is a client to the underlying data. You get a benefit in some cases by running the query service co-located with the data, but you can certainly do that with other programs too.
