Selecting from array using array index returns empty results, but correct results with USE KEYS

Hi.

I have added one "group" document to the bucket that looks like this:

{
  "groupName": "name",
  "members": [{ "name": "username", /* other properties */ }] // 100K elements (for testing)
}

That `members` array will contain a lot of members, so for quick access I'm going to use an array index:

CREATE INDEX IDX_group_member_name
ON bucket_name (DISTINCT ARRAY member.name FOR member IN members END, members)

The index is created and all is good, but when I execute the following query

SELECT member.*
FROM bucket_name
UNNEST members AS member
WHERE member.name = 'username'

nothing is returned.

Explain output:
{
  "plan": {
    "#operator": "Sequence",
    "~children": [
      {
        "#operator": "DistinctScan",
        "scan": {
          "#operator": "IndexScan2",
          "covers": [
            "cover ((distinct (array (`member`.`name`) for `member` in (`bucket_name`.`members`) end)))",
            "cover ((`bucket_name`.`members`))",
            "cover ((meta(`bucket_name`).`id`))"
          ],
          "index": "IDX_group_member_name",
          "index_id": "318646e3f270ddb0",
          "index_projection": {
            "entry_keys": [
              1
            ],
            "primary_key": true
          },
          "keyspace": "bucket_name",
          "namespace": "default",
          "spans": [
            {
              "exact": true,
              "range": [
                {
                  "high": "\"username\"",
                  "inclusion": 3,
                  "low": "\"username\""
                }
              ]
            }
          ],
          "using": "gsi"
        }
      },
      {
        "#operator": "Parallel",
        "~child": {
          "#operator": "Sequence",
          "~children": [
            {
              "#operator": "Unnest",
              "as": "member",
              "expr": "cover ((`bucket_name`.`members`))"
            }
          ]
        }
      },
      {
        "#operator": "Parallel",
        "~child": {
          "#operator": "Sequence",
          "~children": [
            {
              "#operator": "Filter",
              "condition": "((`member`.`name`) = \"username\")"
            },
            {
              "#operator": "InitialProject",
              "result_terms": [
                {
                  "expr": "`member`",
                  "star": true
                }
              ]
            },
            {
              "#operator": "FinalProject"
            }
          ]
        }
      }
    ]
  },
  "text": "select member.*\r\nfrom bucket_name\r\nunnest members as member\r\nwhere member.name = \"username\""
}

However, when I add USE KEYS ("group") the correct result is returned.
Please advise.
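
For reference, the variant that does return results (assuming the document's key is "group") looks like this:

SELECT member.*
FROM bucket_name USE KEYS ["group"]
UNNEST members AS member
WHERE member.name = 'username'

With USE KEYS the document is fetched directly by key instead of going through the array index, which is why it is unaffected by whatever is wrong with the index.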

There is a limit on index key size; entries that exceed that limit will not be indexed. Check indexer.log for skipped entries. cc @deepkaran.salooja

Thank you for your response. It seems you are right. I found the following line in indexer.log:

2018-04-19T16:08:36.513+03:00 [Error] Flusher::processUpsert Error indexing Key: <concatenation of large amount of user names, like user1user2…> Error: Encoded secondary array key is too long (> 30720). Skipped.

Do you know if this limitation is per document or per bucket? If I'm going to have M groups with around 100K users each, will the index be of no help to me?
Is there anything I can do?

It is per document. Check this post for the calculation and details: How to read max_array_seckey_size setting (Version: 4.5.1-2844 Community Edition (build-2844))

One option is to remove `members` from the index, so that the query uses the index and then does a Fetch (a non-covered index). If you have 100K elements that may not help; alternatively, you can index a different field and apply the array filtering post-Fetch.
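
As a sketch of the post-Fetch approach (here `groupName` is just a stand-in for whatever scalar field suits your queries):

CREATE INDEX IDX_group_name ON bucket_name (groupName)

SELECT member.*
FROM bucket_name
UNNEST members AS member
WHERE bucket_name.groupName = 'name'
  AND member.name = 'username'

The index narrows down which documents to fetch; the UNNEST and the filter on `member.name` then run against the fetched documents, so no large array key ever has to be stored in the index.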
Another option is to change the data model.
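
For example (a sketch, not something prescribed in this thread), you could store each member as its own small document referencing the group, so every index entry stays tiny regardless of group size. Hypothetical member document, keyed e.g. "member::user1":

{ "type": "member", "groupName": "name", "name": "username" }

CREATE INDEX IDX_member_name ON bucket_name (name) WHERE type = "member"

SELECT m.*
FROM bucket_name AS m
WHERE m.type = "member" AND m.name = 'username'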

In 5.0 the limits are increased. cc @deepkaran.salooja

In 5.0, the limits have been removed. The system will automatically handle large size secondary keys.

Thank you very much guys.
Are you talking about version 5.0 of the Enterprise Edition? I ask because I encounter the issue with Community Edition 5.0.1 build 5003.

Yes, it is supported in 5.0 Enterprise Edition.