I use the following line to read a document from a bucket:
defaultCollection.async.get(id).flatMap(_.contentAs[E](codec, tt, tag) ...)(ec)
(E is the entity type)
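For reference, the surrounding wiring looks roughly like this. This is a minimal sketch, not my exact code: the bucket name, credentials, and the Track entity are placeholders (Track just mirrors the [x, y] integer pairs visible in the hex dump below), and it assumes the 1.0 GA API where contentAs resolves its codec implicitly; in beta.1 I pass codec, tt and tag explicitly as in the line above.

import scala.concurrent.{ExecutionContext, Future}
import com.couchbase.client.scala.Cluster
import com.couchbase.client.scala.implicits.Codec

implicit val ec: ExecutionContext = ExecutionContext.global

// Placeholder entity, shaped like the payload in the hex dump below.
final case class Track(points: Set[List[Int]])
implicit val trackCodec: Codec[Track] = Codec.codec[Track] // derived by the SDK via jsoniter-scala

val cluster = Cluster.connect("127.0.0.1", "user", "password").get
val defaultCollection = cluster.bucket("myBucket").defaultCollection

// The failing read: fine for small documents, fails around ~20 KB.
val entity: Future[Track] =
  defaultCollection.async.get("someId").flatMap(r => Future.fromTry(r.contentAs[Track]))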
This read fails whenever the document exceeds a certain size. Failure message:
com.github.plokhotnyuk.jsoniter_scala.core.JsonReaderException: too many set inserts, offset: 0x00003970, buf:
+----------+-------------------------------------------------+------------------+
| | 0 1 2 3 4 5 6 7 8 9 a b c d e f | 0123456789abcdef |
+----------+-------------------------------------------------+------------------+
| 00003950 | 30 30 5d 2c 5b 36 35 35 30 30 2c 2d 31 34 31 30 | 00],[65500,-1410 |
| 00003960 | 30 5d 2c 5b 32 38 35 30 30 2c 2d 31 37 35 30 30 | 0],[28500,-17500 |
| 00003970 | 5d 2c 5b 33 38 33 30 30 2c 2d 31 39 35 30 30 5d | ],[38300,-19500] |
| 00003980 | 2c 5b 36 37 35 30 30 2c 33 35 30 30 5d 2c 5b 33 | ,[67500,3500],[3 |
| 00003990 | 36 35 30 30 2c 2d 32 30 37 30 30 5d 2c 5b 33 37 | 6500,-20700],[37 |
+----------+-------------------------------------------------+------------------+
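Judging by the exception class, the decoding happens inside jsoniter-scala, and the message looks like its guard on Set decoding (setMaxInsertNumber, which I understand defaults to 1024 inserts) rather than a raw byte-size limit: at roughly 14 bytes per [x, y] pair, a 20 KB document holds well over 1024 Set elements. If the codec were derived directly with JsonCodecMaker, the limit could seemingly be raised. A hedged, untested sketch against jsoniter-scala 2.x, reusing the placeholder Track entity from above:

import com.github.plokhotnyuk.jsoniter_scala.core._
import com.github.plokhotnyuk.jsoniter_scala.macros.{CodecMakerConfig, JsonCodecMaker}

final case class Track(points: Set[List[Int]]) // placeholder entity

// With the default config, decoding a Set aborts after 1024 inserts
// ("too many set inserts"); raising setMaxInsertNumber lifts that
// DoS guard for trusted input.
implicit val trackCodec: JsonValueCodec[Track] =
  JsonCodecMaker.make[Track](CodecMakerConfig.withSetMaxInsertNumber(100000))

// ~2000 pairs would trip the default limit but should decode fine here.
val json = (1 to 2000).map(i => s"[$i,${-i}]").mkString("""{"points":[""", ",", "]}")
val track = readFromString[Track](json)

I don't see a way to pass such a config through the SDK's codec derivation, though, which is why I'm reporting it here.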
I used the following query to check the size of the document; it seems to be around 20 KB:
SELECT META(t).id, ENCODED_SIZE(t) AS size FROM `{DB_NAME}` AS t WHERE META(t).id == '{id}'
Result: "size": 20000 or so.
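In case it helps to reproduce, the same check can be run through the SDK along these lines (a sketch assuming the 1.0 GA query API; bucket name, credentials, and the id are placeholders):

import com.couchbase.client.scala.Cluster
import com.couchbase.client.scala.json.JsonObject

val cluster = Cluster.connect("127.0.0.1", "user", "password").get
val id = "someId"

// ENCODED_SIZE(t) reports the stored size of the document in bytes.
val rows = cluster
  .query(s"SELECT META(t).id, ENCODED_SIZE(t) AS size FROM `myBucket` AS t WHERE META(t).id == '$id'")
  .flatMap(_.rowsAs[JsonObject])
  .get

rows.foreach(row => println(s"${row.str("id")} -> ${row.num("size")} bytes"))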
I am using Couchbase Server Community 6.0 with Scala SDK 1.0.0-beta.1. The issue was present in the alpha versions of the SDK too. Note that I do NOT face this issue with the Python SDK.
Would you please investigate this issue?
Thanks!