@ogrdsnielsen as gerald said, I think you won’t get around a simple script, or in bash you split it up into multiple docs first. That said, if you use a language for which we have official SDKs, you’d be better off using KV directly, since it gives you better performance on those kinds of operations (inserts where you already know the key and the value).
Thanks @avsej… but we are already using that approach to split the JSON into multiple files.
However, we ran into a problem on the Unix box: it seems to limit the number of files.
For example, our table has around 20 million records, but when we run the script to generate the JSON files, only 4 million+ files get created on the Unix box. Is there some threshold limit in Unix?
Is there any way to overcome that and generate all 20 million JSON files on the Unix box?
In this case, why don’t you use the regular SDK to load the documents straight from that huge file? You could use a streaming JSON parser (which does not load the full file into memory to parse it) and then upsert all the docs.
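A minimal sketch of that idea in Python, using only the standard library's `json.JSONDecoder.raw_decode` to stream documents out of one large JSON array file. The `upsert_doc` callback and the `doc_<n>` key scheme are assumptions standing in for a real SDK call (e.g. `collection.upsert()` in the Couchbase Python SDK); a dedicated streaming parser like `ijson` would work too:

```python
import json

def stream_json_array(path, buf_size=1 << 16):
    """Yield objects from a file containing one big JSON array,
    without ever loading the whole file into memory."""
    decoder = json.JSONDecoder()
    buf = ""
    with open(path, "r") as f:
        # Read until we find the opening '[' of the array.
        while True:
            chunk = f.read(buf_size)
            if not chunk:
                return
            buf += chunk
            stripped = buf.lstrip()
            if stripped:
                assert stripped[0] == "[", "expected a JSON array"
                buf = stripped[1:]
                break
        while True:
            # Skip separators (whitespace/commas) between documents.
            buf = buf.lstrip().lstrip(",").lstrip()
            if buf.startswith("]"):
                return  # end of the array
            try:
                obj, end = decoder.raw_decode(buf)
            except ValueError:
                # Incomplete document in the buffer: read more.
                chunk = f.read(buf_size)
                if not chunk:
                    return
                buf += chunk
                continue
            yield obj
            buf = buf[end:]

def load_docs(path, upsert_doc):
    """Upsert every document from the file. `upsert_doc(key, doc)`
    is a placeholder for an SDK call such as collection.upsert()."""
    for i, doc in enumerate(stream_json_array(path)):
        upsert_doc("doc_%d" % i, doc)  # hypothetical key scheme
```

This never materializes more than one buffer of the file at a time, so a 20-million-record export can be loaded without creating 20 million files on disk.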