Is there any serious performance drawback to having 10,000 MapReduce views when the write/read ratio is 7:3? Say I'd like to create 10 of those 10,000 MapReduce views on specific collections. Is there any way to force an incoming record to go through only those 10 incremental views instead of all 10,000?
Each view is backed by its own file per node, and a lot of processing can be amortized if you consolidate it into a single view. What I might recommend is processing it all in the same view logic and using compound keys so that different ranges live in the same view. Then you'll be addressing different parts of a single B+Tree rather than maintaining many of them.
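A minimal sketch of what that consolidation could look like. The field name `doc.type` and the document shapes are illustrative assumptions, not from the original post; in a real Couchbase view, `emit` is provided by the server, so it's stubbed here just to exercise the map function locally:

```javascript
// Stub of the server-provided emit(), collecting rows locally.
const rows = [];
function emit(key, value) { rows.push({ key, value }); }

// One map function replacing N per-type views: the first element of
// the compound key selects the "logical view", the second orders rows
// within it. All rows share a single B+Tree.
function mapFn(doc, meta) {
  // Assumes each document carries a `type` field identifying which
  // of the former separate views it would have belonged to.
  if (doc.type) {
    emit([doc.type, meta.id], null);
  }
}

mapFn({ type: "orders" }, { id: "order::1" });
mapFn({ type: "users" }, { id: "user::7" });

// A range query (e.g. startkey=["orders"], endkey=["orders", {}])
// would then address only the "orders" slice of the single index.
console.log(rows.map(r => r.key));
```

Querying with a `startkey`/`endkey` pair on the first key element lets each caller see only the slice that used to be its own view, while writes update one index file instead of thousands.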
I do have to say, 10,000 is a lot, so there might be a better solution depending on what you're trying to do. Could you describe why you think you'll need so many?