Can cbc-pillowfight use an existing dataset?

We have a huge dataset already populated in the CB server. Can cbc-pillowfight make use of that dataset instead of having to create a new one?

We have 10m docs like this:
abc-3219290000 to abc-3229290000

Can we use the -p (prefix) flag and the -I (number of items) flag to make it run against this dataset? Or maybe some other way?

Thanks

This might help somebody looking for an answer to a similar question.

We learned that, at this point, pillowfight cannot work on an existing dataset; it has to generate its own.

Through testing, we also figured out that it builds each key as follows:
prefix + a 20-digit, zero-padded sequence number

So if the prefix (-p) is abc, the key for the first doc will be:
abc00000000000000000001
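The key format we observed can be sketched in Python. Note this `pillowfight_key` helper is hypothetical, purely to illustrate the pattern; it is not part of pillowfight itself:

```python
# Sketch of how cbc-pillowfight appeared (in our tests) to generate keys:
# the -p prefix followed by the sequence number, zero-padded to 20 digits.
def pillowfight_key(prefix: str, seq: int) -> str:
    return f"{prefix}{seq:020d}"

print(pillowfight_key("abc", 1))  # abc00000000000000000001
```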

If --start-at is used, it will start at that offset. For example, if --start-at is 15, the key for the first doc will be:
abc00000000000000000016

Accordingly, the reads and writes that pillowfight performs to simulate a workload will target these keys only.
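Continuing the sketch above, --start-at simply shifts the counter; per our observation that a --start-at of 15 produces a first key ending in 16, a hypothetical helper to enumerate the key space would look like this:

```python
# Hypothetical sketch: list the first `count` keys for a given --start-at offset,
# based on our observation that --start-at N makes the first key use N + 1.
def pillowfight_keys(prefix: str, start_at: int, count: int) -> list[str]:
    return [f"{prefix}{i:020d}" for i in range(start_at + 1, start_at + 1 + count)]

print(pillowfight_keys("abc", 15, 2))
# ['abc00000000000000000016', 'abc00000000000000000017']
```

This kind of helper is handy for cross-checking which keys a pillowfight run will touch before pointing it at a shared bucket.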

HTH