Memory sizing for a Couchbase vs. MongoDB comparison

I need to compare the performance of Couchbase and MongoDB on a single node, especially for business-intelligence queries that include joins and aggregations across buckets or collections.
But I have a question: how much memory should be assigned to the database buckets?
My laptop has 16 GB of RAM, and the dataset I will examine is called covid, with the following bucket sizes:

Allergies: 24 MB
Careplans: 40.1 MB
Conditions: 68 MB
Devices: 23.3 MB
Encounters: 258 MB
imaging_studies: 24.8 MB
Immunizations: 28.9 MB
Medications: 314 MB
Observations: 420 MB
Organizations: 24.5 MB
Patients: 30.5 MB
payer_transitions: 36.6 MB
Payers: 19.7 MB
Procedures: 68 MB
Providers: 37.8 MB
Supplies: 81.7 MB
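
Summing those sizes gives a rough idea of the raw data volume (a quick back-of-the-envelope sketch in Python; it ignores per-document metadata and index memory, and the headroom factor is only a guess, not an official sizing rule):

```python
# Raw bucket sizes in MB, copied from the list above.
bucket_sizes_mb = {
    "Allergies": 24, "Careplans": 40.1, "Conditions": 68, "Devices": 23.3,
    "Encounters": 258, "imaging_studies": 24.8, "Immunizations": 28.9,
    "Medications": 314, "Observations": 420, "Organizations": 24.5,
    "Patients": 30.5, "payer_transitions": 36.6, "Payers": 19.7,
    "Procedures": 68, "Providers": 37.8, "Supplies": 81.7,
}

total_mb = sum(bucket_sizes_mb.values())
print(f"raw data: {total_mb:.1f} MB")  # roughly 1.5 GB in total

# Guess at a data-service quota that keeps everything resident, with some
# headroom for key/metadata overhead; the factor of 2 is an assumption,
# not an official sizing rule.
print(f"bucket quota guess: {total_mb * 2:.0f} MB")
```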

We might need a bit more info on the type of queries you’re planning to run to offer a more specific recommendation. I’ve pointed a couple of other folks to this thread.

Could you be a bit more specific about your goal? Is it to have a large set of data (approximately as described) and run it on the specific system (your laptop) as efficiently as possible with ad-hoc queries that may have joins and aggregations? Or is it a fixed set of queries with just different parameters each time?

One thing I would recommend for certain is looking to 7.0’s (currently in Beta) scopes and collections for the different document types. It probably makes sense to use a single bucket, with collections for each type. Depending on your queries, you’ll likely want to have some memory for indexes.
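
For example, the setup could look roughly like this (only a sketch: it assumes the Python SDK 4.x, a bucket named covid, a scope named clinical, and placeholder credentials, all of which you would adjust):

```python
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Connect to the local single-node cluster (credentials are placeholders).
cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)

# One bucket ("covid"), one scope, one collection per document type (7.0 DDL).
ddl = [
    "CREATE SCOPE `covid`.`clinical`",
    "CREATE COLLECTION `covid`.`clinical`.`patients`",
    "CREATE COLLECTION `covid`.`clinical`.`conditions`",
    "CREATE COLLECTION `covid`.`clinical`.`encounters`",
    # ...and so on for the remaining document types...
    # A primary index per collection lets ad-hoc N1QL queries run right away,
    # though targeted secondary indexes will serve the joins far better.
    "CREATE PRIMARY INDEX ON `covid`.`clinical`.`patients`",
    "CREATE PRIMARY INDEX ON `covid`.`clinical`.`conditions`",
]
for stmt in ddl:
    list(cluster.query(stmt))  # iterating forces execution; DDL returns no rows
```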


The queries are like: merge the COVID-19 conditions with the patients; the period of time each COVID patient was connected to a ventilator; the total cost for each COVID patient; and the number and total cost of immunizations for each patient. These involve joins and aggregations.
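
For example, the conditions-to-patients merge I have in mind would look roughly like this in both systems (just a sketch; the bucket/scope/collection layout and field names such as PATIENT, Id, and DESCRIPTION are assumptions based on how I loaded the CSVs):

```python
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from pymongo import MongoClient

# Couchbase: join COVID-19 conditions to patients with N1QL (7.0 collections).
cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)
n1ql = """
    SELECT p.Id, p.FIRST, p.LAST, c.DESCRIPTION, c.START, c.STOP
    FROM `covid`.`clinical`.`conditions` AS c
    JOIN `covid`.`clinical`.`patients`   AS p ON c.PATIENT = p.Id
    WHERE c.DESCRIPTION LIKE '%COVID-19%'
"""
for row in cluster.query(n1ql):
    print(row)

# MongoDB: the same merge expressed as a $lookup aggregation.
db = MongoClient("mongodb://localhost:27017")["covid"]
pipeline = [
    {"$match": {"DESCRIPTION": {"$regex": "COVID-19"}}},
    {"$lookup": {"from": "patients", "localField": "PATIENT",
                 "foreignField": "Id", "as": "patient"}},
    {"$unwind": "$patient"},
]
for doc in db.conditions.aggregate(pipeline):
    print(doc)
```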

Or simpler queries, like the number of co-occurring conditions, the number of encounters by encounter class, the count of non-survivors, and allergies ordered by co-occurrence.
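
And one of the simpler aggregations, encounters per encounter class, would be roughly (again a sketch with the same assumed names):

```python
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from pymongo import MongoClient

# Couchbase: count encounters per encounter class.
cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)
n1ql = """
    SELECT e.ENCOUNTERCLASS, COUNT(*) AS num_encounters
    FROM `covid`.`clinical`.`encounters` AS e
    GROUP BY e.ENCOUNTERCLASS
    ORDER BY num_encounters DESC
"""
for row in cluster.query(n1ql):
    print(row)

# MongoDB: the same aggregation with $group.
db = MongoClient("mongodb://localhost:27017")["covid"]
pipeline = [
    {"$group": {"_id": "$ENCOUNTERCLASS", "num_encounters": {"$sum": 1}}},
    {"$sort": {"num_encounters": -1}},
]
for doc in db.encounters.aggregate(pipeline):
    print(doc)
```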

The main goal is to investigate the behavior of the two databases for these ad-hoc queries, which may have joins and aggregations, as efficiently as possible

on my laptop, which has 16 GB of RAM, a 10th-generation Core i7, and a GTX 1660 Ti.

The total size after loading into Couchbase is approximately 1500 MB.

The second step is to investigate their behavior for a transaction that may combine these ad-hoc queries with write statements, according to the consistency level.
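
On the MongoDB side I imagine the transactional step looking something like this (a sketch only; multi-document transactions need the server running as a replica set, even a single-node one, and the patient id here is hypothetical). Couchbase offers an analogous distributed-transactions API in its SDKs that I would exercise the same way.

```python
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")
db = client["covid"]

def add_followup_encounter(session):
    # Read inside the transaction, then write a document based on the result.
    patient = db.patients.find_one({"Id": "example-patient-id"}, session=session)
    db.encounters.insert_one(
        {"PATIENT": patient["Id"], "ENCOUNTERCLASS": "followup"},
        session=session,
    )

with client.start_session() as session:
    session.with_transaction(
        add_followup_encounter,
        read_concern=ReadConcern("snapshot"),
        write_concern=WriteConcern("majority"),
    )
```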

My idea revolves around how document databases handle a read-heavy workload that may involve some writes, with and without indexing.
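
For the with/without-indexing comparison, I would create secondary indexes on the join and filter fields and rerun the same queries (a sketch, with the same assumed names as above):

```python
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from pymongo import MongoClient

# Couchbase: secondary (GSI) indexes on the join/filter fields.
cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)
for stmt in (
    "CREATE INDEX idx_cond_patient ON `covid`.`clinical`.`conditions`(PATIENT)",
    "CREATE INDEX idx_cond_desc ON `covid`.`clinical`.`conditions`(DESCRIPTION)",
):
    list(cluster.query(stmt))  # iterating forces execution; DDL returns no rows

# MongoDB: equivalent single-field indexes.
db = MongoClient("mongodb://localhost:27017")["covid"]
db.conditions.create_index("PATIENT")
db.conditions.create_index("DESCRIPTION")
```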

Any advice would be of value.
Another question: I'm confused about the amount of RAM I should specify for the buckets I previously mentioned.
