I understood your question to be “how to load multiple CSVs each with different data into one bucket using a type attribute to differentiate the source”, which is what I attempted to demonstrate. I could have misinterpreted this (if so, sorry!).
Assuming you need an explicit ‘type’ attribute — rather than relying on meta().id alone to differentiate between the types — you would update either the CSV prior to loading, or the already-loaded documents whose keys match. (Having an explicit type field also expands your indexing options.)
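For instance, one thing an explicit type field enables is a partial index scoped to a single document type, which meta().id prefixes alone don’t give you as directly. A sketch in N1QL — the bucket name `mybucket` and field `patient_id` are hypothetical placeholders:

```sql
-- Hypothetical partial index covering only one document type;
-- queries filtered on type = "allergy" can use this smaller index.
CREATE INDEX idx_allergy ON `mybucket`(patient_id) WHERE type = "allergy";
```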
A variation: load with a known unique key prefix and update only those documents, rather than everything sharing a broader common prefix. For example, if you include a date in the key — `-g '20210810_allergy_#UUID#'` — then you can update only the documents matching a specific load. (Obviously customise the key scheme to suit the frequency of your data loading.)
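The follow-up update for a dated prefix might look like the N1QL below — a sketch, assuming a bucket named `mybucket` (a primary index, or an index on META().id, would be needed for the LIKE predicate to be serviceable):

```sql
-- Hypothetical: set the type only on documents from the 2021-08-10 allergy load,
-- matched by the key prefix baked in at import time.
UPDATE `mybucket`
SET type = "allergy"
WHERE META().id LIKE "20210810_allergy_%";
```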
(If you’re loading often it may well be simplest to just update the CSV beforehand.)
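If you go the update-the-CSV-beforehand route, appending a constant type column is a one-liner. A minimal sketch — file names and the `type` value are illustrative, and the naive field split won’t handle quoted fields containing commas:

```shell
# Sample input CSV (stand-in for your real export).
printf 'patient,reaction\nP1,peanut\n' > allergy.csv

# Append a "type" header and a constant value to every data row.
awk -F, -v OFS=, 'NR==1 {print $0, "type"; next} {print $0, "allergy"}' \
    allergy.csv > allergy_typed.csv

cat allergy_typed.csv
```

Then cbimport picks the field up like any other column, so every loaded document carries the type without a second pass over the bucket.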