I know the question was addressed to Peter - this "answer" is not a direct reply to the question, but (perhaps like Dewster's reply) a related technical branch of it..
IMHO.. (as in, my opinion only! ;-)
For an electronic musical instrument, particularly (but not exclusively) if recording direct (DI), life is a LOT easier than recording acoustic instruments -
And the theremin should be about the easiest electronic musical instrument to record.. You know what the maximum level will be simply by staying away from the volume loop and doing a quick pitch sweep; set the peak level to whatever you usually allow as a maximum for recording (perhaps as close to 0 dB as you dare) - done this way, your recording should have the full available dynamic range (16 bit, 24 bit - whatever).
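To put a number on the "as close to 0 dB as you dare" part, here's a minimal sketch in Python (the FULL_SCALE constant and the example peak value are purely illustrative assumptions, not figures from any real session) of how a recorded peak maps to headroom below digital full scale:

```python
import math

FULL_SCALE = 32767  # peak sample value for 16-bit signed audio (illustrative)

def headroom_db(peak_sample: int) -> float:
    """How far (in dB) a recorded peak sits below digital full scale (0 dBFS)."""
    return 20 * math.log10(peak_sample / FULL_SCALE)

# e.g. a pitch-sweep test capture peaking at 29000 out of 32767:
print(f"peak sits at {headroom_db(29000):.1f} dBFS")  # about -1.1 dBFS, nearly full scale
```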
When one has recorded an instrument at maximum bit resolution without clipping (that is, with the loudest sample just a fraction below digital full scale), one has a signal which can be attenuated into the mix with minimal loss of resolution..
Remember that for every 6 dB of reduction into the mix, one is effectively losing 1 bit: an 18 dB reduction on a 16-bit signal leaves you with a 13-bit reproduction of the waveform, and if you recorded with peaks at -6 dB (so effectively 15 bits to start with), that same 18 dB reduction leaves you with effectively 12 bits. (I think this is why I prefer to take my tracks through an analogue mixer and re-record the mix-down from the mixer - it always sounds better to my ears done this way.)
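That arithmetic is easy to sanity-check - here's a rough Python sketch, assuming the usual figure of roughly 6.02 dB of dynamic range per bit (the function name and example values are mine, purely for illustration):

```python
DB_PER_BIT = 6.02  # roughly 6 dB of dynamic range per bit

def effective_bits(bit_depth: int, record_peak_dbfs: float, fader_db: float) -> float:
    """Effective resolution after recording below full scale and attenuating into the mix.

    record_peak_dbfs: recorded peak relative to 0 dBFS (e.g. -6.0)
    fader_db:         attenuation applied at mix-down (e.g. -18.0)
    """
    return bit_depth + (record_peak_dbfs + fader_db) / DB_PER_BIT

print(effective_bits(16, 0.0, -18.0))   # ~13 bits: recorded full scale, pulled down 18 dB
print(effective_bits(16, -6.0, -18.0))  # ~12 bits: recorded at -6 dBFS, same 18 dB fader
```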
I strongly suspect that the whole digital audio scene is one of the most elaborate technical frauds / cons of all time. For years we have been listening to mixes done on early digital desks where parts of the mix (the low-level background / backing) sometimes had an actual resolution of less than 9 bits! The reason bit depth has steadily increased as technology advanced isn't that we can hear the difference between, say, 16 bits and 24 bits - we can't!... But we certainly can hear the difference between 8 bits and 16 bits, and anything below 12 bits can be heard by many people.. 24 bits gives those 8 extra needed bits so that the digital mangling at mix-down doesn't produce too many signals with resolution below 12 bits...
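For what it's worth, the audibility claim lines up with the textbook quantization-noise figure of roughly 6.02 × N + 1.76 dB SNR for an N-bit signal. Here's a quick sketch that measures it on a quantized sine (Python with NumPy; the sample rate, test frequency and simple mid-tread quantizer are my assumptions, just to make the point):

```python
import numpy as np

def quantized_snr_db(bits: int) -> float:
    """Measured SNR of a full-scale sine quantized to the given bit depth."""
    t = np.arange(48000) / 48000.0                # one second at 48 kHz
    signal = np.sin(2 * np.pi * 440.0 * t)        # full-scale 440 Hz test tone
    step = 2.0 / 2 ** bits                        # quantizer step over [-1, 1)
    quantized = np.round(signal / step) * step    # simple mid-tread quantizer
    noise = quantized - signal
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

for bits in (8, 12, 16):
    print(f"{bits} bits: ~{quantized_snr_db(bits):.0f} dB SNR")
# roughly 50, 74 and 98 dB - i.e. every bit you lose costs about 6 dB of SNR
```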
But IMO, record full-scale at 16 bits and mix down on a good analogue desk, and the results kick the sh*t out of any 24-bit all-digital DAW.
Fred.
I must just say this - the above is perhaps based on an out-of-date understanding of digital audio.. I was only heavily involved with digital audio right at the start of the "revolution", pre-1980.. At that time I was the senior engineer at a large studio and compact-cassette (CC) duplicating factory (FPA in Wimbledon, now defunct like almost every company I have ever worked for - even the NHS is now defunct, I must be to blame.. )-:
We got one of the first digital recording / duplicating setups in the UK - data was recorded onto video tape (Betamax) and re-recorded onto 1" analogue reel-to-reel on a large Studer. This tape was then placed into a loop bin, where it was looped at high speed synchronously with a stack of "pancake" recorders - these "pancakes" were spools of cassette tape, which were then run into cassette winders, churning out hundreds of compact cassettes every hour..
But even on such a system, the quality degradation on digitally sourced content was noticeable. We were all puzzled, and did a full evaluation - what we saw was horrific.. All the low-level content was degraded - and given compact-cassette noise levels, this "low level" wasn't really that low; the genuinely low-level content was lost below the noise floor.
After examining the source material and consulting with the digital equipment manufacturer and the record company who had commissioned the run, the cause turned out to be exactly what I described above - the digital masters had been digitally 'engineered' before we got the tape, and the process had completely screwed the content. The digital equipment was removed.... I know things have advanced hugely since then - but AFAIK the underlying fundamentals haven't changed.