Let's Design and Build a (mostly) Digital Theremin!

Posted: 10/2/2020 5:27:09 PM
pitts8rh

From: Minnesota USA

Joined: 11/27/2015

Which reminds me: maybe consider a separate ESD ground for your encoder boards, rather than lumping it together with digital ground? - Dewster

Sure, I can do that if the present plane is tied to digital ground.  I don't remember doing that intentionally - I just keep the surrounding copper there for etching purposes, usually unconnected unless I need ground return paths for the signals.

Maybe you could remind me of this when the time comes?  I started a list of changes for the next board build, and then I forgot where I put it.  Typical. 

Posted: 10/2/2020 5:47:50 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

"Doesn't this filament have metal powder inside? (If so, it could interfere with LC)."  - Buggins

I don't believe so, but good question!

As a quick experiment:

1. I took my plastic hand position measuring rod and brought it near the pitch plate, which gave ~1/2 octave increase (kind of a surprise). 

2. Next I taped a piece of blue masking tape to it and got about the same. 

3. Next I taped a short piece of PVC pipe to it and got ~1 octave.

4. Next I taped gray PLA, black PETG, and space gray PETG parts, and they all gave me ~1 octave.

I think the C interaction here is due to the relative permittivity of the plastics (IIRC PVC is ~3), not conduction.  Increasing the permittivity over that of air (~1) will increase C.
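To put some toy numbers on that reasoning (idealized parallel-plate formula, nothing like the real antenna field, so purely to show the direction of the effect):

#include <cstdio>
#include <initializer_list>

// Idealized parallel-plate capacitor: raising the relative permittivity of
// whatever sits between the "plates" raises C proportionally.  The area and
// gap are made-up numbers, not anything from the D-Lev.
int main()
{
    const double eps0 = 8.854e-12;   // F/m, permittivity of free space
    const double area = 0.01;        // m^2 (made up)
    const double gap  = 0.05;        // m   (made up)
    for (double er : { 1.0, 3.0 })   // air (~1) vs. roughly PVC (~3)
        std::printf("er = %.1f  ->  C = %.2f pF\n", er, er * eps0 * area / gap * 1e12);
    return 0;
}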

Posted: 10/2/2020 6:43:41 PM
pitts8rh

From: Minnesota USA

Joined: 11/27/2015

FWIW, I've tested printed objects using both the black and silk/metallic filaments in the pitch field of a theremin and also have not seen any evidence of any effect that couldn't at least partially be attributed to the relative permittivity as Dewster describes. Printed parts seem to affect the pitch field about the same as other plastics. Early on I was worried about the possible presence of carbon black as a pigment for my black D-Lev II pitch arms, but after testing I see nothing to be concerned about.  Both PLA and PETG can absorb moisture too, but until a real problem shows up I'm using it.  The metallics or silk colors do not use metal for the sheen (although metal-loaded filaments can be purchased for special uses) as this would be abrasive to the nozzles and expensive.  I know some metallic paints use mica powder for a shine, but I have no idea what the ingredient is in filament.

Posted: 10/2/2020 8:22:38 PM
tinkeringdude

From: Germany

Joined: 8/30/2014

Fate decided to conspire with the >= 2 SW devs here unabashedly vouching for SCM, to make said class of tools seem more enticing.

As it so happened, I had a pleasantly brief little git adventure today.

There was a bug I discovered in something that's not well covered by tests or looked at often: an LED status display that had worked fine was now doing weird things, but only for one particular device state. And since this was discovered by chance, it had probably been lingering for a while; there were no recent changes to blame.
While the visible issue itself wasn't very important, I fortunately had a suspicion that it might still have a bigger impact and should be investigated.
Because this less important feature is connected to a more important one: on a small-ish MCU (by today's standards) where I needed a small amount of parallelism but an RTOS would be overkill, there is one 1 kHz interrupt that does two things: it drives this LED status thing, but it also monitors the power pins of a much more complicated system whose power that MCU controls, with the necessity to react swiftly when something is not right with the power.
So the display acting weird had me worried, because of that possible connection.
At first I checked a few hunches: toggling pins in strategic places and looking at the scope revealed nothing suspicious; everything was running like clockwork the whole time.

Then I decided not to waste more time guessing and trying things, and went to the git log.

I kept that one TortoiseGit log window open the whole time on one of the screens, for a constant overview of what I was doing.

- Committed the last changed files, to have a clean state. *1
- Meditated for 10 seconds, then pointed, in the log *2, to a commit at a date where I felt I might have last seen this thing working correctly.
- Created a new branch from the chosen commit and switched to it. *3
- In the project IDE window (open the whole time), built and ran on the target. An iteration like this takes in the low tens of seconds (okay, maybe not with millions of changed source files).
- Bug still there. So I went down in the log about as far as I had already gone with the first guess from the top. That spanned a few weeks' worth of several daily commits.
- It worked again!
- So I went to the log and picked the commit in the middle between the first two picks.
- No worky. So I went to the commit in the middle between this one and the last one that worked.
- And so forth.
- After 5 or 6 steps, I identified the commit that introduced the breaking change.
(- delete unneeded branches)

You probably recognized that pattern: Binary search, logarithmic time. Mucho gusto.
Which you can also do with zipped up code - but how fine is the granularity of changes in which you bother to do that? It is less convenient.
And at the very latest, when you try to map the concept of doing experiments in different "branches" of the project timeline, that zip file approach must be roughly like walking through a tar pit. Perhaps sleep deprived and confused to boot. Creating more errors and confusion.
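If you want it spelled out, the pattern is just this (a plain C++ sketch with a hypothetical isBroken() standing in for "check out that commit, build, flash, look at the LEDs"; git's built-in bisect command automates exactly this walk):

// Commits are just indices here, oldest first.  We assume the bug appears at
// some commit and stays (monotonic), which is what makes binary search valid.
bool isBroken(int commit);   // hypothetical: build & test that commit

int findFirstBadCommit(int knownGood, int knownBad)
{
    // Invariant: knownGood is good, knownBad is bad, knownGood < knownBad.
    while (knownBad - knownGood > 1)
    {
        const int mid = knownGood + (knownBad - knownGood) / 2;
        if (isBroken(mid))
            knownBad = mid;    // bug already present -> look earlier
        else
            knownGood = mid;   // still fine -> look later
    }
    return knownBad;           // first commit where the bug shows up
}

A few dozen commits collapse into the 5 or 6 checks mentioned above.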

A diff of the last working vs. the first non-working commit showed a bunch of files. One was directly connected to the module doing the display stuff - but the changes there were just comments, nothing that could produce the effect.
Another file had a few code changes lit up that I now remembered. It was also used by the visibly affected module.
Turns out my initial hunch was right: I had added some function template specializations (possible since C++14 or so) to a "library" (compiled with the project), and one of them was missing a -1 at the end. Copypasta error.
It wasn't caught otherwise because no test checked all the type variants.
The compiler decided, though, that the new template was now a better fit for an existing piece of code than an older more broad one, and so the bug came into effect in that one place. It would have produced future bugs.
Yeah, making things more expressive and convenient in a language that is not historically so can sometimes get, eh, surprising
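The shape of the trap, with made-up names (not the actual code): once the specialization exists, the call that used to go through the broad template silently goes through the new one, and takes its copy-paste bug along.

#include <cstdint>
#include <cstdio>

// The old, broad function template: index of the highest bit for any type.
template <typename T>
constexpr unsigned topBitIndex(T) { return 8 * sizeof(T) - 1; }

// The newer specialization added later for one particular type, carrying the
// copy-paste bug: the "- 1" went missing.
template <>
constexpr unsigned topBitIndex<uint16_t>(uint16_t) { return 8 * sizeof(uint16_t); }

int main()
{
    std::printf("%u\n", topBitIndex(uint32_t(0)));  // 31, via the broad template
    std::printf("%u\n", topBitIndex(uint16_t(0)));  // 16 - should be 15
    return 0;
}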

Anyway, the whole thing might have taken 15 minutes or so. I would never have guessed that, and I also wouldn't have suspected the particular function, because "it worked before!" and "too basic to be likely wrong". The visible effect was weird enough not to suggest anything obvious about the nature of what was wrong, and stepping through it with a debugger doesn't work well with an interrupt needing to do its thing to make the effect visible - if you don't know what the problem is, you don't necessarily know what "wrong variable values" look like at every point of stepping through the code. I guess I would have tried to create some (high speed, compact data, local) logging facilities (not so wastefully easy on an MCU as, say, on a PC in Python) and painted a picture of what exactly all the connected things do, in what order, and how the values behave over time.
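(The kind of logging facility I mean would be roughly this - a sketch, all names made up: a tiny ring buffer the ISR can write a timestamped event into with a handful of stores, to be read out and stared at later.)

#include <cstdint>

struct LogEntry
{
    uint32_t timestamp;   // e.g. a free-running timer count
    uint8_t  id;          // which event / which place in the code
    uint8_t  value;       // small payload, whatever is interesting
};

constexpr unsigned LOG_SIZE = 256;                    // power of two for cheap wrap
static LogEntry          g_log[LOG_SIZE];
static volatile uint32_t g_logHead = 0;

inline void logEvent(uint32_t now, uint8_t id, uint8_t value)
{
    const uint32_t i = g_logHead++ & (LOG_SIZE - 1);  // wrap with a mask
    g_log[i] = { now, id, value };
}

// Later, from the main loop or a halted debugger, walk g_log starting at
// (g_logHead & (LOG_SIZE - 1)) to reconstruct the order of events.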
This history search approach was much quicker.
It's like being able to save at any point in a computer game and then later load a save to look: how did I get to that place again? Just way better.


*1 (one could also "stash", but you better not forget about doing that, hehe)
*2 your typical vertical list of items with newest entry @ top. With date/time and a rather long hash of the commit as ID.
*3 What this does is transform your project folder (more precisely, your local repository copy) to the state of that commit. But you can also make experiments for debugging purposes in that branch without affecting the master branch or others, while still tracking those experiments, so you can come back to them later when you've learned some things looking from another angle in another branch.
In the TortoiseGit log window, you will see prominently colored tags appear at a commit that is now also the start of a branch. So you can watch your partitioning algorithm progress like a living textbook example.

Posted: 10/2/2020 8:34:28 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Bowling For Presets

My fall-back lately for a quick and easy preset is something that percussively "rings".  Noticed yesterday that an empty bowl on the kitchen table had a nice bell-like quality.  So I grabbed two others in the pantry and had a go at them:

From outer to inner: #1 large pasta serving bowl, #2 medium pasta serving bowl, #3 medium white porcelain serving bowl.

I used one of my wine glass presets as the base, squinted at the various spectral peaks in Audacity, and went to town.  Here's the result, with real bowl #1 first, synth bowl #1 second, real bowl #2 third, etc.: [MP3].

They're trivial presets, but fun to do, and if nothing else help keep me in shape to tackle the tougher ones.

Posted: 10/3/2020 3:35:52 PM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

"Fate decided to conspire with the >= 2 SW devs here unabashedly vowing for SCM, to make said class of tools seem more enticing." - tinkeringdude

I do appreciate that!  Your explanation is quite fascinating, and really showcases the utility and convenience of version control.

"And at the very latest, when you try to map the concept of doing experiments in different "branches" of the project timeline, that zip file approach must be roughly like walking through a tar pit."

Yes, I've lost entire days backing out problematic code changes in thread 7.  Being largely non-real-time, it executes the most complex code, and more of it than any other thread.  Since this is my own processor design, I don't have the luxury of a JTAG debugger, and have to rely instead on simulation (where many bugs won't show their faces without proper, often rather involved and circumstantial, stimulation).

"The compiler decided, though, that the new template was now a better fit for an existing piece of code than an older more broad one, and so the bug came into effect in that one place. It would have produced future bugs.  Yeah, making things more expressive and convenient in a language that is not historically so can sometimes get, eh, surprizing"

I can't imagine doing anything embedded / real-time with C++?  I don't constantly count every little cycle with the D-Lev SW running on Hive, but there is a strict budget when 8 threads have to equally share 180MHz worth of cycles interrupted at 48kHz (468 cycles each to be exact) and I have a HW flag for each thread that tells me when real-time has been exceeded (calling a new interrupt while servicing one - which causes the new interrupt to be ignored and discarded).

It's actually pretty amazing the 48kHz DSP you can accomplish with a 33x33=65 bit multiplier ALU running at 180 MIPS.  Assembly really helps in squeezing it all out IMO (and I'm a control freak, so there's that).

Posted: 10/3/2020 7:37:52 PM
tinkeringdude

From: Germany

Joined: 8/30/2014


I can't imagine doing anything embedded / real-time with C++?  I don't constantly count every little cycle with the D-Lev SW running on Hive, but there is a strict budget when 8 threads have to equally share 180MHz worth of cycles interrupted at 48kHz

You don't need to use a hydraulic drill where poking a hole with a needle will do, even if you bought that new big toolbox.
C++ compiler-generated executables, from tests I've seen, are not inherently slower than C ones. (They can be faster where, in equivalent C code, you try to replicate concepts that are language features in C++ - because the C++ compiler "gets" what you want to do and has optimizations for it, while the C compiler doesn't.)
What's slow is when you use parts of the standard library that are prone to dynamic memory allocation.
The exception mechanism - same thing. (Don't use it.)
When you do something that needs to be as fast as possible, virtual inheritance might also not be good - but this tends to be overdramatized.
When big objects are passed by value, it used to be the case that this was slow(er than necessary) because of copying. Since the introduction of "move semantics", copies can be avoided more easily. (For your own stuff it takes some extra effort; std library stuff tends to have it built in. If you don't want dynamic allocation at all, these things are probably off the table anyway, though - although I have not looked so deeply into it.)

You don't even need to use anything OO-related to reap some benefits of current C++. You can still stay procedural, but use function templates (incl. the readily available ones in the standard library, like subsets of those in algorithm, limits, type_traits, etc.).

With recent C++ versions, I decidedly switched from C to C++ for embedded bare-metal projects exactly because it allows me to code better for two to three parties: the product, the maintainer(s), and the author of the code alike. (In the latter case, by reducing the tedium.)
The aforementioned subset of standard library headers offering template stuff you can use at compile time, in conjunction with the ever-mightier "constexpr" keyword, allows you to do all sorts of computations at compile time and make their results part of the image that lands in program memory, in the project code itself, so everyone gets what's going on. Not some magic hard-coded tables computed elsewhere.
This is far less painful and bug-inviting than macros. (And try to use a macro that does something interesting within another macro... whaa whaa whaaaaa... You'd need multipass - some people rely on external tools for that. How nice if the language you are supposedly using actually lets you express it.)
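A trivial made-up example of the kind of thing I mean (needs C++17 for the constexpr std::array indexing): a perceptual-ish LED brightness table computed entirely by the compiler, landing in the image as data, with the formula sitting right next to its use:

#include <array>
#include <cstdint>

// Simple quadratic brightness curve, 0..255 -> 0..255, computed at compile time.
constexpr std::array<uint8_t, 256> makeBrightnessTable()
{
    std::array<uint8_t, 256> t{};
    for (unsigned i = 0; i < 256; ++i)
        t[i] = static_cast<uint8_t>((i * i + 127) / 255);
    return t;
}

constexpr auto kBrightness = makeBrightnessTable();

// The compiler itself checks the endpoints; none of this runs on the target.
static_assert(kBrightness[0] == 0 && kBrightness[255] == 255, "endpoints off");

// In the 1 kHz ISR, using it is just: pwm = kBrightness[level];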

You can (more safely) use things that would be considered "dirty" - and put code next to them that makes checks as complex or exhaustive as you need, to test the assumptions made in the code. Like two places in the code relying on the same order of something, implicitly mapping things by position so as not to spend extra memory on doing it explicitly - which is prone to creating bugs when any part of it gets altered later. Or if you assemble some data structure that is to lie in program memory without any run-time or boot-time overhead, and you want to check its consistency in some way (because somewhat repetitive lines of assignments sometimes let errors be overlooked easily): just code those checks in a constexpr function and put a call to that function in a static_assert. If it fails, you get a compile-time error. None of this needs to be executed on the target. It won't even be code on the target, and won't waste space, if nobody calls it at run-time.
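Concretely, something in this spirit (made-up example): two tables that implicitly map to each other by position, with a compile-time check that the build refuses to pass if someone later grows one and not the other:

// Two parallel tables, implicitly indexed by the same enum.
enum Channel { CH_PITCH, CH_VOLUME, CH_COUNT };

constexpr int      kDefaultGain[] = { 12, 7 };   // one entry per Channel
constexpr unsigned kPinNumber[]   = { 4, 5 };    // one entry per Channel

// Consistency check, evaluated by the compiler, never executed on the target.
constexpr bool channelTablesConsistent()
{
    return sizeof(kDefaultGain) / sizeof(kDefaultGain[0]) == CH_COUNT
        && sizeof(kPinNumber)   / sizeof(kPinNumber[0])   == CH_COUNT;
}

static_assert(channelTablesConsistent(),
              "per-channel tables out of sync with the Channel enum");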
This increases compile times. Preferable to time wasted debugging bugs created by some foul-ish optimization.
And with multithreaded compilation on an 8-core machine with 16 hyperthreads or so, and the code lying on an SSD - at the size of the typical MCU project, who notices that anyway. Especially if you use forward declarations and references in headers where possible, so you don't trigger a recompile chain reaction every time anything changes anywhere.

I was mildly amused when I recently tried to see how far this constexpr goes by now, and something like this compiled and worked as... let's say suspected:

constexpr unsigned NBITS = Log2( NextPOT(someEnum_count or tableSize or what was it) );
struct alignas(uint8_t) SomeStruct
{
    uint8_t A: NBITS, B: NBITS, C: NBITS;
};

I.e. a bitfield where the bit widths are not literals, but something you computed at compile-time.
(Not useful when you (ab)use a bitfield for hardware register mapping, but sometimes you just want to store stuff as compactly as possible, in a way that conveniently and automatically tracks the evolving code.)
Something that's dependent on other parts of the code with their own constants - and you really want one place that is the "master" of it all, the place where you turn screws to adjust something, while all dependent parts follow automatically.
One can do such things with macros, but far less extensively (multipass!) and with no type safety.

Log2 and NextPOT are constexpr function templates in my library. GCC has __builtin_log2 variants, which are all float of one size or another - not good when you happen to call such a function at run-time for a change.
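Roughly this kind of thing, simplified (not templated like mine, and only a sketch):

#include <cstdint>

// Next power of two >= x (for x >= 1), e.g. NextPOT(5) == 8, NextPOT(8) == 8.
constexpr uint32_t NextPOT(uint32_t x)
{
    uint32_t p = 1;
    while (p < x)
        p <<= 1;
    return p;
}

// Integer log2 of a power of two, e.g. Log2(8) == 3.
constexpr unsigned Log2(uint32_t pot)
{
    unsigned n = 0;
    while (pot > 1) { pot >>= 1; ++n; }
    return n;
}

static_assert(Log2(NextPOT(5)) == 3, "5 rounds up to 8, which is 2^3");

Usable both at compile time (as in the bitfield above) and at run time, all in integer.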

Oh, and using the algorithm header, and things like std::min / std::max, instead of stupid surprise macros for the same purpose.
And all the code duplication and maintenance worries due to the C alternatives to macros working with only one type (or, if nothing gets stored compactly, using a biggest-type-of-the-family function and calling it with smaller compatible args... that's not always good).
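The classic surprise, for anyone who hasn't been bitten yet (made-up snippet): the macro evaluates an argument twice, std::min evaluates it once and is type-checked.

#include <algorithm>
#include <cstdio>

#define MIN(a, b) ((a) < (b) ? (a) : (b))   // the usual C suspect

int main()
{
    int calls = 0;
    auto next = [&] { return ++calls; };     // something with a side effect

    int m1 = MIN(next(), 10);                // next() runs twice: m1 is 2, not the 1 it compared
    int m2 = std::min(calls, 10);            // evaluated exactly once, both args must be int

    std::printf("m1=%d m2=%d, next() was called %d times\n", m1, m2, calls);
    return 0;
}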



It's actually pretty amazing the 48kHz DSP you can accomplish with a 33x33=65 bit multiplier ALU running at 180 MIPS.  Assembly really helps in squeezing it all out IMO (and I'm a control freak, so there's that).

I did not have to use asm on the ARM Cortex-Ms, but I wouldn't hesitate to use it on MCU projects if it seemed to make sense and if it would probably be reusable on other such projects.
I could only imagine doing that when something in an ISR runs at a rate that makes the MCU sweat and using a bigger one isn't an option for some reason. Just not likely to happen currently. At previous work with, in the widest sense, consumer electronics, where every cent per unit counts, that could have been more likely.
Or some inner loops that are speed-critical. But not the overall logic of the program, which eats only a few % of the processing.
Okay, your application is basically all data processing. But it probably also has places in the code that get executed far less frequently than your "hot spots". And if you try to optimize a group of code spots that in total eats 5%, and you manage to make it eat 4% - I guess there are things the time is better spent on; let the compiler do the rest.

Posted: 10/3/2020 8:00:17 PM
tinkeringdude

From: Germany

Joined: 8/30/2014

But anyway, "C++ on embedded", well. The one I was using recently didn't even have hardware div.
But have you seen these puny microcontrollers lately...
https://www.youtube.com/watch?v=bD7VZN0SK7s&t=20s

Posted: 10/4/2020 2:29:04 AM
dewster

From: Northern NJ, USA

Joined: 2/17/2012

Always extremely interesting to hear first-person accounts of the advanced levels technology has reached lately.  Programming seems largely limited by the imagination now, and there are quite exciting things going on in that huge field. 

So much of engineering is KISS, and where you draw that line can be both practical and highly personal.  Determination is still a significant variable, and coming even fractionally up to speed on any field can be quite daunting and demanding even if you are entirely determined.  And working with or within any group can be an extra chore.

"The one I was using recently didn't even have hardware div."  - tinkeringdude

Not that it's entirely representative, but I do explicit integer division exactly once in the D-Lev SW (for the displayed resonator delay "frequency").  And there are two implicit floating point inverses (one for each axis: log2, negate, exp2 - a fractional negative power for linearization), but outside of those I've found faster and more efficient polynomial approaches.  Hardware division is a really weird thing to ask an ALU to do, and floating point isn't a good fit either.  Both completely screw with numeric representation in the registers, not to mention the length of the pipeline, and require something completely crazy (IMO) like multi-cycle microcode.  If double-result multiplication weren't so incredibly powerful (IMO it is the #1 thing, after bit width (you just gotta have 32 bits to do significant DSP), that separates utterly trivial processors from the truly useful ones) I might leave it out too, though memory access is roughly equivalent in terms of pipeline, and you have to have that.
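To give a flavor of what I mean by polynomial approaches (this is only an illustration, in plain C++ floating point for readability - the real thing lives in fixed-point on Hive, and these aren't the actual coefficients): a cheap quadratic that matches 2^x at both ends of [0,1] and stays within a few tenths of a percent in between, so an exponential costs a couple of multiplies plus a shift for the integer part, with no division anywhere.

#include <cmath>
#include <cstdio>
#include <initializer_list>

// Quadratic approximation of 2^f for f in [0,1); integer part handled exactly.
double exp2_approx(double x)
{
    const int    i = static_cast<int>(std::floor(x));  // integer part -> shift
    const double f = x - i;                            // fractional part in [0,1)
    const double p = 1.0 + f * (0.6565 + f * 0.3435);  // equals 2^f at f = 0 and f = 1
    return std::ldexp(p, i);                           // p * 2^i
}

int main()
{
    for (double x : { -1.5, 0.0, 0.25, 0.5, 2.75 })
        std::printf("x = %5.2f  approx = %8.5f  exact = %8.5f\n",
                    x, exp2_approx(x), std::exp2(x));
    return 0;
}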

I needed a small FPGA anyway to tightly control LC stimulation / acquisition timing (the hammer I know), and the rest just sitting there was my siren song.  If FPGA interconnect speed wasn't ~10% that of straight silicon, and power draw wasn't ~1000% (or more), Hive wouldn't look like so much of a vanity project.  But sometimes you just have to follow a path to the end.  Though if I can design a processor then just about anyone can, and I don't want to belong to any club that would have me as a member! :-).

[EDIT] Wanted to add: there's a huge "divide" between mathematics and DSP needs.  For DSP you just need a sufficiently good answer (precision to so many bits).  For math you need all the bits possible to be 100% correct, even down to the rounding, to have truly portable code, and this can lead to HW traps to incredibly slow SW routines to handle float denorms and such (which shows you just how bad a fit floats can be to ALU HW).  Two entirely different worlds targeting the same GP processors.  Bass and treble controls vs. atom bomb sims.

I sometimes wonder if IEEE float standardization was in some sense a mistake.  And I used to think the Intel x86 80 bit float was really weird (and it is generally portrayed as such), but it makes a lot of sense because there's no precision loss for 64 bit integer representations.  But sticking those in 2^n memory was a bridge too far (floats are one huge pile of extremely useful, yet extremely sub-optimal HW trade-offs - nature abhors a vacuum and floats).

[EDIT2] I know I sound like a broken record, but I just don't think the target processor silicon needs to be anywhere near as complex as it is to get meaningful things done.  IMO much of the weirdness in the SW world is directly caused by the weirdness in the HW world.  Drill down and you can't even predict what a branch predictor will do.  Speculative execution, caches, there's just too much unnecessary state, and it all can get in the way of getting things done, or even understanding what's going on when it acts up, not to mention the never ending security holes.  I just want to multiply these two numbers together sometime today and not have some random kid on the internet know the result before me.  It's rather farcical, and I often wish the entire computational world could be started fresh with a clean slate, every level from HW to SW kept as dead simple as humanly possible.

Posted: 10/4/2020 10:16:18 AM
tinkeringdude

From: Germany

Joined: 8/30/2014


there's a huge "divide" between mathematics and DSP needs.  For DSP you just need a sufficiently good answer (precision to so many bits).  For math you need all the bits possible to be 100% correct, even down to the rounding, to have truly portable code

vs.

I sometimes wonder if IEEE float standardization was in some sense a mistake.

Wha? I thought *that* was the thing to make stuff portable. One less source of unpredictability, diverging float implementations.
At least you know the kind of errors to expect.

Even though probably not as nasty (or dangerous) as with control loops, I learned the hard way that float is not magic when I was implementing ways of preparing artificial world geometry for interactive display: a world of arbitrarily oriented polygons, any of which can split the world in two, to divide and conquer the problems of determining the best rendering order, what to render at all, and collision detection from any point or direction, all off-loaded into a big pre-computation step, as was common in the early 2000s. Back in the day when gfx HW was slow, had no to low memory for such things, and to render or not to render a single triangle was the question.
Splitting the world up might sometimes create splinters that don't render well, or create numerical problems in the map further down the road, unless such cases get special treatment or are perhaps prevented in the first place - slapping generous tolerances on everything was not a panacea, unfortunately. Although a lot of the pain was gone when I switched to double precision for the off-line computations. Worth every extra minute the 32-bit machine spent chewing on that.  If I wasn't such a math dummy, my own chewing on that would probably have been shorter.


I just don't think the target processor silicon needs to be anywhere near as complex as it is to get meaningful things done.  IMO much of the weirdness in the SW world is directly caused by the weirdness in the HW world.  Drill down and you can't even predict what a branch predictor will do.  Speculative execution, caches, there's just too much unnecessary state

If you don't want computers to be limited to DRAM speeds (ok, they've come a long way, too!), I have not heard of a thing where re-starting the computation world would make this fundamental problem go away.

Making code cache-friendly and branch-predictable is a chore, but not as traumatizing as dealing with the lines of attack against the developer (and ultimately the user) that are 1) instruction / memory access reordering by the compiler and 2) by the CPU itself.
When I first learned about these things I was asking whether they could possibly be serious or whether I had just misunderstood, LOL.
And at that time there were no language constructs to really deal with that, no way around inserting an inline asm opcode here and there. So much for portability. (There are some C++ things now, at least for the compiler end.)
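For the record, the sort of construct that exists now (a sketch; whether it is sufficient depends on the exact situation and target): std::atomic with explicit memory ordering instead of sprinkling inline asm barriers around.

#include <atomic>
#include <cstdint>

// "Data written by an ISR, flag checked by the main loop", with the ordering
// made explicit so neither compiler nor CPU may reorder past it.
static uint32_t          g_sample;                   // written before the flag
static std::atomic<bool> g_sampleReady{false};

void sampleIsr()                                     // hypothetical interrupt handler
{
    g_sample = 1234;                                 // produce the data first
    g_sampleReady.store(true, std::memory_order_release);  // then publish it
}

void mainLoopPoll()
{
    if (g_sampleReady.load(std::memory_order_acquire))     // observe the flag
    {
        const uint32_t s = g_sample;                 // guaranteed to see the new value
        (void)s;                                     // ... use it ...
        g_sampleReady.store(false, std::memory_order_relaxed);
    }
}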
I have not seen cost-benefit numbers for this aspect of things. I am slightly skeptical.
At the end of the day, I'm the one who has to wear the tinfoil hat and ward off those foes, they're all plotting against me, I knew it!!!
(okay, the compiler was always against me, I noticed that right the first time we met!)
