Zero = Denorm!
I've been working on floating point algorithms and have a polynomial version of square root pretty much finished. I finally figured out how to hand-adjust poly coefficients at high degree without the whole thing going crazy: just apply the opposite adjustment to the next higher term as well, which very much localizes the change. I've probably spent weeks adjusting poly terms; I wish I'd stumbled on this a lot earlier.
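A minimal sketch of why the trick works, assuming the polynomial is evaluated on [0, 1] (the coefficient values below are made up purely for illustration): adding d to term k and subtracting d from term k+1 changes the polynomial by d*x^k*(1-x), which vanishes at both endpoints instead of swinging the whole curve.

```python
def poly_eval(coeffs, x):
    """Evaluate c[0] + c[1]*x + c[2]*x**2 + ... via Horner's method."""
    y = 0.0
    for c in reversed(coeffs):
        y = y * x + c
    return y

# Hypothetical coefficient set, just for illustration.
coeffs = [1.0, 0.5, -0.125, 0.0625, -0.0390625]

d, k = 1e-3, 2                 # bump term k ...
tweaked = list(coeffs)
tweaked[k] += d
tweaked[k + 1] -= d            # ... and counter-bump term k+1

# Net change is d * x**k * (1 - x): zero at x = 0 and x = 1,
# so the adjustment stays local to the middle of the interval.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    change = poly_eval(tweaked, x) - poly_eval(coeffs, x)
    print(f"x={x:4}  change={change:+.3e}")
```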
Anyway, I decided to go back and do the simple things like float multiplication, addition, subtraction, etc., and it turns out they aren't all that straightforward to do and require gobs of cycles. Signed significands introduce complexity, but the surprising thing is zero, which, if you think about it, is a denorm! So even if you plan to flush denorms to zero, you still have to give zero itself special treatment everywhere. I probably should have started with float multiplication and worked my way up to the heavier algorithms, but what can you do?
(Zero is a float denorm because the exponent and radix-point-aligned significand make floats semi-logarithmic. And while a logarithmic system can get arbitrarily close to zero, it can never reach it: LOG[0] = negative infinity.)
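To see why zero forces special handling, here's a toy Python sketch of normalized float multiplication (the field width and encoding here are invented for illustration, not Hive's actual format): with an implicit leading 1 on the significand, no exponent/fraction pair can encode zero, so the multiply has to check for it up front, before any of the normalization logic runs.

```python
# Toy normalized binary float: value = sign * (1 + frac/2**F) * 2**exp.
# The implicit leading 1 means no (exp, frac) pair can represent zero,
# so zero gets its own flag -- and every operation must test for it.

F = 23  # fraction bits, chosen arbitrarily for this sketch

def f_val(x):
    """Decode a toy float tuple (sign, exp, frac, is_zero) to a Python float."""
    s, e, f, z = x
    return 0.0 if z else s * (1 + f / 2**F) * 2**e

def f_mul(a, b):
    """Multiply two toy floats, truncating (no rounding) for brevity."""
    sa, ea, fa, za = a
    sb, eb, fb, zb = b
    if za or zb:                        # zero short-circuits everything:
        return (sa * sb, 0, 0, True)    # no normalization step applies
    sig = (fa + (1 << F)) * (fb + (1 << F))   # restore implicit 1s
    exp = ea + eb
    if sig >> (2 * F + 1):              # product in [2, 4): shift right
        sig >>= 1
        exp += 1
    frac = (sig >> F) - (1 << F)        # drop the implicit 1 again
    return (sa * sb, exp, frac, False)
```

For example, 1.5 encodes as `(1, 0, 1 << 22, False)` and 2.0 as `(1, 1, 0, False)`; multiplying them and decoding with `f_val` gives 3.0, while any operand with the zero flag set takes the early-out path.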
To deal with floats I added an OP_SGN opcode to Hive, which returns -1 for negative numbers and +1 for non-negative ones (including zero). By doing, say:
s3 := SGN[s0]
P0 *= P3
you get the absolute value of the number in s0. And the sign itself can be kept for later use if desired.
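In Python terms, the two-line sequence above amounts to something like this (a sketch of the intent only, not Hive code):

```python
def sgn(x):
    """Hive-style OP_SGN: -1 for negative, +1 for non-negative (incl. zero)."""
    return -1 if x < 0 else 1

def abs_and_sign(x):
    """|x| = x * sgn(x); the sign is returned too for later reuse."""
    s = sgn(x)
    return x * s, s
```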
It's kind of ironic that the same hardware guys who pushed the IEEE float committee hard for denorms then went on to implement them in super inefficient ways.
"...premature optimization is the root of all evil." - Donald Knuth
I encountered the above quote and, while I agree with the spirit of it, a local optimization process often yields deeper insights that are difficult to pick up again later. So you might as well put a certain level of effort into what you are doing before moving on, particularly if it is thorny and full of nuance. I don't dare drop this algorithm stuff until I'm substantially past it, as it's just too detailed to quickly and easily relearn and get back to the same spot. When I was gainfully employed, I remember one coder who spent most of each day coming up to speed on what he did the day before, and only then would he add to or change things, often in the late evening. Heads can only hold so much.