"Hehe, no thinking of trolling, there was enough truth in that post, just confused by some details or the angle of view."
That's a relief! I was wielding a pretty broad brush and didn't mean to get any paint on you! :-)
"Maybe it's also a matter of taste. I can't stand Apple stuff. Who knows in what weird way that works they claim is genius!"
Well, some folks I like a lot like them a lot, so I try not to step on toes, but as one wag stated, they seem to be more of a fashion company at this point.
"Lenovo has this interesting red rubber thing on some of their notebooks. I don't know what's inside. You apply some force with a finger in X,Y direction, and barely move/deform it at all, but it reacts like some sort of analog joystick, with minute movements."
I had one of those as corporate issue and could never get into the eraser thingie, though I can't say I tried all that hard. There are comparisons to naughty bits that I shan't go into. :-)
"A relative uses an USB trackball by Logitech(?) for some hand problems which make use of a mouse very uncomfortable. So that still exists, if as an external device."
I've owned and used a lot of trackballs. The early ones rode on rods that, if they weren't properly hardened and heat treated, actually wore away pretty quickly from the plastic ball rubbing against them - something that seems kind of impossible, but there it is. Logitech makes an OK one, but it lacks a scroll wheel, which is a must IMO. I used a Kensington Expert for years; it works well if you remove the magnet in the outer dial, but the whole thing is way too high in the sky, leading to wrist fatigue. Good build but overpriced. The default pointing device in many early laptops was a small trackball, and I have no earthly idea why that changed, as everything else seems quite inferior.
"Lol! Code like I imagine from reading this ("emergent...") warrants the emergence of a large fly swatter right on somebodies behind."
It seems quite common among HDL hackers: a counter or two coupled to async statements, sometimes with the flops off somewhere else, grouped together for no obvious reason. The worst is schematic capture of "standard logic" type boxes - no clue as to what's going on, just a pile of unnamed wires. Best practice is to use state machines, but they don't interact with counters all that well: get two clocked things together and the timing becomes almost impossible to simulate in your head. I've sort of come full circle, from counters + async, to state machines, and back again, though I still use state machines when they really make sense. What I aim for: clean, consistent code layout; comments for most sub-blocks; short but descriptive signal names; "right-sized" modules (not too complicated but not trivial either); and all of it with an eye towards ease of verification.
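For what it's worth, here's the shape of what I mean, sketched in plain C rather than any HDL (all made-up names, purely an illustration): descriptive state names, the counter living right next to the state machine that drives it, and every state update happening in exactly one place.

/* Minimal sketch (hypothetical, a software analogue of the style above):
 * an explicit state machine with named states and a single update step,
 * instead of a counter tangled up with scattered conditional logic. */
#include <stdio.h>

typedef enum { ST_IDLE, ST_RUN, ST_DONE } state_t;   /* descriptive state names */

typedef struct {
    state_t state;   /* the "flop": the only place state is stored      */
    int     count;   /* counter kept next to the FSM that drives it     */
} fsm_t;

/* One step = one "clock edge": all state updates happen here, nowhere else. */
static void fsm_step(fsm_t *f, int start)
{
    switch (f->state) {
    case ST_IDLE:
        if (start) { f->count = 0; f->state = ST_RUN; }
        break;
    case ST_RUN:
        if (++f->count >= 4) f->state = ST_DONE;     /* run for 4 steps */
        break;
    case ST_DONE:
        f->state = ST_IDLE;                          /* one-step "done" pulse */
        break;
    }
}

int main(void)
{
    fsm_t f = { ST_IDLE, 0 };
    for (int i = 0; i < 8; i++) {
        fsm_step(&f, i == 0);                        /* pulse "start" once */
        printf("step %d: state=%d count=%d\n", i, f.state, f.count);
    }
    return 0;
}

Nothing clever, but you can read it top to bottom and simulate it in your head, which is the whole point.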
"I have no clue of HW dev equivalents or applicability, but in software, for this there are unit tests (using test frameworks for different languages & environments, to not repeat all the work common to all tests, and to automate stuff)"
HDL has this too; for medium to large projects there is often a sim guy who does nothing (!) but run scenarios through a gauntlet of theoretical and previous pass/fail cases. It's good and bad - I think it can make HDL coders (even more) sloppy and lazy. For me testing is an absolutely essential and enjoyable experience, but I have zero interest in catching anyone else's bugs.
"Of course one can't practically cover everything. But to check whether a module does basically what it's supposed to, how it reacts to edge cases.
If you then change something and run the tests, and something has changed the behavior, it explodes in your face and you know you need to fix it. If it seems difficult or complicated to test the edge cases of a module, that module is probably less of a module and more of a ball of wool *after* the cat played with it. Or worse, the whole system is like that. Dependencies make isolated testing hard or impossible - so just trying to do it will even warn you about maintainability problems in the code one may not have thought about before, because one was not forced to."
Exactly. Preach it brother!
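To make it concrete, here's the flavor of tiny, isolated test I mean, in plain C with nothing but assert() - the sat_add8 "module" is made up purely for illustration. Because it has no dependencies, its edge cases take a handful of lines to pin down, and any change that breaks the saturation behavior blows up the moment the tests run.

/* Toy example of an isolated edge-case test; sat_add8 is hypothetical,
 * not from any real project. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* The "module" under test: 8-bit saturating add. */
static uint8_t sat_add8(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + (unsigned)b;
    return (sum > 255u) ? 255u : (uint8_t)sum;
}

int main(void)
{
    /* Normal cases. */
    assert(sat_add8(1, 2) == 3);
    assert(sat_add8(100, 100) == 200);

    /* Edge cases: zero, boundary, and overflow behavior. */
    assert(sat_add8(0, 0) == 0);
    assert(sat_add8(255, 0) == 255);
    assert(sat_add8(255, 1) == 255);    /* must saturate, not wrap to 0   */
    assert(sat_add8(200, 200) == 255);  /* must saturate, not wrap to 144 */

    puts("all tests passed");
    return 0;
}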
"Indeed, and some of the "best practices" also help make it clearer. If e.g. code is composed in a way that there's never too much complexity in one place, it's easier to follow than a page of code that jumps between levels of abstractions all the time, forcing the reader to context-switch a lot, which is mentally exhausting, and makes one error prone in attempts of getting what's going on, for at some point your mind will just be in the wrong frame...
Which means: keep things simple, and reasonably split responsibility into smaller units of code (files, modules, classes, whatever)."
Yes, KISS, modularity for ease of understanding and verification.
"When the editor is showing a really tiny scroll bar, one might as well take the hint that the source file is too damn large"
LOL!
"And then of course he had variables accessed from everywhere throughout the system and couldn't possibly be aware of all potential race conditions. Not to mention that it's smelly to begin with that accesses of the same variable from different logical layers of the program are happening... as well as HW driver functions being accessed from different levels in the logical hierarchy of the program... who needs a well defined flow structure or resource access, or clearly separated responsibilities anyway!?"
Was this Toyota mission-critical SW? Probably not, but it sounds like it. There was a SW type analyzing their code for a court case and it was horrible: unsafe globals, giant cryptic functions, processes dying for lack of real time - you name it and they were doing it wrong. He could flip a single bit in the code and the car would take off like an uncontrollable rocket. Happened a couple of times to my Dad in a used Toyota they bought: sitting at a stop, the thing revved up out of the blue and just about killed him; his quick thinking was to jam on the brakes and turn off the ignition. Which is what got me looking into the issue. Toyota lost the case, which is good because that code killed some people, but it seems they worked really hard to cover it up and blame the victims. Maybe they all do it (?) but I'll never buy a Toyota.
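To be clear, I have no idea what their actual code looked like beyond the court testimony, but the generic version of the bug pattern your quote describes is something like this made-up embedded C sketch: one global, touched by an interrupt and by the main loop, with a read-modify-write that can silently lose updates.

/* Generic sketch of the unprotected-shared-global pattern -- NOT anything
 * from the actual case, just a classic embedded C bug shape. */
#include <stdint.h>

static volatile uint16_t throttle_request;   /* global touched from more than one place */

/* Hypothetical timer interrupt: one writer... */
void timer_isr(void)
{
    throttle_request += 1;                   /* read-modify-write, not atomic */
}

/* ...and the main loop also read-modify-writes the same global. If the ISR
 * fires between the read and the write-back, its update is silently lost. */
void control_task(void)
{
    uint16_t req = throttle_request;         /* read        */
    if (req > 100)
        req = 100;                           /* modify      */
    throttle_request = req;                  /* write back: may clobber the ISR's update */
}

int main(void)
{
    /* On a desktop build there is no real interrupt, so this only shows the
     * structure; on target, timer_isr() preempting control_task() mid-update
     * is where the trouble lives. */
    timer_isr();
    control_task();
    return 0;
}

Multiply that by a few hundred globals and a few logical layers all reaching into them, and nobody can enumerate the interleavings any more.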
"I couldn't comment on hardware languages. I sometimes suspected when some EE wrote some funny piece of software that they were thinking in hardware design mode which didn't translate too well. But as I have at best very sketchy ideas about what HW design is like, I don't give too much weight to that hypothesis We'll see, if the day ever comes that I actually look into FPGA, whether I'll do the reverse and my EE colleague laughs at me for it"
You're giving them way too much credit! :-) List-type programming and HDLs are more alike than different, and share many of the same techniques and approaches. But managing anything beyond the simplest concurrency requires a few new crutches (i.e. sketching the logic to show the domains bordered by flops, along with any feedback/feed-forward paths crossing them, and sketching waveform tables), and of course being familiar with basic digital constructs and their code representations. At some point higher-level architectural issues - and therefore interfaces and handshakes - become the focus, and that's when the real fun starts, though the devil, as always, remains below.
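As a teaser for the "interfaces and handshakes" part, here's a toy valid/ready-style handshake modeled in plain C - two "modules", each stepped once per loop iteration standing in for a clock, talking only through a small, explicit interface. It's not cycle-accurate and it's not any particular bus standard, just the shape of the idea.

/* Hypothetical sketch of a valid/ready handshake between two stepped "modules". */
#include <stdbool.h>
#include <stdio.h>

/* Signals crossing the interface. */
typedef struct {
    bool valid;   /* producer: data is present      */
    bool ready;   /* consumer: I will take data now */
    int  data;
} channel_t;

typedef struct { int next; } producer_t;
typedef struct { int sum;  } consumer_t;

/* Each step() is a software stand-in for one concurrent process. */
static void producer_step(producer_t *p, channel_t *ch)
{
    if (ch->valid && ch->ready)          /* previous word was accepted */
        ch->valid = false;
    if (!ch->valid && p->next < 4) {     /* offer the next word */
        ch->data  = p->next++;
        ch->valid = true;
    }
}

static void consumer_step(consumer_t *c, channel_t *ch)
{
    ch->ready = true;                    /* always willing to accept */
    if (ch->valid && ch->ready) {
        c->sum += ch->data;
        printf("took %d (sum=%d)\n", ch->data, c->sum);
    }
}

int main(void)
{
    channel_t  ch = { false, false, 0 };
    producer_t p  = { 0 };
    consumer_t c  = { 0 };
    for (int cycle = 0; cycle < 10; cycle++) {   /* the "clock" loop */
        producer_step(&p, &ch);
        consumer_step(&c, &ch);
    }
    return 0;
}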