Segmentation fault, do you have GDB set up? Well, it's a lot more practical/sane to use from within an IDE (for everyday work - for some special scenarios, knowing your way around the CLI is nice, but I don't know it by heart).
Debugging with GDB should land you at the piece of code where a sigsegv happens - or if it's in library code, maybe in disassembly... but the callstack should be there for you to traverse backwards and see where it came from.
Let's Design and Build a (mostly) Digital Theremin!
"Segmentation fault, do you have GDB set up?" - tinkeringdude
Yes, but it barely told me anything so maybe something's off (was using it on the command line?). The article I read on how to do it seems good and it promised a paradise of info at the end: [LINK]. I also tried the other two things she mentions there: valgrind (same, very little info) and ASAN (wouldn't compile). Took me forever just to figure out how to get the core dump to go to a file.
I removed the fifo from the keyboard scanner, it's simpler to leave unused data in the system key byte queue and remove it as needed, rather than empty it all at once to an intermediate buffer and read from there. At least that's working like a champ.
[EDIT] Didn't see your post before this until now.
"The executable probably doesn't start out of the file browser since it's not a GUI based program and isn't kind enough to open a terminal emulator with a shell to also display the things you'd like to see."
I think it's a problem with what feeds nautilus info re. file types (but I'm no expert): [LINK]. Unix unfortunately doesn't have a dedicated extension or anything for executables. (And I hate to gripe, but anything less than the Cadillac of all file browsers in Unix of all things seems kinda nuts. The WinXP browser runs circles around anything I've seen in Linux land. What's with shortcuts on the left instead of a real expandable tree view?)
"If your program runs long enough to verify this, try a program like htop (might even be installed already) in a dedicated terminal to look whether you can see your process running, totally without a shell to output its text to (or... maybe it insta-crashes if there is no shell, I'm not sure. Still no mistake to get acquainted with htop)"
Thanks! I'll definitely look into it.
"The latter can be ameliorated by using just one line at the top of each header file:
#pragma once"
Thank you for that too! Might actually give it a shot (though I'm not too keen on doubling the file count).
"2. but how in the world can you make a program beyond hello world that can do without header files? "
I feel like I'm skirting the upper bound of awkwardness / complexity in not using headers. Have to make sure that if A relies on B to be instantiated that B doesn't in any way rely on A. I don't even have a project file (just #include all files in the top and sub files once).
Even my crummy home made assembler pulls all the files in before it hollers about something being missing. The sequence of declaration / definition shouldn't be an issue in a SW project. I can see why these things are called "languages" as they are difficult to learn and almost impossible to master.
Command Line Weirdness
I'm implementing command line features like in an OS console for my Hive simulator (and also for my proposed command line calculator). So I'm trying stuff out on the Ubuntu command line to try to get some sense of how they implemented various things, particularly command line entry, editing, and recall.
With text on the command line, the left and right arrow keys move the cursor around. If you start typing with the cursor inside a word on the line, it inserts the character before the cursor (this is how text is placed at the end of the command line as well). The insert key doesn't change the mode (e.g. from insert to overwrite). The backspace key deletes the character before the cursor and the cursor moves to the left once (remaining anchored on a character). The delete key deletes the character under the cursor, and the cursor remains anchored in place (with the text to the right shifting left once). When the CTRL key is held down, the right arrow moves the cursor to the next space on the line, but the left arrow moves the cursor to the beginning of the previous word. I think the CTRL arrow behavior should be more consistent, so I've made mine go to the first character of the word in both directions.
All of the above is pretty easy to implement via a C++ string and its class functions.
Command past history can be recalled via the up and down arrow keys. Up arrow once recalls the most recent command, twice the command before that, etc. And if you are in the history then the down arrow moves from past to present, stopping on the current command line (whether it's been executed or not). What's weird is you are actually editing the history entry when a past command is recalled for modification! What's also weird is the current command line is placed in the command history even if it isn't executed!
So if I type:
1 (up) (backspace) 2 (up) (backspace) 3
Then:
(down) returns 2, (down) returns 1
And I can get the same results by juxtaposing (up) and (down) if I'm in the history. Kinda unexpected. I naively thought only commands that were executed ended up in the command buffer. Though decoupling the command storage from the command execution probably simplifies things, particularly for Linux where everything seems to be line oriented.
Anyway, the command history buffer is fairly weird. It isn't a LIFO or a FIFO, but a kind of circular buffer that gets an element added to it each time. So if there is a fixed maximum depth N, then with use it will eventually become full. When full it needs to remember the most recent N commands via overwriting and bookkeeping. I don't believe there is the need nor the desire for a super deep buffer for my stuff, as many of the function keys in my sim utilize the command line for their implementation (e.g. F6 inserts " cyc " on the command line to do a single cycle) thus somewhat polluting the history.
[EDIT] I think it's pretty odd to have mutable command history - it really shouldn't be. The edit buffer should be separate, with the up and down arrows loading from history to buffer. Execution should be the only way to store the edit buffer to the command history.
Looking at my code, it's also kind of odd they didn't pick the down arrow for past history rather than the up arrow, as accessing the past requires a decrement of the read pointer / address. Maybe they were thinking like a text editor, where past entries are up the page? I'm conditioned so I don't care, but I wonder if the convention they picked ever strikes noobs as backwards.
You probably aren't reading this for the first time, but I'll mention those things anyway:
I feel like I'm skirting the upper bound of awkwardness / complexity in not using headers.
Depending on how much stuff you have there, you could also end up with more "locally condensed complexity" that way, though.
If you find yourself using crutches to know what's what in your big blob of everything, and introduce things which remind you of things from other parts of the system, or other abstraction levels, in places where you shouldn't need to worry about any of it, then there is, in a way, more complexity in place at a time, and it forces you to think in a more complicated way or be distracted.
Now I don't know what your code and its resulting amalgamation look like, so maybe I speak in riddles, but I'll try to describe potentially occurring problems.
Could be it doesn't apply to your particular setup and what still lies ahead implementation wise.
In plain C, headers help encapsulate and decouple by "information hiding" - well not from you if you have the source, but the access is restricted.
If you really create only one singular compilation unit by recursively preprocessor-pasting everything into one c(pp) file (if that's what you're doing), you don't have that separation. Hypothetically, there could even be a bug in one c file A), referencing a variable by a name that does not exist in that file and was never intended, but that does exist in c file B). If both are include-pasted into c file C), then code in A may have side effects in file B, although you don't include A in B - they just got pasted in an (un)lucky order in C - and you get really confused, not noticing the error because it compiles so nicely, although it shouldn't have in the first place.
(sure this can be made less likely... by crutches I was alluding to)
If you declare every single routine and module-scope variable in a .c file (the "module") as static, then those won't be accessible from anywhere else, whether other code knows the name or not (or accidentally stumbles upon it, or gets it inserted by a copy & paste accident) - and auto complete won't list them in the first place, unless the implementation is junk. The few really necessary interface routines remain non-static ("extern", the default) & get a declaration in the header.
You could still pass around pointers, of course - via defined boundaries, which sometimes makes sense. But that is so deliberate that, in comparison to everything just always being "open", accidents should be much harder to produce.
It also means you can, for "private" (static) stuff, keep your names short, named in a way which makes sense in module scope, at which you only need to be thinking when implementing a module (save for the interface contract).
In that regard C++ namespaces are an even further improvement. Things within the (sub-)namespaces get appropriately short names, thinking only about things relevant within their immediate scope (and external dependencies). No fear of stupid name collisions, no resorting to string BAKE_FRIKKIN_POOR_MANS_NAMESPACE_TAGS_INTO_EVERY_ENTITY_NAME = "Bob"; and no cluttering everything with references to concepts you should not need to be distracted by when thinking about the innards of a certain module at a certain level - like making the surrounding higher levels explicit for every 3 words you say.
My header files are pretty lean usually. Some MCU vendor example code sets bad examples of dumping #define "constants" en masse in there which have no business being there, as not part of the interface & no concern to the client code...
IDEs like Eclipse can generate a pair of e.g. .hpp + .cpp files based on a class name and a few option checkboxes, generating a predefined (customizable) code skeleton.
There are probably scripts / macros for certain IDEs which generate a complete matching header from a largely finished .c file, grabbing everything that's not static, to reduce effort. (I'm not using those things so much)
Some people structure their files such that all header files are in one "header" top folder (or virtual folder, only shown like this in the IDE), and all the .cpp files in a "source" folder - some IDEs do it like that by default. Including this false dichotomy of "header" vs. "source" naming *clenches fists*.
Anyway - that way you won't see 2 files of everything in a (virtual IDE) folder all the time - I personally hate that separation, but to each his own.
For some things, you may not want both files, but only headers.
Sometimes it's nice to put struct, enum etc. declarations in a header (or headers) within a folder that encapsulates logically connected (sub-)modules which may partially share those things - it would make headers with logically close class prototypes unnecessarily messy if all the little-but-plentiful stuff was also in there.
Then there are class template implementations which, alas, need to be in header files due to technical reasons...
C++ of course has the problem that this pseudo separation of interface and implementation files is more of a farce due to every private and protected member variable and method being visible in the header aka interface file, so that you can glean some insights about the implementation and hence become reliant on it, potentially at your peril.
Some people run once more around the block to fix that, in some places deemed important, by making a member variable a pointer to a forward-declared class with a hidden implementation in the .cpp file, so the "client" sees only the forward declaration, not the gory details. It's called the "Pimpl" (private (or "pointer to") implementation) pattern... (this is popular for libraries with header-only + binary delivery format, I guess)
All that said, C/C++ headers are quite a hack of a mechanism, keeping the tool chain simpler back then, and leaving it open to all sorts of hackery to achieve things not supported per se (my guess).
I don't even have a project file (just #include all files in the top and sub files once).
*If* you were to use the "Eclipse for C++ developers" bundle, at least that could take off some work in that regard.
By default, when selecting managed build in the "new project" wizard, simply all (recognized as C/C++ source) files in all folders and subfolders you add to your project directory are automatically added for compilation. It usually just works, zero manual labor.
There is also the (still experimental?) option of using a managed, but still makefile based project. I.e. you could later grab the makefile and build the stuff in other surroundings, I guess, without having to manually fiddle with it, as long as used in Eclipse, anyway.
IIRC its tree view also has display options to show as-is in the file system, or do some automatic category sortings or stuff (If I'm not mixing that up with another IDE...), e.g. to not see the headers in the same folder as c files.
The sequence of declaration / definition shouldn't be an issue in a SW project
Amen.
Newer languages tend not to suffer from nonsense like that.
I am still reluctant to try it, but I am keeping half an eye on what Rust is doing, supposedly the new "systems programming" language, which, IIRC, is also less primitive than C/C++ in those regards.
But I don't see huge support for it yet.
Does it really *modify* the history, as in, the old entry from way back, or only modify "the history" in that it adds one entry? (the latter was my perception, but I didn't pay so close attention really)
The Ctrl + L/R cursor thing, doesn't notepad++ and other windows editors do it thus, too?
It moves between "beginning of word" (boundary between alphanumerical vs. other character groups) positions (or what it perceives as word, or in a syntax aware scenario editor, also recognizing sections of a CamelCaseExpression), whether forward or backward - so in that way, it is consistent.
But anyway, there are a lot of weird things.
The terminal still thinks a DEC terminal is top notch; many of the nice special function keys of this newfangled contraption called the "IBM keyboard" don't work as intended in the standard Linux text processing programs of any sort, even the supposedly more "user friendly" ones.
Still partying like it's 1969.
If you ever have to edit something in VI(M) (e.g. logging in from a remote terminal emulator), you'll notice the INS key is used for *something*. (you can't add new text otherwise)
Up or down... I never pay attention to that. I pressed the key, saw what it did, and remembered that.
Probably played too many video games in the past to bother with a fixed idea of how up or down should move (key up = screen up vs. flight stick forward = nose down, both rotating in the same angular direction? Both make sense).
But I can't stand it when the mouse does it the other way than I expect, for some funny reason.
I.e. Apple scrolls wrong. Instead of moving a tiny camera over the virtual paper, they move 10 miles of virtual paper past the tiny camera - yeah right Apple, as if that made any sense - check mate!
"If you create really only one singular compilation unit by recursively preprocessor-pasting everything into one c(pp) file (if that's what you're doing), you don't have that separation. Hypothetically, there could even be a bug in one c file A), referencing a variable by a name that does not exist in that file and was not intended, but that exists in c file B), and both are include-pasted into c file C), then code in A may have side effects in file B, although you don't include A in B, but they are pasted in an (un)lucky order in C - and you get really confused." - tinkeringdude
I'm probably making my code sound worse than it is, though I can imagine some coders would look at it and toss their breakfast (as I sort of feel when coming back to it cold, not so much after re-acclimating to it).
Since I implemented simple scoping in my HAL assembly language, I'm acutely aware of it when coding in other languages. For the sim C++ I've got a hive_pkg.cpp file full of constants and helper functions, and this gets #included after the C++ libraries so everything following can see that info (poor man's package, seeing as how C++ lacks such). The rest I do as classes if possible, functions if not, which helps to minimize namespace pollution. But classes give me the most trouble when doing things this way, due to declaration order dependencies and the general awkwardness of class header files - otherwise I would aim for everything being encapsulated in classes. Class dependencies, and the lack of a convenient "read only" access in them, more or less force me to have more "globals" (public state holders) than I would like inside certain classes, which seems kinda perverse.
I use project files in Quartus (FPGA coding) but notice sequence dependencies in there too. If I don't have the package file (System Verilog has a real packaging system that's very nice) at the top of the list the build fails. You know something's up when there are big honkering buttons in the project file dialog that reposition files up and down the list! I'm thinking of going with a single top file there too, with #includes to control the declaration order.
"I am still reluctant to try it, but I am keeping half an eye on what Rust is doing, supposedly the new "systems programming" language, which, IIRC, is also less primitive than C/C++ in those regards."
Rust interests me too. Though it still has pointers, which I loathe and avoid whenever possible. And "->" syntax always strikes me as clunky for some reason.
"Does it really *modify* the history, as in, the old entry from way back, or only modify "the history" in that it adds one entry?"
I'm looking at it again and it acts like there are two command histories, one that stores command lines that were executed, and another that stores things that haven't been executed. If you recall a past command and edit it but don't execute it, then use the up/down arrows to move away from it and then back, the command remains edited. You can edit any/all commands in the history buffer and the edits will stick even if you move around in the buffer. But as soon as you execute a command all of the edits you did in the history disappear.
"Poor man's package". There are "packages" at link time, if you will. You can certainly stuff several classes' implementations into one cpp file, if they belong together (or not, but...), and that will be a "package", just that you ought to know where it's corresponding header is located.
If you keep a folder structure that matches the logical "package" hierarchy, you can add the top level folder that contains the highest level packages' folders (with sub packages as sub folders, or finally, cpp files) as an "additional include path" in e.g. Eclipse managed build (or do the equivalent with makefiles - although IMO that technology hasn't aged well... but it's leaner to set up than newer build systems, I guess). Then you can reference things with #include "topLevelPackage123/subPackage.hpp" - i.e. you don't have to reference full paths in your cpp files, if "topLevelPackage123" is in a folder e.g. "packages", the latter of which you added to the "additional include directories" or whatever it may be called in whatever you end up using.
But classes give me the most trouble when doing things this way, due to declaration order dependencies and the general awkwardness of class header files - otherwise I would aim for everything being encapsulated in classes.
Well, but headers are supposed to solve those order dependencies. Even though it's quite a hack, which created a type of acrobatics of trying not to #include headers in other headers as much as possible, which creates extra cruft... (and for C++ standard library stuff, just don't even try to get around it via forward declarations if it's referenced in the header - they'll be nigh(?) impossible to get right (without going bonkers))
Ok, it doesn't solve circular dependencies between classes, although I'm not sure how those are ever necessary. One extra level of abstraction solves that (implementing a common interface). I very faintly remember having had that problem once; it could be there's another way around it, but since I never encountered it again, I don't remember.
(and self references of classes/structs work in C++ without dual namespace tricks like with C structs).
Class dependencies, and the lack of a convenient "read only" access in them, more or less force me to have more "globals" (public state holders) than I would like inside certain classes, which seems kinda perverse.
Could you elaborate on that, maybe with an example, of what you wanted to achieve, what's in your way, and your unfavored way around it?
Right now I'm not sure what this means, but maybe I'm just tired.
Rust interests me too. Though it still has pointers, which I loathe and avoid whenever possible. And "->" syntax always strikes me as clunky for some reason.
Pointers - the unsafe ones require the code using them to be marked unsafe, which can have ramifications for where it's allowed to execute, or something like that, right? (I only briefly read about it, and am mapping that onto how it works in C#, which could be wrong)
Well, "->" is two characters for one symbol, with "air" in between, yeah it looks weird. Have you tried the --> operator yet? It's really great, it moves a variable from its initial value towards the value it points to!
operator -->
"Could you elaborate on that, maybe with an example, of what you wanted to achieve, what's in your way, and your unfavored way around it?
Right now I'm not sure what this means, but maybe I'm just tired." - tinkeringdude
Say you have a class object (instance of the class) that holds and manages a lot of info that's germane to several other classes, or maybe even the project as a whole (I'm sure that right there tells you I'm doing it wrong! - but processors sims are like that, with more interconnect than logic). You want the class to be the sole writer of the info, but you want the world (or at least part of it) to see the values of those things. I could be totally wrong, but I believe you have to make the data private, then write a bunch of "helper" functions, one for each piece of data anything external to the class object wants to read. And if that class is within another class there's a whole 'nother layer of helper functions just to pipe the read-only data through to the outside. Along with "public" and "private", they could have implemented "read_only" or similar. The C++ writers seem to be enamored with OO in the abstract, with all sorts of exotic inheritance that I would bet almost no one really uses, to the exclusion of practical stuff like this. I get the whole data hiding thing and how it's generally good, and I hide as much local data as I reasonably can, but IMO the vast majority of the safety comes from local read/write, and non-local read-only access (this is how all HDLs encapsulate state, exactly the way real-life digital components do, an internal signal can instead be defined as an output port signal if anything needs to read it, and bidirectional read/write access is quite rare).
[insert_gif:old man yells at cloud]
Say you have a class object (instance of the class) that holds and manages a lot of info that's germane to several other classes, or maybe even the project as a whole (I'm sure that right there tells you I'm doing it wrong!
Well, there has to be one point where things are orchestrated, at least at startup.
You want the class to be the sole writer of the info, but you want the world (or at least part of it) to see the values of those things. I could be totally wrong, but I believe you have to make the data private, then write a bunch of "helper" functions, one for each piece of data anything external to the class object wants to read. And if that class is within another class there's a whole 'nother layer of helper functions just to pipe the read-only data through to the outside.
Without knowing the details: if that's a lot of info that is closely interrelated, and all managed / write-accessed by one and the same object, then could you not make a dedicated data type - a data object - out of it, which is *owned* by your central managing object? "Interested" outside observers would have to fetch from (or get assigned by) the manager object a const reference to the whole data object, via one accessor function. That will prevent, sorta, the observer objects from writing to the data object's members.
As far as C/C++ goes, anyway... some naughty cowboy coder might cast the constness away, which kinda undermines the whole thing, thanks to the urge for backward compatibility. Static code analysis should catch that, or since it's entirely your code, you just won't do it, right? (woe is him who needs to call Unix functions which spit at the concept of immutability of data and require read-buffer pointers which are not to const data...)
And I mean e.g. just a plain struct with data members. Or a vector of those, or whatever you need. No own functionality - the almighty manager might as well do initialization, if required - which saves the pain of the "rule of three" (or now "rule of five"). On that low-ish level, and given performance requirements, I see no value in implementing accessor functions for every little detail. Which they teach in OO intro courses, and I guess they have their place at high levels, where extensibility / implementation agnosticism are the main goals, and often kinda mimic "properties", a construct e.g. C# (and Object Pascal? *scratches head*) has.
And as I guess could be the case in such a processor simulation, there will be certain access patterns which have a high likelihood of being very similar very often, right? So it would make sense to replicate that order and proximity in time within the memory layout to utilize the cache. If you make it so that your data object has mostly "full bodied" data members, not pointers to those, closely replicating typical run-time access order, then when you create that object, it will be one heap allocation with everything in it right after one another - instead of a lot of fiddly little objects scattered across the virtual address space, something that following teachings from the days of the dawn of OO may too easily produce.
Maybe something like this:
try that right here (no idea if you could make a header file there, I suspect not, so let's pretend)
Inheritance - once the proverbial hammer in search of nails, but it has fallen out of favor (ok, some teaching institutions may not have heard the call yet) for blind usage, as opposed to composition (of objects from sub objects of (other) types). I like inheritance for some things, and it solves some things in an elegant way - although some of what it solves can now also be done via functions as function arguments (lambdas) in C++, partially awkward to use as they may be (in C++!). E.g. customizing the behavior of more or less generic algorithms with a little code snippet that really is only needed for expressing what you want it to do, instead of the hefty overhead (esp. in C++) of writing a class (that's likely to need to be implemented correctly for all circumstances of outside usage).
Especially, one does not need to go crazy with the *deepness* of inheritance hierarchies as was often done enthusiastically in the past... I don't think I ever had a hierarchy deeper than 5, if even that, but some Java stuff managed to enter 2 digits IIRC... that must be one hell to maintain (or understand).
I mostly even dislike inheriting from classes that do something. It sometimes can seem to save some work... but at some point it easily becomes a bullet with deferred ignition, directed right at one's own feet. I use it mostly only for interfaces. It can be nice to abstract away the implementation of something which is kinda heavy, or not available reliably enough to be run all the time in an automated test framework - and also, you don't want to test that thing, too, when you actually want to test some other module which uses that heavy thing as a dependency. So that dependee gets passed in a reference to an abstract base class (all virtual), and doesn't know whether it gets the real deal of its dependency, or an object roughly faking it well enough to fulfill its expectations (bound by the defined interface contract) - and makes the test pass. And keeping some of the modules platform-independent also works nicely that way, if the dependee never knows what exact implementation it's going to get - just something that fulfills the interface. It may not be so nice on too low a level, though, because of the virtual function table overhead.
Thanks for that Tinkeringdude! You're way deeper into SW than I'll ever be, so it's interesting to hear your observations, criticisms, and particularly your emotions, regarding the field. I suppose the creative process will always have frustrating components, but slogging through the minutiae of language features until you find they don't do what you want (or maybe worse: finding they almost do what you want) is really up there in terms of shit jobs (IMO). I can only take it in small doses, as the path through to success is often quite narrow, poorly documented, and never guaranteed (thank god for the internet and for the kind souls freely offering up their solutions and explanations).
There must be a sweet spot in terms of the basic complexity of programming language syntax and ability. You don't want every programmer reinventing the wheel for, say, strings (shades of C/C++) as that could easily make coming up to speed on someone else's code massively daunting - like almost learning another language. But you don't want to hide the basic stuff from them so hard that they find it difficult to produce a wheel when they need to. I suppose that's where standard libraries come in, providing basic containers and such. I imagine libraries are harder to produce than the language itself. Ignore basic library needs for too long and you end up with Boost and the like. Programming is so new and plastic that the dust is still settling on the basics. It's both a blessing and a curse that it can be so abstracted (some would say unmoored) from the underlying hardware - and even the underlying software!
=============
Great link from hacker News (https://news.ycombinator.com/news) today that I feel captures the field of Theremin development:
http://paulgraham.com/genius.html
Obsessive interest in a narrow subject can take you a long, long way, particularly if you are creating something and have the ability and background. Those who aren't geniuses (me!) can often contribute through sheer determination, though that determination must almost necessarily be driven by deep personal interest. Lord knows there are almost no monetary drivers for Theremin development, which is maybe a good thing for labors of love. No amount of money could have made me care enough to really understand, say, higher order CIC filtering (my poor brain doesn't learn very easily). This is why I had to leave my (pretty good) paying job, I just didn't care enough about what they were working on to really dig into the fundamentals of what was going on (networking is a total mess) so I increasingly felt like something of a hack / imposter, even though my basic digital and analog EE skills were (IMO) constantly improving. Industry is weird, much weirder than I ever imagined it could be. Or maybe it's me. I just always wanted to do music synthesis work.