If you've never read any Hans Moravec, this is a nice paper from 1998:
ftp://io.usp.br/los/IOF257/moravec.pdf
I personally can't get enough of this stuff (AI, the "singularity", etc.), and Hans is a great writer who is active in the field. The paper is full of back-of-the-envelope comparisons between the various computing structures found in nature and those built by humans. Where else are you going to find charts containing monkeys, whales, bacteria, and IBM mainframes simultaneously?
His observations about the decades-long stasis in the computing power readily available to researchers are sadly hilarious. One quote in particular rings true:
"The ratio of memory to speed has remained constant during computing history. The earliest electronic computers had a few thousand bytes of memory and could do a few thousand calculations per second. Medium computers of 1980 had a million bytes of memory and did a million calculations per second. Supercomputers in 1990 did a billion calculations per second and had a billion bytes of memory. The latest, greatest supercomputers can do a trillion calculations per second and can have a trillion bytes of memory. Dividing memory by speed defines a "time constant," roughly how long it takes the computer to run once through its memory. One megabyte per MIPS gives one second, a nice human interval. Machines with less memory for their speed, typically new models, seem fast, but unnecessarily limited to small programs. Models with more memory for their speed, often ones reaching the end of their run, can handle larger programs, but unpleasantly slowly."
I've been assembling PCs since the 286 days, and unless memory was scrimped on in the first place, I've never found myself actually adding to or upgrading the memory. If one installs enough to handle things comfortably at build time, that's generally sufficient to see one through to the next upgrade, where processor and memory and all the support chips hit the dumpster for whatever new came down the pike. So I've never really understood why the memory isn't fully integrated into the processor. I mean, why place this slow, troublesome bottleneck connection in the critical path, one that then needs to be sped up via caching and other awkward means? Maybe it's a fab thing, but in many ways modern processors don't make very much sense.
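For what it's worth, the gap that all the caching papers over is easy to measure yourself. Below is a toy benchmark of my own (nothing from the paper; the buffer size and stride are arbitrary assumptions) that does the same number of additions over an array far larger than any cache, once sequentially, where hardware prefetchers hide most of the DRAM latency, and once hopping 4 KiB at a time, where nearly every access has to cross that slow memory connection:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (16L * 1024 * 1024)   /* 16M ints = 64 MiB, bigger than any cache */
    #define STRIDE 1024L            /* 1024 ints = 4 KiB between accesses */

    int main(void) {
        int *a = malloc(N * sizeof *a);
        if (!a) return 1;
        for (long i = 0; i < N; i++) a[i] = 1;

        /* Sequential pass: prefetch-friendly, streams through memory. */
        clock_t t0 = clock();
        long sum1 = 0;
        for (long i = 0; i < N; i++) sum1 += a[i];
        double seq = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* Strided pass: same N additions, but each access lands 4 KiB
           from the last, so almost every one misses the caches. */
        t0 = clock();
        long sum2 = 0;
        for (long s = 0; s < STRIDE; s++)
            for (long i = s; i < N; i += STRIDE) sum2 += a[i];
        double strided = (double)(clock() - t0) / CLOCKS_PER_SEC;

        printf("sequential %.2f s, strided %.2f s (sums %ld %ld)\n",
               seq, strided, sum1, sum2);
        free(a);
        return 0;
    }

On a typical desktop the strided pass runs several times slower despite doing identical work, and that slowdown is the bottleneck the whole cache hierarchy exists to hide.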