Actually, as fascinating as the subject is, I don't totally care. But I am reading How Brains Think, by William H. Calvin, and not just for my neuroscientific education. I have zeroed in on a broad research area for my MS, and it has to do with thinking. So, I am learning evolutionary explanations for how this pile of meat feels conscious and how learning works. It is challenging me, but I don't mind. So Nick, since you lent me this book 5 years ago or so, I hope you are happy to know that I got around to this eventually.
My research topic revolves around learning and thinking. I had been chewing over topics in artificial intelligence, and came up with it while walking down the hill after a lecture by the resident network security professor at USU, Dr. Erbacher. Basically, the department is trotting out all the professors to talk about their research. I have been listening carefully to all of them to see whether they like to think about what I want to study.
So far the closest to my interests has probably been the lovable, slightly wild-eyed professor Hugo de Garis, who wants to bring about Ray Kurzweil's Singularity and phase out the human race in favor of superintelligent machines. He makes artificial brains by evolving neural networks at hardware speeds, using FPGAs, and then downloads them to PCs that can run the programs. He is making a mine-seeking robot dog for DARPA, but he is mostly just interested in the brains. He gave a spellbinding, wide-ranging lecture on his research and the coming Singularity (when artificial intelligence will be evolving too fast for us to predict the future of the technology or even [horror movie organ interlude] the human race).
My topic was inspired indirectly by Dr. Erbacher though. He visualizes network traffic by changing connection logs and data flow into pictures. This makes it easier to spot patterns and detect network attacks. His lecture was excellent too, but one thing bothered me: his visualizations depend on a human looking at the picture and interpreting it (as far as I understood him). I thought afterwards about whether computers can interpret things, even things they've never seen before. I came around to the idea that computers should be able to create abstractions, but can't.
I think I talked about this earlier, but the trend in the way we interact with computers is toward higher-level abstractions. You used to have to speak 1s and 0s to work with a computer; as time has gone on, we have created ways to summarize a lot of those 1s and 0s when we talk to computers. Among many other things, this trend makes it easier over time to create useful software and information.
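To make that trend concrete, here is a toy sketch in Python (my own illustration, not anything from the lectures): the same addition expressed at three levels, where each layer summarizes the 1s and 0s beneath it.

```python
# Level 0: raw bits -- what the machine actually sees.
a_bits, b_bits = "00000101", "00000011"   # 5 and 3 as 8-bit binary strings

# Level 1: interpret the bits as integers (one layer of summary).
a = int(a_bits, 2)
b = int(b_bits, 2)

# Level 2: the high-level abstraction we actually type today.
total = a + b

print(total)                   # 8
print(format(total, "08b"))    # back down to the bits: 00001000
```

Every layer here was invented by a person; the machine just executes whichever summary we hand it.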
Unfortunately, the burden of pulling computers toward intelligence has so far fallen entirely on people. Even learning approaches to artificial intelligence, which have shown some great successes, have run into a ceiling, summarized as follows by Stuart Russell and Peter Norvig in Artificial Intelligence: A Modern Approach:
Very powerful logical and statistical techniques have been developed that can cope with quite large problems, often reaching or exceeding human capabilities in the identification of predictive patterns defined on a given vocabulary. On the other hand, machine learning has made very little progress on the important problem of constructing new representations at levels of abstraction higher than the input vocabulary.
In other words, computers are not creative in the sense of finding patterns across disparate ideas, naming them, and reusing them to think about a problem more simply. If we want computers to have a knowledge base to draw on to solve problems, we have to put it all in by hand.
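A toy Python sketch (mine, not Russell and Norvig's) of what that ceiling looks like in practice: a learner can fit a pattern over a vocabulary it is given, but the vocabulary itself is supplied by a human.

```python
# Training data: a single hand-chosen feature (a number) and hand-chosen labels.
samples = [(1, "spam"), (2, "spam"), (8, "ham"), (9, "ham")]

# "Learn" a threshold over the given feature -- a trivial 1-D classifier.
threshold = sum(x for x, _ in samples) / len(samples)  # mean = 5.0

def classify(x):
    return "spam" if x < threshold else "ham"

# The feature, the labels, and the very idea of "spam vs. ham" all came
# from a person. Nothing here invents a new, higher-level concept on its own.
print(classify(3))   # spam
print(classify(7))   # ham
```

The learning step works fine; what is missing is any mechanism for the program to construct a representation above the one it was handed.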
What would a computer do if it could make up words? Learn like a person, but voraciously, at DSL speeds, by reading the Wikipedia? Reveal hidden mysteries? Talk back?
Anyway, I think that is what I want to do for the next two years. That's why I'm reading How Brains Think: I actually need to know, in order to make a computer create abstractions. Was it safe to unleash me on this problem? God alone knows.
That is, if any of the professors will take me.