Very interesting article published in Technology Review, written by David Gelernter (professor of computer science at Yale University) about the understanding of AI and its future.
Credit: Eric Joyner
Artificial intelligence has been obsessed with several questions from the start: Can we build a mind out of software? If not, why not? If so, what kind of mind are we talking about? A conscious mind? Or an unconscious intelligence that seems to think but experiences nothing and has no inner mental life? These questions are central to our view of computers and how far they can go, of computation and its ultimate meaning–and of the mind and how it works.
The cognitive spectrum suggests that analogies are created by shared emotion–the linking of two thoughts with shared or similar emotional content.
To build a simulated unconscious mind, we don’t need a computer with real emotions; simulated emotions will do. Achieving them will be hard. So will representing memories (with all their complex “multi-media” data).
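Just to make the idea concrete, here is a toy sketch (entirely my own illustration, with made-up memories and emotion values, not anything from the article): each memory carries a simulated emotion vector, and an "analogy" is simply the stored memory whose emotional signature is closest to the current thought.

```python
import math

# Hypothetical sketch: each memory is tagged with a simulated emotion
# vector (valence, arousal, fear). No real feelings involved, just numbers.
memories = {
    "a stormy night at sea":           (-0.7, 0.9, 0.8),
    "a quiet walk in the garden":      ( 0.8, 0.1, 0.0),
    "an exam you forgot to study for": (-0.5, 0.6, 0.95),
}

def cosine(a, b):
    # Cosine similarity between two emotion vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def analogize(current_emotion):
    # Retrieve the memory whose simulated emotion best matches
    # the emotional content of the current thought.
    return max(memories, key=lambda m: cosine(memories[m], current_emotion))

# A tense, fearful thought recalls another tense, fearful memory.
print(analogize((-0.65, 0.85, 0.85)))  # → a stormy night at sea
```

Obviously real memories are rich "multi-media" data and real emotional tagging is far subtler than three numbers, which is exactly why Gelernter says achieving simulated emotions will be hard.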
But if we take the route Turing hinted at back in 1950, if we forget about consciousness and concentrate on the process of thought, there’s every reason to believe that we can get AI back on track–and that AI can produce powerful software and show us important things about the human mind.
I’m not yet sure if I’m a “cognitivist” or an “anticognitivist”, but the patterns our brain uses to understand information (as described by Jeff Hawkins) can, in my view, be reproduced computationally.
Although I’m not much of a fan of creating a human-like intelligence that would not actually be human, I think technology is here to surprise us…
And as long as we prevent this new intelligence from understanding how stupid we are (war, climate, …), we might be safe from it; otherwise it will surely destroy us to save the world (cf. Terminator) :p