Well, seeing as neuroscience hasn't really progressed to the point of understanding how the brain processes information, or indeed how the brain encodes information in its electrical firing, such an ambitious aim of bona fide AI is a LONG way off. This is despite the massive amounts of money pumped into this area of science over the last 15 years or so (the 1990s were known as the Decade of the Brain).
Besides which, the computational power of the human brain exceeds that of current supercomputers by many orders of magnitude. IBM's Blue Gene supercomputers are now being put to use by Markram et al. in the Blue Brain project (http://en.wikipedia.org/wiki/Blue_Brain). The idea is to simulate a small functional unit of the rat brain of around 10,000 (10^4) neurons, and even these simulations do not run in real time. For comparison, the human brain is thought to contain approximately 10^11 neurons.
Interesting attempts have been made to simulate how the brain computes, mostly in theoretical psychology, in studies of how the brain processes language using artificial neural networks. Even after many years of study this remains unclear, and it has not been decisively concluded whether the brain uses symbolic representations (e.g. a particular set of neurons standing for a word) or distributed processing to represent words, semantics and so on.
Current artificial neural networks usually use 'point neuron' models (McCulloch and Pitts 1943; http://www.cns.bu.edu/~guenther/encyclopedia.pdf), which assume that a neuron can be simplified to a single point in space receiving electrical inputs, both positive and negative, over a set time period. If a threshold is reached within that time frame, the neuron 'fires': it sends a signal to the neurons downstream of it in the network. This is a massive simplification, and it ignores much of how a neuron's structure contributes to its processing of inputs and logical operations.
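To make the idea concrete, here's a minimal sketch of a McCulloch-Pitts style point neuron; the weights and threshold values are illustrative, not from any particular model:

```python
def point_neuron(inputs, weights, threshold):
    """Sum the weighted inputs; 'fire' (return 1) if the total reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With a threshold of 2 and unit weights, this neuron behaves like an AND gate:
print(point_neuron([1, 1], [1, 1], 2))  # both inputs active -> fires: 1
print(point_neuron([1, 0], [1, 1], 2))  # only one input active -> silent: 0
```

Negative weights play the role of inhibitory inputs, pushing the sum away from the threshold - which is about all the biological nuance this simplification retains.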
Donald Hebb (1949) introduced the idea of a learning mechanism for neurons, proposing that when a neuron fires, the connections from the inputs that contributed to reaching its threshold are strengthened, so they have a greater effect in the future, whilst those that did not contribute are weakened. This allows neurons to organise themselves to form associations to certain inputs.
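A toy version of that Hebbian rule might look like the following; the learning rate and starting weights are made up for illustration:

```python
def hebbian_update(weights, inputs, fired, rate=0.1):
    """Strengthen weights of inputs that were active when the neuron fired, weaken the rest."""
    if not fired:
        return weights  # the rule only applies when the neuron actually fires
    return [w + rate if x else w - rate for w, x in zip(weights, inputs)]

weights = [0.5, 0.5]
# Repeatedly present the pattern [1, 0] while the neuron fires:
for _ in range(5):
    weights = hebbian_update(weights, [1, 0], fired=True)
print(weights)  # first weight has grown, second has shrunk
```

After a few presentations the neuron becomes biased towards the first input - a crude picture of how repeated co-activity can carve an association into the connection strengths.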
For a review of the philosophical ideas behind AI, look at strong and weak AI. Strong AI is the idea that machines can be conscious; weak AI, that algorithmic processes could simulate human thinking whilst not actually being conscious - see Alan Turing's Turing test, and philosophers such as John Searle for an argument against strong AI (the Chinese Room argument).