Sadly I don't have time right now to continue, but I will post a few brief responses.
Do you have a link? I couldn't find a paper on a model that fully simulates a nervous system.
1)
Arena, P. et al. (2010) An insect brain computational model inspired by Drosophila melanogaster: Simulation results. IJCNN, pp. 1-8.
Bryden, J.A. and Cohen, N. (2004) A simulation model of the locomotion controllers for the nematode Caenorhabditis elegans. Proceedings of the Eighth International Conference on the Simulation of Adaptive Behavior, pp. 183-192.
Mailler, R. et al. (2010) A Biologically Accurate 3D Model of the Locomotion of Caenorhabditis elegans. Biosciences.
There is also a lot of material relating to the Blue Brain Project, which I know personally.
http://en.wikipedia.org/wiki/Blue_Brain_Project
They are currently modelling a rat brain and plan to model a human brain by 2023.
You have to realize that modelling a complete CNS is not very interesting scientifically, because the vast majority of the brain is doing mundane things like beating the heart, controlling digestion, and operating the lungs. Scientists typically study separable neural functions such as object recognition, the neuromorphological basis of optical flow in insects, place cells in the rat hippocampus, etc. There is just no interest in modelling the complete CNS.
My point was that our minds are not independent units. They're not just the result of the code written in the genes along with the impact of the environment. Without other minds to shape it, a child's mind does not even achieve consciousness; it only acts on basic instincts and needs. I would go as far as saying that a mind is an amalgamation of other minds - that's why feral children think and act like the animals that helped raise them, despite having a much more powerful brain to work with.
1) "Without other minds that shape a child's mind, it does not even achieve consciousness" - that is a very strong statement. Do you have any proof of that? It certainly goes against the mainstream scientific consensus.
2) Feral children soon adapt to relatively normal behaviors. Even if they don't, acting like a human is not a requirement for consciousness. Feral children are just as conscious and self-aware as any other human.
3) Interaction with other humans is part of the environment.
4) Your claim is merely that agents must be social to become conscious. Beyond the fact that there is no proof of that, it is trivial to make machines social - that was the focus of my PhD. Swarm intelligence, collective systems, distributed learning, imitation learning, etc. are huge research fields.
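To illustrate how little machinery "social" behavior requires, here is a toy sketch of my own (hypothetical, not taken from any of the cited work): agents repeatedly imitate the majority behavior among a few randomly sampled peers, a minimal form of social/imitation learning of the kind those fields study.

```python
import random

def imitation_step(behaviors, k=3, rng=random):
    """One round of social learning: each agent observes k random
    peers and adopts the most common behavior among them."""
    new = []
    for i in range(len(behaviors)):
        others = [b for j, b in enumerate(behaviors) if j != i]
        peers = rng.sample(others, k)
        # adopt the majority behavior among the observed peers
        new.append(max(set(peers), key=peers.count))
    return new

# A population with a slight majority of behavior "A".
random.seed(0)
pop = ["A"] * 6 + ["B"] * 4
for _ in range(20):
    pop = imitation_step(pop)
# Repeated imitation typically drives the group toward a single
# shared behavior - a crude social norm emerging from local copying.
print(pop)
```

This is obviously not consciousness, only a demonstration that the social ingredient itself - agents whose behavior is shaped by other agents - is straightforward to engineer.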
This thread is about Hawking's belief that AI may pose a threat to us because "It would take off on its own, and re-design itself at an ever increasing rate." I'm not convinced that any system that makes intelligent choices can achieve consciousness. The inefficient processes that happen in our brains, like the ones you mentioned, along with the influence of fully functional similar minds, may be prerequisites for the emergence of consciousness. If this hypothesis is true, then any potential AI must be human-like, which leads to many complications, some of which I've mentioned.
1) Hawking's warning does not require artificial consciousness; it may, however, be a side effect of achieving that.
2) I think you are on your own with your hypothesis, really. Do you have any evidence for why it is not possible to develop an intelligent system without particular human traits? The reason the human brain is often suboptimal is that, given existing neurophysiology, adequate simplifications require fewer resources and are faster to evolve in a given environment. This is often fairly meaningless now - humans evolved from much more primitive animals, and much of the brain is adapted from our ancestors. Given the current environment, much of that evolutionary baggage is really just an inefficient waste. A completely fresh, clean design of the human brain could be much smaller and more powerful. A good example is mammalian vision: squids have a far better design for eyes because their evolutionary environment selected for superb vision. Ours didn't, so we have fairly bad night vision. There is nothing special about human eyes.
Another possibility is that consciousness is an evolutionary adaptation, one that's coded in our genes, in which case understanding it before coding it ourselves is mandatory.
There is no logic here. For example, object recognition was an evolutionary adaptation and is encoded in our genes. However, engineers developed object recognition with no knowledge of how it is done in humans; in fact, the opposite occurred: knowledge of how engineers solved the problem led the way to understanding how the brain worked. The latter wasn't a requirement.
Finally, it could be that any complex system that makes intelligent choices inevitably achieves consciousness at some point, although that would surprise me. Every machine we build, every piece of code we write, they're all extensions of our bodies/brains, no different than a hammer is an extension of the hand. They do things much better than our brains (or hands) but how and why would they suddenly stop doing what we intend them to do?
I didn't say that all complex systems will achieve consciousness, but that complex systems subject to certain conditions might - conditions typically taken to be existing in a physical world about which decisions must be made.
I can also flip your question around, why should a pile of chemicals suddenly become conscious in a human brain?
Tools often stop doing what we intend them to do; my computer does all the time.