Stephen Hawking warns artificial intelligence could end mankind

I'm glad someone noticed I wasn't just trolling :)

If we are going to incorporate AI into our lives, then that same AI is going to look right back at us too: not just at an intelligent creature to serve, but at something to study (we are supposed to be at the top of the food chain, after all).

Just as we have studied what previous generations have considered our maker, God.


What will an AI make of our moral decisions? Do we let one person die if it means we can save lots of people? Is it OK to go to war if there is some political benefit? Much of what we do comes down to irrational cultural and selfish reasons, something an AI isn't likely to agree with.

This is a digression, because it doesn't deal with the topic and is far more of a sociological and philosophical debate, but as I stated in another of my previous posts:

The big issue with the automation of everyday activities and the management of human NEEDS is that we would no longer be bound by control through monetary/material need.

Currently, materialism is a major part of keeping society in check. If I misbehave, I lose the ability to earn or own, and can't have my iPhone, Xbox, car, season ticket, or house. The existentialism of the self is that I need and WANT (I think that makes sense).

If you have all of those things that you WANT and NEED, then what moves large groups of people to do things? Well, there are a few other pillars, but religion and philosophy are the big methods of controlling action.
Religion has been used to control, or at least influence, action for a long time; free thinking has been around for far less time, specifically in relation to sociology and, more importantly, society.

So you might argue that free-thinking people, essentially following the guidance of machines, would be opposed to religion; or, worse, the machines might become the manifestation of a religion. We could end up worshiping the machine. Deus ex machina.

Again, this dips too far into the realms of philosophy, but it is a valid point if you are talking about society's role in regards to automation.
 
Not really a true definition of AI. You can explicitly program the behavior.

An AI algorithm is any algorithm that processes an input and derives an output that is deemed an intelligent process. There is no good definition of intelligence.

Your definition is much closer to the machine learning field within AI. That is a big part of AI, and is often what people think of, but it is not the whole picture. For example, in areas such as natural language processing, computational linguistics, computer vision, automated reasoning, and knowledge base engineering, it is common to explicitly program/encode all behaviors by hand. In NLP you explicitly program that "Bob" is a proper noun and "the" is a definite article. In CV you explicitly program the SIFT feature extraction algorithm and use a Monte Carlo algorithm to track the movements of an object. In automated reasoning you explicitly program the Davis-Putnam algorithm to check the validity of a propositional logic formula.
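
To make the "explicitly programmed" point concrete, here is a minimal sketch of a hand-coded tagger in Python. The tiny lexicon and the capitalisation heuristic are invented for illustration; they are not taken from any real NLP toolkit.

```python
# Minimal sketch of hand-coded (non-learned) NLP tagging.
# The lexicon and rules are invented for illustration only.

LEXICON = {
    "bob": "PROPER_NOUN",       # explicitly encoded, not learned from data
    "the": "DEFINITE_ARTICLE",
    "runs": "VERB",
}

def tag(sentence: str) -> list[tuple[str, str]]:
    """Tag each token using hand-written rules rather than a trained model."""
    tagged = []
    for token in sentence.split():
        word = token.strip(".,!?")
        if word.lower() in LEXICON:
            tagged.append((word, LEXICON[word.lower()]))
        elif word and word[0].isupper():
            tagged.append((word, "PROPER_NOUN"))  # capitalisation heuristic
        else:
            tagged.append((word, "UNKNOWN"))
    return tagged

print(tag("Bob runs to the shop"))
# [('Bob', 'PROPER_NOUN'), ('runs', 'VERB'), ('to', 'UNKNOWN'),
#  ('the', 'DEFINITE_ARTICLE'), ('shop', 'UNKNOWN')]
```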

Oh my god you're completely right. My brain is fried.
 
http://www.nickbostrom.com/papers/survey.pdf

Page 6.
41% of AI researchers believe we will never build machines that simulate every aspect of human intelligence, while 41% believe it will take at least 50 years before we do so.

A high-level intelligence machine, defined as one that can carry out most human professions at least as well as a typical human, is likely. One that simulates human intelligence, not so much.

I'm surprised that as many AI researchers as that think it is possible. Replicating human intelligence is completely irrelevant, though. As I said, we can't currently replicate how a bird flies, but we can design planes that go faster than the speed of sound, travel around the world without stopping, and carry massive payloads, and we can do all of this without even having to sit in the plane. We want machines that are intelligent; we don't need a computer that accurately replicates a human brain. What you are after is what is called "strong" AI, that is, copying the mechanisms used in natural systems to achieve intelligent behavior. All that engineers care about is "weak" AI, i.e. the actual process is irrelevant but the outcome is important.

Is there a computer model for the nervous system of any animal, no matter how simple, such as those that have hundreds of neurons and thousands of synapses? If not, why not?
Yes, there are computer models of Drosophila, slugs, and other primitive animal CNSs. In particular, recent research on deep learning is creating networks with hundreds of thousands of neurons that model various neural systems and animal brains.
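
For a flavour of the building blocks such models are assembled from, here is a toy leaky integrate-and-fire neuron in Python. The parameters are illustrative only and not taken from any of the models mentioned above.

```python
# Toy leaky integrate-and-fire neuron, the kind of unit many CNS
# simulations are built from. All parameters are illustrative only.

dt, duration = 0.1, 100.0                  # time step and run length (ms)
tau = 10.0                                 # membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # potentials (mV)
i_input = 20.0                             # constant input drive (mV)

v = v_rest
spike_times = []
for step in range(round(duration / dt)):
    # the potential leaks back toward rest while integrating the input
    v += dt * (-(v - v_rest) + i_input) / tau
    if v >= v_thresh:                      # threshold crossing: spike, reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {duration} ms")
```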

No, our minds can be linked through empathy or body language, not magic. If you're in a group and you yawn, you automatically trigger a few other yawns. Isolation does interesting things to the mind, which suggests our minds are not as independent as we like to think they are:

http://www.bbc.com/future/story/20140514-how-extreme-isolation-warps-minds

I don't understand what you are trying to get at here. Yes, humans communicate, often in implicit forms. So what? It is trivial to do that with an artificial system, and the fact that your consciousness is not aware of the communication is irrelevant.

A car is the answer to the question "How do I get from A to B?", which is why it's easy to build one. What's the question that has the human mind as the answer?

Why is the human mind the answer? We want to create intelligent systems. A human is one possible example; it is not the ultimate goal. We already know human brains are incredibly weak at certain forms of thought processing, and there is no desire to replicate everything a human brain does from an engineering PoV, merely scientific curiosity. We could build systems far more intelligent than a human while knowing barely any more about human brains than we do now.

What makes you think that building machines that solve problems will lead to stumbling upon consciousness at some point?

Firstly, I don't claim that it does; I'm just reporting on the state of the art from people who research this professionally. Most people in the field believe that consciousness is a natural outcome of a complex system, embodied in the physical world, that requires a world model to produce intelligent behaviors from sensory inputs. They believe that consciousness is actually a continuous property, emergent from any system that tries to make intelligent decisions. Many seemingly complex processes emerge inherently from the interaction of much simpler deterministic systems.
From Wiki:
"Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC. Proponents of AC believe it is possible to construct machines (e.g., computer systems) that can emulate this NCC interoperation."
 
The people who can't see it happening are stumbling over made up and ill-defined terminology like 'consciousness', 'intelligence', 'life', 'identity', 'personality', 'soul', 'emotion', etc.

If you simply think of human beings as not particularly special, but rather a particularly fortunate strand of chemicals that has been erroneously self-replicating since time immemorial (with all of the terminology above mere concoctions/by-products of the human psyche/condition), i.e. everything derives from the physical; then it is not so difficult to imagine that the same wheels could be set in motion using other elements/methods.
 
if AI did wipe us all out.
Good always prevails over evil, no matter how long it takes. Mankind has not even found a cure for the common cold virus, let alone built artificial intelligence that would become self-aware and destroy human beings. Lol, Hollywood and bad robots.
 
Funnily enough, I'm watching a film called Automata, about robots that have two protocols:
1) Humans must not be harmed
2) A robot can't self repair or repair another robot

The robots have just broken protocol 2.
 
The people who can't see it happening are stumbling over made up and ill-defined terminology like 'consciousness', 'intelligence', 'life', 'identity', 'personality', 'soul', 'emotion', etc.

If you simply think of human beings as not particularly special, but rather a particularly fortunate strand of chemicals that has been erroneously self-replicating since time immemorial (with all of the terminology above mere concoctions/by-products of the human psyche/condition), i.e. everything derives from the physical; then it is not so difficult to imagine that the same wheels could be set in motion using other elements/methods.

This +1

Watch this space :p
 
Yes, there are computer models of Drosophila, slugs, and other primitive animal CNSs. In particular, recent research on deep learning is creating networks with hundreds of thousands of neurons that model various neural systems and animal brains.

Do you have a link? I couldn't find a paper on a model that fully simulates a nervous system.

I don't understand what you are trying to get at here. Yes, humans communicate, often in implicit forms. So what? It is trivial to do that with an artificial system, and the fact that your consciousness is not aware of the communication is irrelevant.

My point was that our minds are not independent units. They're not just the result of the code written in the genes along with the impact of the environment. Without other minds to shape a child's mind, it does not even achieve consciousness; it only acts on basic instincts and needs. I would go as far as saying that a mind is an amalgamation of other minds - that's why feral children think and act like the animals that helped raise them, despite having a much more powerful brain to work with.


Why is the human mind the answer? We want to create intelligent systems. A human is one possible example; it is not the ultimate goal. We already know human brains are incredibly weak at certain forms of thought processing, and there is no desire to replicate everything a human brain does from an engineering PoV, merely scientific curiosity. We could build systems far more intelligent than a human while knowing barely any more about human brains than we do now.


Firstly, I don't claim that it does; I'm just reporting on the state of the art from people who research this professionally. Most people in the field believe that consciousness is a natural outcome of a complex system, embodied in the physical world, that requires a world model to produce intelligent behaviors from sensory inputs. They believe that consciousness is actually a continuous property, emergent from any system that tries to make intelligent decisions. Many seemingly complex processes emerge inherently from the interaction of much simpler deterministic systems.
From Wiki:
"Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC. Proponents of AC believe it is possible to construct machines (e.g., computer systems) that can emulate this NCC interoperation."

This thread is about Hawking's belief that AI may pose a threat to us because "It would take off on its own, and re-design itself at an ever increasing rate." I'm not convinced that any system that makes intelligent choices can achieve consciousness. The inefficient processes that happen in our brains, like the ones you mentioned, along with the influence of fully functional similar minds, may be prerequisites for the emergence of consciousness. If this hypothesis is true, then any potential AI must be human-like, which leads to many complications, some of which I've mentioned.

Another possibility is that consciousness is an evolutionary adaptation, one that's coded in our genes, in which case understanding it before coding it ourselves is mandatory.

Finally, it could be that any complex system that makes intelligent choices inevitably achieves consciousness at some point, although that would surprise me. Every machine we build, every piece of code we write, they're all extensions of our bodies/brains, no different than a hammer is an extension of the hand. They do things much better than our brains (or hands) but how and why would they suddenly stop doing what we intend them to do?
 
Sadly I don't have time right now to continue but I will put a few brief responses.

Do you have a link? I couldn't find a paper on a model that fully simulates a nervous system.
1)
Arena, P. et al. (2010) "An insect brain computational model inspired by Drosophila melanogaster: simulation results." IJCNN, pp. 1-8.

Bryden, J.A. and Cohen, N. (2004) "A simulation model of the locomotion controllers for the nematode Caenorhabditis elegans." Proceedings of the Eighth International Conference on the Simulation of Adaptive Behavior, pp. 183-192.

Mailler, R. et al. (2010) "A Biologically Accurate 3D Model of the Locomotion of Caenorhabditis elegans." Biosciences.


There is also a lot of material relating to the Blue Brain Project, which I know personally.
http://en.wikipedia.org/wiki/Blue_Brain_Project
They are currently modelling a rat brain and plan to model a human brain by 2023.


You have to realize that modelling a complete CNS is not very interesting scientifically, because a vast majority of the brain is doing mundane things like beating the heart, controlling digestion, and operating the lungs. Scientists typically study separable neural functions, such as object recognition, the neuromorphological basis of optical flow in insects, place cells in the rat hippocampus, etc. There is just no interest in modelling the complete CNS.

My point was that our minds are not independent units. They're not just the result of the code written in the genes along with the impact of the environment. Without other minds to shape a child's mind, it does not even achieve consciousness; it only acts on basic instincts and needs. I would go as far as saying that a mind is an amalgamation of other minds - that's why feral children think and act like the animals that helped raise them, despite having a much more powerful brain to work with.
1) "Without other minds that shape a child's mind, it does not even achieve consciousnes" that is a very strong statement, do you have any proof of that as it certainly goes against the mainstream scientific consensus.
2) Feral children soon adapt to relatively normal behaviors. Even if they don't, acting like a human is not a requirement for consciousness.Feral children are just as conscious and self aware as any other human.
3) Interaction with other humans is part of the environment.
4) Your claim is merely that agents must be social to become conscious. Beyond the fact that there is no proof of that, it is trivial to make machines social; that was the focus of my PhD. Swarm intelligence, collective systems, distributed learning, imitation learning, etc. are huge research fields (see the sketch below).
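
As a toy illustration of how simple it is to make artificial agents "social", here is a minimal consensus sketch in Python: agents repeatedly pull their state toward that of randomly met neighbours and converge on a shared value. The update rule and parameters are invented for illustration, not drawn from any particular paper.

```python
import random

# Minimal collective-behaviour sketch: pairwise averaging drives a swarm
# of agents toward consensus. Rule and parameters are illustrative only.

N_AGENTS, STEPS, RATE = 20, 500, 0.5
opinions = [random.uniform(0.0, 1.0) for _ in range(N_AGENTS)]
print(f"initial spread: {max(opinions) - min(opinions):.3f}")

for _ in range(STEPS):
    i, j = random.sample(range(N_AGENTS), 2)   # two agents meet at random
    mid = (opinions[i] + opinions[j]) / 2      # they move toward each other,
    opinions[i] += RATE * (mid - opinions[i])  # a crude form of social
    opinions[j] += RATE * (mid - opinions[j])  # learning by imitation

print(f"spread after {STEPS} steps: {max(opinions) - min(opinions):.3f}")
```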

This thread is about Hawking's belief that AI may pose a threat to us because "It would take off on its own, and re-design itself at an ever increasing rate." I'm not convinced that any system that makes intelligent choices can achieve consciousness. The inefficient processes that happen in our brains, like the ones you mentioned, along with the influence of fully functional similar minds, may be prerequisites for the emergence of consciousness. If this hypothesis is true, then any potential AI must be human-like, which leads to many complications, some of which I've mentioned.
1) Hawking's warning does not require artificial consciousness; it may, however, be a side effect of achieving it.
2) I think you are on your own with your hypothesis, really. Do you have any evidence that it is not possible to develop an intelligent system without particular human traits? The reason the human brain is often suboptimal is that, given existing neurophysiology, adequate simplifications require fewer resources and are faster to evolve in the given environment. It is often fairly meaningless: humans evolved from much more primitive animals, and much of the brain is adapted from our ancestors. Given the current environment, much of that evolutionary baggage is really just inefficient waste. A completely clean, fresh design of the human brain could be much smaller and more powerful. A good example is mammalian vision: squids have a far better eye design because their evolutionary environment selected for superb vision. Ours didn't, so we have fairly bad night vision. There is nothing special about human eyes.

Another possibility is that consciousness is an evolutionary adaptation, one that's coded in our genes, in which case understanding it before coding it ourselves is mandatory.

There is no logic in that. For example, object recognition is an evolutionary adaptation and is encoded in our genes. However, engineers developed object recognition with no knowledge of how it is done in humans; in fact, the opposite occurred: knowledge of how engineers solved the problem led the way to understanding how the brain works. The latter wasn't a requirement.
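
As a concrete example of recognition engineered with no reference to biology, here is a minimal normalised cross-correlation template matcher in Python, a classic pre-deep-learning technique. The toy arrays stand in for real images.

```python
import numpy as np

# Sketch of "engineered" recognition: normalised cross-correlation
# template matching, which owes nothing to how brains see.

def match(image: np.ndarray, template: np.ndarray) -> tuple[int, int]:
    """Return the top-left corner where the template best matches the image."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((p * t).sum())       # correlation with the template
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

tmpl = np.array([[0.0, 1.0],
                 [1.0, 0.0]])                  # a tiny checkerboard pattern
img = np.zeros((8, 8))
img[3:5, 4:6] = tmpl                           # hide the pattern in the image
print(match(img, tmpl))                        # -> (3, 4)
```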

Finally, it could be that any complex system that makes intelligent choices inevitably achieves consciousness at some point, although that would surprise me. Every machine we build, every piece of code we write, they're all extensions of our bodies/brains, no different than a hammer is an extension of the hand. They do things much better than our brains (or hands) but how and why would they suddenly stop doing what we intend them to do?

I didn't say that all complex systems will achieve consciousness, but complex systems subject to certain conditions, typically taken to be existing in a physical world about which decisions must be made.

I can also flip your question around: why should a pile of chemicals suddenly become conscious in a human brain?

Tools often stop doing what we intend them to do; my computer does all the time.
 