Stephen Hawking warns Apple, IBM and Google that robots will spell doom for humans!

We are a long way from self-evolving AI, let alone one that might try to protect itself, let alone one with the capability to do so. That's not to say it will never be a problem or something we shouldn't be wary of.
 
And other top scientists go the other way.

In Michio Kaku's book The Future of the Mind he talks about this. As long as they are designed in the correct way, it is not an issue. And I agree. We can put safeguards in place.

However, like all of science, it's good to think about this and get laws in place well before we are able to do it. So discussion is great at this stage.

Especially as the EU's and USA's separate brain initiatives will almost certainly advance robotic systems as we find out the inner workings of the mind and how it all works.
 
I've yet to see an AI that is even as intelligent as an ant, and even if we did make a highly intelligent and malevolent AI in a computer, what exactly is it going to do that could ever be a threat? Unplug it and it's off.

The idea that it could start manufacturing bodies etc. is just nonsense - it wouldn't have the means.
 
And other top scientists go the other way.

In Michio Kaku's book The Future of the Mind he talks about this. As long as they are designed in the correct way, it is not an issue. And I agree. We can put safeguards in place.

Like they did in RoboCop? That failed :D
 
Isaac Asimov had the AI thing covered years ago.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
 
We are a long way from self-evolving AI, let alone one that might try to protect itself, let alone one with the capability to do so. That's not to say it will never be a problem or something we shouldn't be wary of.

I know what you mean, but if I were to be very pedantic I could say that we have had self-evolving AI in a very basic form for ages. I made an evolutionary algorithm for a uni assignment where the evolutionary process alters the parameters of the algorithm itself, and that just about qualifies as self-evolving AI (since it was evolving a control system for a simple virtual robot).
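For anyone curious what that looks like, here is a minimal self-adaptive evolution strategy sketch in Python. It isn't the assignment code; the fitness function, parameter names and population sizes are all made up for illustration. The point is just that the mutation step size lives inside each individual's genome, so the evolutionary process tunes one of its own parameters as it runs.

```python
import random
import math

# Toy self-adaptive evolution strategy: each individual carries both the
# solution parameters (think: weights of a simple robot controller) and its
# own mutation step size, so evolution adjusts its own algorithm parameter.

GENOME_LEN = 4                       # hypothetical number of controller parameters
POP_SIZE = 20
GENERATIONS = 50
TAU = 1.0 / math.sqrt(GENOME_LEN)    # self-adaptation learning rate

def fitness(genome):
    # Stand-in for "how well does this controller drive the virtual robot";
    # here we just minimise squared distance from an arbitrary target vector.
    target = [1.0, -2.0, 0.5, 3.0]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(individual):
    genome, sigma = individual
    # The step size mutates first (log-normal), then is used to perturb the
    # genome - this is the "algorithm altering its own parameters" part.
    new_sigma = sigma * math.exp(TAU * random.gauss(0, 1))
    new_genome = [g + new_sigma * random.gauss(0, 1) for g in genome]
    return (new_genome, new_sigma)

population = [([random.uniform(-5, 5) for _ in range(GENOME_LEN)], 1.0)
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    offspring = [mutate(random.choice(population)) for _ in range(POP_SIZE * 5)]
    # Keep the best POP_SIZE offspring (a (mu, lambda)-style selection).
    population = sorted(offspring, key=lambda ind: fitness(ind[0]),
                        reverse=True)[:POP_SIZE]

best_genome, best_sigma = population[0]
print("best fitness:", fitness(best_genome), "final step size:", best_sigma)
```

It's "self-evolving" only in the narrow sense that the step size evolves alongside the solution, which is pretty much the pedantic point being made.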

Isaac Asimov had the AI thing covered years ago.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Presumably you haven't read many of his books, or even seen the film I, Robot, because, from what little I have read and watched, I get the impression that a lot of his books are based on the flaws within these rules. They are definitely a good starting point though.
 
Presumably you haven't read many of his books, or even seen the film I, Robot, because, from what little I have read and watched, I get the impression that a lot of his books are based on the flaws within these rules. They are definitely a good starting point though.

Read the book and saw the film. So much for presumption, huh? :rolleyes:

They're a great starting point. Service pack two would probably perfect them. :p
 
I know what you mean, but if I were to be very pedantic I could say that we have had self-evolving AI in a very basic form for ages. I made an evolutionary algorithm for a uni assignment where the evolutionary process alters the parameters of the algorithm itself, and that just about qualifies as self-evolving AI (since it was evolving a control system for a simple virtual robot).

Sure, the code adjusts parameters within the algorithm, and there's stuff like neural nets, but nothing that can actually learn to make arbitrary changes to its controlling algorithm(s) at will and/or completely change or invent an algorithm itself.
 
Isaac Asimov had the AI thing covered years ago.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What happens if the robot has to kill a human to stop another human from killing him? Stack overflow? That's why rules 1 and 2 fail. And 3? Protect its own existence? I don't like the sound of that at all.
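Just to spell out that conflict, here's a toy sketch of the First Law as a strict check. The scenario fields and function names are invented for the example, not from Asimov or any real system; it only shows that in the "harm one human to save another" case every option trips the First Law.

```python
# Toy illustration of the First Law checked as a hard rule.
# The action fields below are made up purely for this example.

def violates_first_law(action):
    # Injures a human directly, OR through inaction allows a human to come to harm.
    return action["injures_human"] or (
        action["is_inaction"] and action["human_comes_to_harm"]
    )

# The scenario from the post: one human is about to kill another.
options = {
    "stop the attacker by force": {
        "injures_human": True, "is_inaction": False, "human_comes_to_harm": False},
    "do nothing": {
        "injures_human": False, "is_inaction": True, "human_comes_to_harm": True},
}

for name, action in options.items():
    verdict = "forbidden" if violates_first_law(action) else "allowed"
    print(name, "->", verdict)
# Both options come out forbidden: the First Law conflicts with itself,
# before the Second and Third Laws even come into play.
```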
 