Stephen Hawking warns Apple, IBM and Google that robots will spell doom for humans!

It's good for profit, bad for jobs, but AI capable of doing complex human tasks reliably and safely still seems a thousand years or more off to me. It's like Tomorrow's World all over again; we thought we would be in flying cars by now.
 
Isaac Asimov had the AI thing covered years ago.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

You forget that R. Giskard came up with the Zeroth Law, which, to an extent, can be used to circumvent the three laws.

Also that's a bit rich from a bloke with that voice.
 
Sure, the code adjusts parameters within the algorithm, and there's stuff like neural nets, but nothing that can actually learn to make arbitrary changes to its controlling algorithm(s) at will, and/or completely change or invent an algorithm itself.

That's why I said I was being extremely pedantic. I don't know of anything that arbitrarily changes its code while running, but systems which generate entirely new algorithms for a given task certainly exist in a very basic form.

The method I'm thinking of was actually another option for that very same piece of evolutionary computing coursework I mentioned in my other post. I can't remember the name my lecturer gave this system because it was almost two years ago now, but, in essence, the algorithm worked by being given the basic building blocks of code, such as if statements, for loops, basic mathematics, etc., and then putting them together. The algorithm would then use an evolutionary process to generate gradually better and better solutions.

For those who don't know how the evolutionary process works: in essence, an initial population of solutions is created (which is essentially random and almost certainly going to be rubbish). These are then combined and mutated in various ways. The combined population of newer and older solutions is tested, and then culled in such a way that the best solutions have the highest chance of surviving but aren't certain to survive, while the worst solutions have the lowest chance of surviving but aren't certain to be killed off.
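A minimal sketch of that kind of loop in Python (the toy problem, fitness function and parameter values here are made up purely for illustration, not the actual coursework system):

import random

# Toy problem (invented for illustration): evolve a list of 10 numbers whose
# sum is as close as possible to 100.
GENOME_LEN, POP_SIZE, GENERATIONS = 10, 30, 200

def fitness(sol):
    return -abs(sum(sol) - 100)                      # higher is better

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)            # combine two parent solutions
    return a[:cut] + b[cut:]

def mutate(sol, rate=0.1):
    return [g + random.gauss(0, 1) if random.random() < rate else g for g in sol]

# Initial population: essentially random and almost certainly rubbish.
population = [[random.uniform(0, 20) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    children = [mutate(crossover(random.choice(population), random.choice(population)))
                for _ in range(POP_SIZE)]
    combined = sorted(population + children, key=fitness, reverse=True)
    # Probabilistic cull: better-ranked solutions get higher weights, so the best
    # are likely (but not certain) to survive and the worst are likely to be dropped.
    weights = [len(combined) - i for i in range(len(combined))]
    population = random.choices(combined, weights=weights, k=POP_SIZE)

best = max(population, key=fitness)
print(round(sum(best), 2))                           # should drift towards 100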

Whether we have a system which can generate a new algorithm analytically, rather than using the evolutionary process to evolve random solutions, I don't know. I certainly can't think of such a system, but that doesn't mean there isn't one.
 
AI won't be on par with humans for a long, long time. We will enhance our own intelligence with genetic engineering before it comes close.

I suppose by that point we won't be human, so AI will overtake humans, but there won't be any humans left. We will all be something else.

I do, however, think that they will be stronger than us in our lifetime. AI and robots will see major advancements sooner than most people think.
 
Aren't Asimov's laws based on the premise that we, as humans, would never build AI without those safeguards?

Trouble is, he possibly underestimated the stupidity of the human race.

I doubt artificial intelligence is going to happen; moving human intelligence into a machine, on the other hand...
 
You can tell he's a physics bloke and not a programmer. Kind of ironic, him saying that in his voice, too.

Nearly all physicists have some level of experience with programming; it's simply necessary these days - you can't get a research position without a good knowledge of C/C++ and/or C#, and MATLAB.
 
Artificial intelligence stands a decent chance of wiping out humanity. This is not a new concept. This risk is unlikely to prevent the development effort.
 
Nearly all physicists have some level of experience with programming; it's simply necessary these days - you can't get a research position without a good knowledge of C/C++ and/or C#, and MATLAB.

Day 1 for me was MATLAB, followed later by Delphi, Miranda and Fortran. Programming is a key part of a Physics degree.
 
That's why I said I was being extremely pedantic. I don't know of anything that arbitrarily changes its code while running, but systems which generate entirely new algorithms for a given task certainly exist in a very basic form.

The method I'm thinking of was actually another option for that very same piece of evolutionary computing coursework I mentioned in my other post. I can't remember the name my lecturer gave this system because it was almost two years ago now, but, in essence, the algorithm worked by being given the basic building blocks of code, such as if statements, for loops, basic mathematics, etc., and then putting them together. The algorithm would then use an evolutionary process to generate gradually better and better solutions.

For those who don't know how the evolutionary process works: in essence, an initial population of solutions is created (which is essentially random and almost certainly going to be rubbish). These are then combined and mutated in various ways. The combined population of newer and older solutions is tested, and then culled in such a way that the best solutions have the highest chance of surviving but aren't certain to survive, while the worst solutions have the lowest chance of surviving but aren't certain to be killed off.

Whether we have a system which can generate a new algorithm analytically, rather than using the evolutionary process to evolve random solutions, I don't know. I certainly can't think of such a system, but that doesn't mean there isn't one.

Genetic programming evolves trees of operators and can be Turing complete, i.e. it evolves fully functional algorithms that are, in theory, capable of computing anything. It's not actually hard to evolve limited code for most programming languages. Code is just a tree of operators, constants, values, etc.
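As a rough sketch of the "code is just a tree" idea in Python (the operator set and the random tree-growth and mutation scheme below are invented purely for illustration):

import operator, random

# An expression is either a terminal (a constant or the input variable 'x')
# or an (operator, left, right) node - i.e. code as a tree.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.uniform(-5, 5)])        # terminal
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=2):
    # Point mutation: replace a randomly chosen subtree with a fresh random one.
    if isinstance(tree, tuple) and random.random() < 0.7:
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left, depth), right)
        return (op, left, mutate(right, depth))
    return random_tree(depth)

t = random_tree()
print(t, '->', evaluate(t, x=2.0))
print(mutate(t), '->', evaluate(mutate(t), x=2.0))

A full genetic programming system would then apply exactly the evolutionary loop described earlier (crossover by swapping subtrees, fitness testing, probabilistic culling) to a population of such trees.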

The thing is, a neural network, RBF, CTRNN, etc. is doing the same thing as any numerical algorithm coded in a programming language; programming languages are just designed for humans.

The nice thing about CTRNNs is that they can provably approximate any real continuous dynamical system, which covers basically everything a human can do. The issue is that there is no provable mechanism for designing the network architecture or parameters, and doing so by brute force is NP-hard. Hence the common use of stochastic metaheuristic search algorithms like GAs/PSO/simulated annealing, etc.
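For anyone curious, here's a minimal sketch of the standard CTRNN update in Python (the network size, weights, biases and inputs are random placeholders; in practice these are exactly the parameters you would hand over to a GA/PSO/simulated annealing search, since there's no analytic way to design them):

import math, random

# Standard CTRNN dynamics: tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i
# integrated here with a simple forward-Euler step.
N, DT = 3, 0.01

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))                 # logistic activation

y     = [0.0] * N                                     # neuron states
tau   = [1.0] * N                                     # time constants
theta = [random.uniform(-1, 1) for _ in range(N)]     # biases
w     = [[random.uniform(-2, 2) for _ in range(N)] for _ in range(N)]

def step(y, external_input):
    new_y = []
    for i in range(N):
        activation = sum(w[j][i] * sigma(y[j] + theta[j]) for j in range(N))
        dy = (-y[i] + activation + external_input[i]) / tau[i]
        new_y.append(y[i] + DT * dy)                   # forward-Euler update
    return new_y

for _ in range(1000):
    y = step(y, external_input=[0.5, 0.0, 0.0])
print([round(v, 3) for v in y])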

Yes, I have a PhD in artificial intelligence.
 