Stephen Hawking warns Apple, IBM and Google that robots will spell doom for humans!

Day 1 for me was MATLAB, followed later by Delphi, Miranda and Fortran. Programming is a key part of a Physics degree.

That's why we always end up hiring physicists: they can both think and program, cf. many software engineers who can only program ;)
 
Artificial intelligence stands a decent chance of wiping out humanity. This is not a new concept, and the risk is unlikely to stop the development effort.

The risk is incredibly small though, won't happen overnight, is a long way off ever happening, and is easy to defend against for the most part.

It is easy to prevent such a scenario; the real risk comes from future terrorist groups or nations purposely developing damaging AI weapons. We have already seen basic versions: intelligent, adaptive worms that have spread widely, even infecting nuclear power stations in Iran, I believe. There is some evidence that the US and China have developed and released some advanced intelligent viruses.

But anything like this is a long way from ever causing serious issues. E.g. some low-security computers at a power station might become inoperable, but you can't access the main control system from the outside, and we are a long, long way from having a virus that can intelligently understand the system it is running on and make decisions that cause catastrophic events.
 
Genetic programming evolves trees of operators and can be Turing complete, i.e. it evolves fully functional algorithms that are capable of computing anything, in theory. It's not actually hard to evolve limited code for most programming languages; code is just a tree of operators, constants, values etc.
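
To make that concrete, here's a minimal sketch in Python of genetic programming over expression trees: internal nodes are operator tuples, leaves are constants or the input variable, and evolution is just mutate-and-select. The target function (x² + x), population size and mutation rates are all illustrative assumptions of mine, not anything from a particular GP system.

```python
# Minimal genetic programming sketch: evolve an expression tree to fit a toy
# target function. All constants and rates here are illustrative assumptions.
import random
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree: nodes are (op, left, right), leaves are terminals."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively evaluate a tree for input value x."""
    if tree == 'x':
        return x
    if not isinstance(tree, tuple):
        return tree                      # a numeric constant leaf
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(depth=2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def fitness(tree):
    """Sum of squared errors against the made-up target f(x) = x^2 + x (lower is better)."""
    total = 0.0
    for x in range(-5, 6):
        diff = evaluate(tree, x) - (x * x + x)
        total += diff * diff
    return total

population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)
    survivors = population[:100]                        # keep the best half
    offspring = [mutate(random.choice(survivors)) for _ in range(100)]
    population = survivors + offspring

best = min(population, key=fitness)
print(best, fitness(best))
```

Crossover (swapping subtrees between two parents) is the other standard GP operator; it's left out here only to keep the sketch short.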

The thing is, a neural network, RBF, CTRNN etc. is doing the same thing as any numerical algorithm coded in a programming language; programming languages are just designed for humans.

The nice thing about CTRNNs is that they can provably model any real continuous dynamic function, which is basically everything a human can do. The issue is that there is no provable mechanism for designing the network architecture or parameters, and doing so by brute force is NP-hard. Hence the common use of stochastic metaheuristic search algorithms like GAs/PSO/simulated annealing etc.
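
For anyone wanting to see what that looks like in code, below is a rough sketch of Euler-integrating a small CTRNN and tuning its parameters by stochastic search. The network size, time constants, driving signal and sine-tracking target are all toy assumptions of mine, and the simple hill-climbing loop is just a stand-in for the GA/PSO/annealing approaches mentioned above.

```python
# Minimal CTRNN sketch: Euler-integrate the network and tune its parameters by
# stochastic search. Sizes, rates and the target signal are toy assumptions.
import numpy as np

def ctrnn_step(y, weights, taus, biases, inputs, dt=0.01):
    """One Euler step of tau_i * dy_i/dt = -y_i + sum_j w_ij * sigma(y_j + b_j) + I_i."""
    activations = 1.0 / (1.0 + np.exp(-(y + biases)))    # logistic activation
    dydt = (-y + weights @ activations + inputs) / taus
    return y + dt * dydt

def run(params, steps=500, n=3):
    """Unpack a flat parameter vector and return the last neuron's trajectory."""
    weights = params[:n * n].reshape(n, n)
    taus = np.abs(params[n * n:n * n + n]) + 0.1         # keep time constants positive
    biases = params[n * n + n:]
    y = np.zeros(n)
    trace = []
    for t in range(steps):
        inputs = np.array([np.sin(0.05 * t), 0.0, 0.0])  # a toy driving signal
        y = ctrnn_step(y, weights, taus, biases, inputs)
        trace.append(y[-1])
    return np.array(trace)

def error(params):
    """Squared error against a made-up target: a delayed copy of the input sine."""
    target = np.sin(0.05 * (np.arange(500) - 20))
    return float(np.sum((run(params) - target) ** 2))

# Crude hill climbing stands in for the GA/PSO/annealing mentioned above.
rng = np.random.default_rng(0)
best = rng.normal(size=3 * 3 + 3 + 3)
best_err = error(best)
for _ in range(200):
    candidate = best + rng.normal(scale=0.2, size=best.shape)
    if (err := error(candidate)) < best_err:
        best, best_err = candidate, err
print(best_err)
```

In practice you'd swap the hill climber for a proper GA or PSO and score candidates on whatever behaviour you actually care about; the point is just that the network design problem gets handed to a search algorithm.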

Yes, I have a PhD in artificial intelligence.

That's a very nice explanation. I was trying to get across a similar point in a way that people without knowledge of the field would understand, but not doing a very good job of it.

Also, as someone who will have an MSc in AI and cybernetics in a month or two (assuming I haven't completely stuffed up this year's exams), I consider myself out-degree'd :p
 
I tend to agree with Stephen Hawking. While a lot of people scoff at the idea of a "Skynet", I don't think that's where the threat is. AI will beat us simply by being better than us, eventually at everything. If the article is right and it takes a supercomputer around 40 minutes to calculate something that the brain can do in 1 second, then according to Moore's Law it will be on parity in 17 years. Even if we multiply that severalfold to allow for Moore's Law being wrong, it's still not a lot of time.
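
For what it's worth, the back-of-the-envelope arithmetic behind that 17-year figure works out, assuming the classic doubling period of roughly 18 months:

```python
# Rough check of the 17-year parity estimate, assuming an ~18-month doubling period.
import math

gap = 40 * 60               # 40 minutes vs 1 second: the supercomputer is ~2400x slower
doublings = math.log2(gap)  # ~11.2 doublings needed to close the gap
years = doublings * 1.5     # one doubling every ~18 months
print(round(doublings, 1), round(years, 1))  # ~11.2 doublings, ~16.8 years
```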

AI doesn't need to beat us militarily. It simply has to be better than us for humans to start using it in place of other people. If it's more efficient than a person at a job, then the person will be redundant.

Eventually we simply won't need many people, as AI can do it all. What do humans do then? With no jobs, society will be radically different.

Tie that in with self-improving and self-replicating computers (3D printers) and we're in trouble.
 
As long as we don't let AI anywhere near the internet, instead keeping it on a separate network, we shall be fine, for the most part.

However, humanity sucks, so I don't see how this wouldn't end in some sort of serious disaster.
 
As long as we don't let AI anywhere near the internet, instead keeping it on a separate network, we shall be fine, for the most part.

However, humanity sucks, so I don't see how this wouldn't end in some sort of serious disaster.

This is just one of a series of 'impending disasters' though, through our own 'ingenuity'.

First we had gunpowder.
Then nuclear capabilities.
Next we will have engineered viruses/diseases, nanobots, automatically driven and robotically controlled cars, potentially the ability to control some fundamental particles (LHC), bacterial immunity to antibiotics, etc.

Despite the best protestations, artificial intelligence can never truly beat humanity, as we have genuine emotions, biological processes/processing, and the ability to say things without uttering a word. Plus, we can be chaotically random, which could be nigh impossible to predict in a logical process.
 
This might be a separate discussion, but automation (not AI) is starting to take over human roles. We have home (online) shopping, which was science fiction 20 years ago. Computers can mail-merge, run assembly lines, help out in surgery, operate paperless offices, hold video conferences, handle money electronically, and so on.

This has allowed the working week to drop from generally over 40 hours to 37-40 hours, plus more leave entitlement. Another trend I've seen from my recent job hunting is that a lot of vacancies are now part-time. E.g. some of my colleagues where I work now (customer services) are on a 4-day week and they're still on like £17k. I think this will be the way forward as we become more automated.
 
Despite the best protestations, artificial intelligence can never truly beat humanity, as we have genuine emotions, biological processes/processing, and the ability to say things without uttering a word. Plus, we can be chaotically random, which could be nigh impossible to predict in a logical process.
Since there's no such thing as random or even free will, I cannot see why the human brain cannot be emulated in terms of function.
What are our thoughts if not a logical process? No matter how illogical it might seem, logic always plays a part in how we arrive at any decision or thought through electrochemical signals. Just switch nerve impulses for bits :cool:

OK so I've oversimplified things :)
 
This might be a separate discussion, but automation (not AI) is starting to take over human roles. We have home (online) shopping, which was science fiction 20 years ago. Computers can mail-merge, run assembly lines, help out in surgery, operate paperless offices, hold video conferences, handle money electronically, and so on.

This has allowed the working week to drop from generally over 40 hours to 37-40 hours, plus more leave entitlement. Another trend I've seen from my recent job hunting is that a lot of vacancies are now part-time. E.g. some of my colleagues where I work now (customer services) are on a 4-day week and they're still on like £17k. I think this will be the way forward as we become more automated.

The class divide widens more?

The increase in zero-hour contracts, and classing people as 'employed' to massage Government statistics (volunteers get classed as employed too), is doing nobody any good, except for those who believe the media and the Government.

Within computer security, cyber forensics and cyber security are the next big thing, which all Western governments are throwing huge sums of money at. But very soon, with code/malware able to use and modify encryption/decryption algorithms faster than we can, we will find ourselves out of jobs too.

Machines and their materials are expensive currently, but things have a habit of getting cheaper - they don't need rest, pensions or sick pay; they don't argue, go on strike, take meal breaks etc. You've seen how eager big companies were to outsource to developing countries to save a few ££; imagine the prospect of machines doing most of the work. All the benefits, none of the weaknesses.
 