Stephen Hawking warns Apple, IBM and Google that robots will spell doom for humans!

Day 1 for me was MATLAB, followed later by Delphi, Miranda and Fortran. Programming is a key part of a Physics degree.

Indeed, I'm a second year Physics student and we've covered a fair amount of C++ and MATLAB, whilst I'm steadily teaching myself C# too. Personally I love the programming part of the course, and I didn't realise I loved coding so much until I came to university - which is why I am signing up for the Intercalated Year in Computer Science for next year :D

That's why we always end up hiring physicists: they can both think and program, cf. many software engineers who can only program ;)

If you don't mind me asking, which company is this? I'm interested in some sort of job which involves programming and physics after I finish my degree :)
 
The risk is incredibly small though, won't happen overnight, is a long way off ever happening, and is easy to defend against for the most part.

I'm not sure how to quantify the risk. I view A.I. as certain - there's nothing in the human consciousness that I can believe is intractable. It's going to take a while - sadly our generation will probably miss all the excitement - but mankind will get there.

Once A.I. is with us as a daily reality, a whole lot of engineering which is presently infeasible becomes rather easier. The risk of the resulting rapid progress having catastrophic consequences is unknown, but I don't think it could be categorised as "incredibly small".

Time will tell. You made an interesting choice for a PhD.
 
Indeed, I'm a second year Physics student and we've covered a fair amount of C++ and MATLAB, whilst I'm steadily teaching myself C# too. Personally I love the programming part of the course, and I didn't realise I loved coding so much until I came to university - which is why I am signing up for the Intercalated Year in Computer Science for next year :D



If you don't mind me asking, which company is this? I'm interested in some sort of job which involves programming and physics after I finish my degree :)

It is a small startup company in the US. We don't do much that actually requires physics (there is a little aerodynamics work though), we just like master's and PhD grads who are smart, have a numerical/mathematical/statistical degree and can program. We do lots of optimization, search and machine learning work. Clients include the US Air Force and Navy; my project is related to traffic signal prediction for route optimization, reducing congestion and pollution etc.

If you can program and have a physics degree you will find lots of interesting opportunities, either at start-ups with interesting problems or at large companies with R&D departments. Banking and finance has the money; it can be boring but sometimes exciting.
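For flavour, here's a toy sketch of the route-optimization idea: shortest path where waiting at signals is part of the edge cost. The road network, signal timings and the green-for-half-a-cycle rule are all invented for illustration, not anything from our actual system:

```python
import heapq

# Hypothetical toy road network (all names and numbers invented):
# each edge is (next_node, travel_seconds, signal_period_seconds).
graph = {
    "A": [("B", 60, 90), ("C", 45, 120)],
    "B": [("D", 30, 90)],
    "C": [("D", 90, 60)],
    "D": [],
}

def signal_delay(arrival, period):
    # Crude signal model: green for the first half of each cycle,
    # red for the second half, so a red arrival waits out the cycle.
    phase = arrival % period
    return 0 if phase < period / 2 else period - phase

def quickest_route(start, goal, depart=0):
    # Dijkstra on earliest arrival time instead of static distance;
    # valid here because waiting never makes you arrive earlier.
    best = {start: depart}
    queue = [(depart, start)]
    while queue:
        t, node = heapq.heappop(queue)
        if node == goal:
            return t
        if t > best.get(node, float("inf")):
            continue
        for nxt, travel, period in graph[node]:
            arrive = t + travel + signal_delay(t + travel, period)
            if arrive < best.get(nxt, float("inf")):
                best[nxt] = arrive
                heapq.heappush(queue, (arrive, nxt))
    return None

print("earliest arrival at D:", quickest_route("A", "D"), "seconds")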
 
I'm not sure how to quantify the risk. I view A.I. as certain - there's nothing in the human consciousness that I can believe is intractable. It's going to take a while - sadly our generation will probably miss all the excitement - but mankind will get there.

Once A.I. is with us as a daily reality, a whole lot of engineering which is presently infeasible becomes rather easier. The risk of the resulting rapid progress having catastrophic consequences is unknown, but I don't think it could be categorised as "incredibly small".

Time will tell. You made an interesting choice for a PhD.

You view AI as some binary thing that will "some day be achieved". That is a flawed view. Intelligence is a spectrum, as is AI. AI systems are already widely used in industry and have been for some time, far outperforming humans in specialized domains. AI is used in finance, stock trading, networking, logistics, routing, optimization, clinical diagnosis, operations research, robotic surgery, industrial robotics, natural language processing/voice recognition, call centers, data mining, spam blockers, signal processing, workforce optimization, automated warehouses, dispatching, engine and turbine design optimization, security, automated CCTV monitoring, bioinformatics, OCR..
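To make the point concrete, a spam blocker is just a narrow statistical classifier. Here's a minimal naive Bayes sketch; the training messages are made up and equal class priors are assumed, but it's the same basic idea behind real filters:

```python
from collections import Counter
import math

# Tiny made-up corpus; a real filter trains on thousands of messages
spam = ["win money now", "free money offer", "click to win a prize"]
ham = ["meeting at noon", "project status update", "lunch with the team"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts):
    # Laplace-smoothed log-likelihood of the words under one class
    total = sum(counts.values())
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in message.split()
    )

msg = "free prize money"
label = ("spam" if log_likelihood(msg, spam_counts) > log_likelihood(msg, ham_counts)
         else "ham")
print(f"{msg!r} -> {label}")  # equal class priors assumed
```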

In the 1960s the US introduced numerical ZIP codes to facilitate the first digital sorting sensors and optical character recognition systems using AI.

AI doesn't mean some kind of holy grail supercomputer that is better than humans at all tasks. AI can be very simple; it doesn't even have to involve a computer (emergent intelligence, physics-based intelligence, mechanical/chemical/genetic/optical computation). AI, like natural intelligence, is a multifaceted spectrum of individual behaviors and processes. Increasingly complex aggregation and integration of such behaviors leads to higher levels of intelligent systems. This is a slow and gradual process. It is highly unlikely there will be any/many large jumps, for a variety of reasons, and certainly not unknowingly.
Catastrophic Terminator-style scenarios are incredibly remote. Much more likely are viruses and things that spread very rapidly, adapt and mutate and cause significant damage (DoS attacks etc.) and are impossible for security experts to defeat by hand, instead requiring intelligent virus killers that can also adapt and modify themselves to attack the virus - artificial immune systems are a very active research area. This might render networked computers infeasible or very expensive, or more likely unreliable, which could have severe consequences, but they will never be able to launch nuclear weapons - these things just aren't connected to the internet for a reason and are on private networks with many fail-safes.
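For a flavour of the artificial immune system idea, here's a toy negative-selection detector in the style of the classic work from that research area: random detectors are kept only if they match nothing "normal", so whatever a surviving detector does match gets flagged as foreign. The bit-string signatures, matching rule and all parameters are invented for illustration:

```python
import random

ALPHABET = "01"

def random_string(length):
    return "".join(random.choice(ALPHABET) for _ in range(length))

def matches(detector, sample, r):
    # r-contiguous-symbols rule: a detector matches a sample if they
    # agree on at least r consecutive positions
    run = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False

def negative_selection(self_set, n_detectors, length, r):
    # Keep only random detectors that match nothing in the "self" set
    detectors = []
    while len(detectors) < n_detectors:
        candidate = random_string(length)
        if not any(matches(candidate, s, r) for s in self_set):
            detectors.append(candidate)
    return detectors

self_set = [random_string(16) for _ in range(50)]   # "normal" signatures
detectors = negative_selection(self_set, 20, 16, r=9)

probe = random_string(16)                           # unknown traffic
flagged = any(matches(d, probe, r=9) for d in detectors)
print("probe flagged as anomalous:", flagged)
```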


Anyway, things like global warming are much more dangerous. Dangerous warming is pretty much a guarantee now whatever we do, due to the effective half-life of CO2 in the atmosphere. I am also far more worried about large Western outbreaks of Ebola or new variant bird flus and the like. The Spanish flu rapidly spread across the whole planet, including remote Pacific islands, quickly killing up to 100 million people, which was around 5% of the planet's population. With today's connected world there are some truly nightmare scenarios. Something serious could eradicate a billion people.
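A rough sketch of why the CO2 point bites, treating the excess as a single exponential decay. Real carbon-cycle drawdown happens on several timescales, so the 100-year effective half-life and the excess figure here are assumptions for illustration only:

```python
# Order-of-magnitude sketch only: the single effective half-life
# and the excess concentration are assumed, not measured values
half_life_years = 100
excess_ppm = 120   # assumed excess above the ~280 ppm pre-industrial level

for years in (50, 100, 200, 500):
    remaining = excess_ppm * 0.5 ** (years / half_life_years)
    print(f"after {years:3d} years: ~{remaining:.0f} ppm of the excess remains")
```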
 
You forget that R. Giskard came up with the Zeroth Law which, to an extent, can be used to circumvent the Three Laws.

Also that's a bit rich from a bloke with that voice.

And promptly started to die as his brain couldn't cope with the idea :)


Nice to see some Asimov references :)
 
Catastrophic Terminator-style scenarios are incredibly remote.

I believe it's safe to say such a scenario is impossible for the time being. AI is binary code written in a box containing magnetic tape, and it poses as much threat as a toaster with a digital timer. No matter how much code we write or how complex we make it, nothing will ever "wake up" in those lines of code and start to "think" for itself. Not now, not in a million years.

We'll one day understand all the inner workings of the human brain, and only then can we hope to replicate our intelligence in a machine. There are hints that quantum physics plays an important role in its functioning, which rules out the possibility of ever replicating it using binary code.
 

Well put and something I very much agree with. I think we have far more pressing worries, e.g. the end of the antibiotic era or bird flu, than something so far removed from the present day as to be of little concern. Those two especially I see as probable threats in our lifetimes.

Anyway, maybe the AI will just end up like the Minds in the Culture, too busy worrying about other stuff.
 
No matter how much code we write or how complex we make it, nothing will ever "wake up" in those lines of code and start to "think" for itself. Not now, not in a million years.

Quite a bold statement. Many things throughout history were deemed 'impossible' at the time. Technology evolves and then suddenly the impossible becomes reality. Something like Data's neural net from Star Trek may one day become a reality, and Data was a self-aware, conscious AI.
 
Biological catastrophe is much more worrying and could literally happen at any time. We wouldn't have a chance if a viral strain suddenly emerged with a 90% fatality rate, airborne transmission and significant resistance to treatment.
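A minimal SIR sketch shows how fast an airborne strain can burn through a population. The population size, transmission and recovery rates here are assumed purely for illustration:

```python
# Minimal discrete-time SIR sketch; beta, gamma, the population size
# and the 90% fatality rate are assumed purely for illustration
N = 1_000_000
beta, gamma, cfr = 0.5, 0.1, 0.9   # R0 = beta / gamma = 5
S, I, R = N - 1.0, 1.0, 0.0

for day in range(180):
    new_infections = beta * S * I / N
    new_recoveries = gamma * I
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries

ever_infected = I + R
print(f"ever infected: ~{ever_infected:,.0f} of {N:,}")
print(f"deaths at 90% fatality: ~{cfr * ever_infected:,.0f}")
```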

Look how devastating the financial crisis was. I dread to think what an actual catastrophe would do.

Global warming will cause havoc, I'm sure, but it will be gradual. Something biological may happen next week or never. The public will feel that any newsworthy threats aren't serious after the last two or three news events. It really wouldn't take much for, say, Ebola or some HxNx flu to cause devastation.
 
Quite a bold statement. Many things throughout history were deemed 'impossible' at the time. Technology evolves and then suddenly the impossible becomes reality. Something like Data's neural net from Star Trek may one day become a reality, and Data was a self-aware, conscious AI.

Actually, most things throughout history that were deemed impossible are still considered impossible today, and while it's true technology often evolves in unexpected ways, we're at a point in history where we have a good understanding of the world around us, something that has never happened before. Educated guesses made now are much closer to reality than educated guesses postulated 300, 500 or 1000 years ago.

Right now, all forms of so-called artificial intelligence are based on binary algorithms, which have mathematical limitations that make them unsuitable to compete with the human brain in certain circumstances. They're great for number crunching but they fall apart when the numbers get too big (games like Go or Arimaa). They're great for precise measurement and calculation but totally lack creativity. And so on.
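Back-of-envelope on the "numbers get too big" point: a game tree holds roughly branching_factor^depth positions. The branching factors and depths below are rough literature figures, not exact values:

```python
# Rough figures for average branching factor and typical game length;
# the point is the scale, not the exact numbers
games = {"tic-tac-toe": (4, 9), "chess": (35, 80), "Go": (250, 150)}

for name, (branching, depth) in games.items():
    positions = branching ** depth
    magnitude = len(str(positions)) - 1   # floor of log10
    print(f"{name}: ~{branching}^{depth} ≈ 10^{magnitude} positions")
```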

Data can only become reality in a distant future, one in which the logos of Google, Apple and IBM will only be found on ancient archaeological sites.

My point is there's nothing to worry about and not much to even discuss. Computers are toasters that will never be a threat to humans, no matter how many Pi digits they calculate.
 
By logically exterminating us :p

Logically they would surely keep those of us that are worth keeping?

Also, realistically we have to keep progressing with AI as there is too much of a risk of not so well governed regimes doing it anyway. We can't afford to be left behind.
 