deGrasse Tyson Takes on Luddites

Computers couldn't beat professional Go players until now:
http://www.bbc.co.uk/news/technology-35420579

Artificial and natural intelligence are very challenging fields of study.
Even simple creatures like bees are not well understood.

There are more possible positions in Go than atoms in the universe, according to DeepMind's team.
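
That claim is easy to sanity-check with rough arithmetic. A minimal sketch in Python (the 10^80 atom count is the commonly cited estimate, not a figure from the article):

```python
# A 19x19 Go board has 361 points; each is empty, black, or white,
# so 3**361 is an upper bound on the number of board configurations.
import math

go_positions_upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80  # commonly cited rough estimate

print(f"3^361 is about 10^{math.log10(go_positions_upper_bound):.0f}")  # ~10^172
print(go_positions_upper_bound > atoms_in_universe)                     # True
```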

"Many of the best programmers in the world were asked last year how long it would take for a program to beat a top professional, and most of them were predicting 10-plus years," Mr Hassabis said.

The pace of development in learning systems, neural networks and the like is rapidly increasing, with scientists building increasingly complex networks.

on the Singularity

http://www.popsci.com/blog-network/zero-moment/end-ai-singularity-sci-fis-faith-based-initiative

It's a kind of faith. It's fiction. It's been predicted already many, many times, as early as the 70s. It's no closer today than it was then.
That's simply not true. In the 70s, AI was pretty much theoretical because no hardware existed that could run even the most basic system. But that has been changing rapidly in recent years.
 
That's simply not true. In the 70s, AI was pretty much theoretical because no hardware existed that could run even the most basic system. But that has been changing rapidly in recent years.

All a computer program can do is follow the instructions its human designer gave it with absolutely no deviation possible.

If the program "mutates" by unintended bit-flipping, you normally have either a fatal crash or data corruption.
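
For what it's worth, that corruption is easy to demonstrate. A minimal sketch of what a single flipped bit does to a stored value (the byte and bit chosen here are just for illustration):

```python
# One flipped bit silently turns a float into garbage.
import struct

value = 3.14159
packed = bytearray(struct.pack("<d", value))  # 8-byte little-endian IEEE 754 double
packed[7] ^= 0b00100000                       # flip one bit in the exponent field
(corrupted,) = struct.unpack("<d", bytes(packed))

print(value)      # 3.14159
print(corrupted)  # a wildly different number: silent data corruption
```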

Modern hardware can run a program much, much quicker than in years gone by. But all it is doing is repeatedly adding numbers together.

It is not, nor can it be with this paradigm, sentient.

If you wanted to create a sentient machine you'd first have to come up with a new model that doesn't involve a programming language.

Because a machine that is programmed is never going to be sentient. It does not matter how fast your machine can execute the code.

Code is not sentience. All you can do is impart some of your (human) understanding into the code, in the form of a set of rules, which the machine must obey.

A programmed machine makes no choices. It evaluates code and data, and branches down pre-determined paths.

You can go as fast as you want in the next 100 years, but if we are still programming machines as we do today, the chances of the "singularity" happening are as non-existent as they are today.
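
To make the "pre-determined paths" argument concrete, here's a minimal sketch of the kind of rule-following being described (a toy example, not any real system): every input was mapped to an output by the author in advance.

```python
# A toy rule-based responder: its entire behaviour was fixed at design time.
RULES = {
    "hello": "Hi there!",
    "bye": "Goodbye!",
}

def respond(message: str) -> str:
    # Evaluate data, branch down a pre-determined path; no "choice" is made.
    return RULES.get(message.lower().strip(), "I don't understand.")

print(respond("Hello"))  # Hi there!
print(respond("why?"))   # I don't understand.
```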
 
on the Singularity

http://www.popsci.com/blog-network/zero-moment/end-ai-singularity-sci-fis-faith-based-initiative

It's a kind of faith. It's fiction. It's been predicted already many, many times, as early as the 70s. It's no closer today than it was then.

Except you're probably wrong. People far smarter than you or me believe it will happen eventually and we're already approaching the point where we have enough computing power to replicate something as complex as the human brain.

Here's an excellent (and very long) article on the subject: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
 
And you think that's any different to being human and sentient? You don't think we are born with programming which then learns?

AI can learn and make choices these days, far more than it could a few decades ago.
 
All a computer program can do is follow the instructions its human designer gave it with absolutely no deviation possible.

If the program "mutates" by unintended bit-flipping, you normally have either a fatal crash or data corruption.

Modern hardware can run a program much, much quicker than in years gone by. But all it is doing is repeatedly adding numbers together.

It is not, nor can it be with this paradigm, sentient.

If you wanted to create a sentient machine you'd first have to come up with a new model that doesn't involve a programming language.

Because a machine that is programmed is never going to be sentient. It does not matter how fast your machine can execute the code.

Code is not sentience. All you can do is impart some of your (human) understanding into the code, in the form of a set of rules, which the machine must obey.

A programmed machine makes no choices. It evaluates code and data, and branches down pre-determined paths.

You can go as fast as you want in the next 100 years, but if we are still programming machines as we do today, the chances of the "singularity" happening are as non-existent as they are today.
Not all AIs are programmed...

http://news.discovery.com/tech/robotics/brain-dish-flies-plane-041022.htm
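
As a contrast with hand-written rules, here's a minimal sketch of the "learned, not programmed" idea: a perceptron that picks up the AND function from examples. (This is only an illustration of the principle; it has nothing to do with the brain-dish experiment in the link.)

```python
# A perceptron learns AND from data; nobody writes the rule out by hand.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # a few passes over the examples
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - out
        w[0] += lr * error * x1  # the weights come from the data,
        w[1] += lr * error * x2  # not from programmer-written branches
        bias += lr * error

for (x1, x2), _ in data:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0)
```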
 
All a computer program can do is follow the instructions its human designer gave it with absolutely no deviation possible.

This.

All computers can (in theory, anyway) be represented by a mechanical system: a Babbage engine, if you like.

It doesn't matter how sophisticated one's software learning algorithm might be; its operation can ultimately be reduced (conceptually) to a system of totally predictable cogs, ratchets and gears.

The big burning question, to my mind, isn't whether machines can ever be sentient. It is whether or not we are!

Are human brains (or indeed any animal's brain) something that could be represented by an oversized gearbox, or is there something else going on (say, quantum effects) that gives biological systems a degree of uncertainty that allows for the possibility of genuinely novel and unpredictable behaviour?

:/
 
Except you're probably wrong. People far smarter than you or me believe it will happen eventually and we're already approaching the point where we have enough computing power to replicate something as complex as the human brain.

Here's an excellent (and very long) article on the subject: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

I read the other day that researchers at Google believe the "singularity" will happen before 2029.

No doubt people much smarter than I am. However, I very much hope to live to 2029, and I have no doubt whatsoever that this prediction will prove to be just as flawed as all the predictions that have come before it.

I do not believe in AI, at all.

When we look at a great painting, we don't ascribe intelligence to the painting.

And when looking at great code, we shouldn't ascribe intelligence to the machine running that code. The intelligence belongs to the programmer, not the machine.

"AI" is a sleight of hand, in other words.

As for the biological neural networks... well I have no idea what is possible there. But I will talk about one claim, above:

we're already approaching the point where we have enough computing power to replicate something as complex as the human brain.

Did you know that the human brain is still <1% understood? I was watching an interview with a leading consultant neurosurgeon on BBC HARDtalk the other night. You can probably find it on iPlayer.

He was very frank throughout the interview. He was a humble man too. He said that our entire knowledge of the human brain is a drop in the ocean (paraphrasing).

Yet you make the claim that we are close to having the processing power necessary to simulate a human brain?

Surely, if we don't understand the human brain, then how much processing power would be necessary to simulate one is completely unknowable?

Please, answer me that.
 
Yet you make the claim that we are close to having the processing power necessary to simulate a human brain?

Surely, if we don't understand the human brain, then how much processing power would be necessary to simulate one is completely unknowable?

Please, answer me that.
You might not know how a neural network works, but you can absolutely determine the computing power required to simulate it. And if you can simulate it, you can then learn a lot more about how it operates.

You also keep talking about programming; it's entirely possible to build hardware-level AI in silicon. It's already been done.
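
The "determine the computing power" step really is just arithmetic once you assume numbers for the network. A hedged back-of-envelope sketch, using commonly quoted ballpark figures for the brain (all three constants are rough assumptions, not measurements):

```python
# Crude estimate of the operations/second needed to simulate a brain-scale network.
neurons = 100e9            # ~100 billion neurons (commonly cited)
synapses_per_neuron = 1e4  # ~10,000 connections each
firing_rate_hz = 100       # rough upper bound on spike rate

ops_per_second = neurons * synapses_per_neuron * firing_rate_hz
print(f"~{ops_per_second:.0e} synaptic events/s")  # ~1e+17: exascale territory
```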
 
You might not know how a neural network works, but you can absolutely determine the computing power required to simulate it. And if you can simulate it, you can then learn a lot more about how it operates.

You also keep talking about programming; it's entirely possible to build hardware-level AI in silicon. It's already been done.

How well can you simulate /anything/ that you don't understand? Do you think Flight Sims would be better or worse if we didn't understand the principles of lift and drag?

And which needs more processing power; the flight sim that simulates everything accurately, or the sim which approximates or abstracts?

And what is hardware if not a form of programming that is unchangeable?

If you build a circuit, you have a kind of program, but one that needs no software component.

The circuit cannot re-arrange its components to perform a different task. It only does exactly what its designer created it to do. For all inputs, the outputs are known.

I'm really not sure what you're getting at there, perhaps you could explain.
 
True, but I think the singularity side is based on what happens when there's nothing left that a human can do that a robot can't; that we will reach the limits of humanity, at which point we'll have been made obsolete by our own creations.

It'll be interesting to see if a robot religion could ever develop, given they'll plainly know who made them and why.

The idea of Asimov's laws, for example, is meant to try and prevent us ever being ousted by our creations, but as the stock of robot-AI-takeover films throughout the years shows, if anyone would be stupid enough to build something that could wipe out their species, it's humanity.

We've only just built a computer that can beat a human at an old board game (Go), so we are orders of magnitude away from getting anywhere near replicating the power of the human brain.

There's more chance of bears evolving human intelligence (and taking over the world) than of AI getting anywhere near what we are capable of, and the time taken for that to happen will be so long that the human brain will have evolved further, and you'll forever be playing catch-up.

It's called AI for a reason: it's artificial, fake, emulated, an illusion of actual intelligence, and will never be a substitute for actual biological brain power.

The films you reference aren't documentaries; they are fantasy stories... a bit like the Bible ;)
 
There's more chance of bears evolving human intelligence (and taking over the world) than of AI getting anywhere near what we are capable of, and the time taken for that to happen will be so long that the human brain will have evolved further, and you'll forever be playing catch-up.

Is the processing power of a bear's brain increasing at a similar rate to Moore's law?
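
The jab lands because exponential growth compounds fast. A tiny sketch of the doubling arithmetic (the two-year doubling period is the usual Moore's-law rule of thumb):

```python
# Compounding under a Moore's-law-style two-year doubling.
capacity = 1.0
for year in range(0, 21, 2):
    print(f"year {year:2d}: {capacity:6.0f}x")
    capacity *= 2  # doubles every two years; the bear's brain stays put
```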
 
I think you should learn about neurons and how they work.

http://arstechnica.co.uk/informatio...he-singularity-is-not-just-around-the-corner/

Human brain cells are absolutely nothing like an electronic circuit.

We don't know how they work. It's your side that thinks we understand the brain.

There are two main problems for any brain simulator. The first is that the human brain is extraordinarily complex, with around 100 billion neurons and 1,000 trillion synaptic interconnections. None of this is digital: it depends on electrochemical signalling with inter-related timing and analogue components--molecular and biological machinery that we are only just starting to understand.

This yawning chasm of understanding leads to the second big problem: there is no accepted theory of mind that describes what “thought” actually is. Even much simpler brains remain mysterious: the landmark success to date for Blue Brain, reported this year, has been a small 30,000 neuron section of a rat brain that replicates signals seen in living rodents. 30,000 is just a tiny fraction of a complete mammalian brain, and as the number of neurons and interconnecting synapses increases, so the simulation becomes exponentially more complex—and exponentially beyond our current technological reach.

This underlying quandary—attempting to define “thought”—is sometimes referred to as the hard problem, and the results of understanding it called strong AI. People engaged in commercial AI remain sceptical that it will be resolved any time soon, or that it is necessary or even desirable for practical benefits. There is no doubt that artificial intelligences are beginning to do very meaningful work, and that the speed of change of technology will continue to shunt things along, but full-blown sentience still seems very far-fetched.

Let me repeat this bit:

"The first is that the human brain is extraordinarily complex, with around 100 billion neurons and 1,000 trillion synaptic interconnections. None of this is digital: it depends on electrochemical signalling with inter-related timing and analogue components--molecular and biological machinery that we are only just starting to understand."
 
By the point at which the singularity takes place (if it does), we'll surely be very intertwined with biotech and probably be part machine ourselves. So all of our physical abilities will be somewhat better than they are now.

Why would there not be room for both us and AI?

I actually don't think it's a bad thing if the human population isn't that big. Most of us don't provide much value to the universe/other people's lives that will last generations, so really we're just wasting resources.

Perhaps AI will be able to work out if there is a creator/other aliens and answer some of the questions that would take us ages to work out? Maybe they'll transform us (and themselves) into something truly epic.
 
http://arstechnica.co.uk/informatio...he-singularity-is-not-just-around-the-corner/

Human brain cells are absolutely nothing like an electronic circuit.

We don't know how they work. It's your side that thinks we understand the brain.



Let me repeat this bit:

"The first is that the human brain is extraordinarily complex, with around 100 billion neurons and 1,000 trillion synaptic interconnections. None of this is digital: it depends on electrochemical signalling with inter-related timing and analogue components--molecular and biological machinery that we are only just starting to understand."
I studied AI and neural networks at university by the way. :)
 
There's nothing stopping us building AI out of biological neurons, and it wouldn't need to waste a lot of capacity maintaining the rest of a body, for a start.

Isn't that the premise of Blade Runner? They're androids, but as far as I understand it, almost completely biological.
 
Isn't that the premise of Blade Runner? They're androids, but as far as I understand it, almost completely biological.
Ethical issues aside, the merging of genetic engineering, biotechnology, nanotechnology, cybernetics, biomechanics and organic neural networks is likely to result in quite astounding technological progress in the latter half of this century.
 
I studied AI and neural networks at university by the way. :)

I don't doubt you. I, on the other hand, flunked out of uni. No doubt you win on that score!

But when prominent neurosurgeons are giving TV interviews and saying we don't understand how the brain works...

Perhaps you should be giving interviews to let everyone know we /do/ now understand it?

:p
 
But when prominent neurosurgeons are giving TV interviews and saying we don't understand how the brain works...

Perhaps you should be giving interviews to let everyone know we /do/ now understand it?

:p
Neurosurgeons aren't the people you need to be speaking to about how the brain works in terms of neural networks.
 