De Grasse Tyson Takes on Luddites

But in one case we are 100% responsible for the program, and /precisely/ define the outcomes of all branches of that program. The program can be reviewed and analysed by anybody who can read code.

Nope, that is the entire point of AI systems: the developer is not 100% responsible for the program, and the outcomes are anything but precisely defined. The program may not be readable by anyone, and analysis can be extremely complex.

Here is an example. The following are the weights and activation thresholds for a very small neural network. All of these numbers were derived through an entirely stochastic process, using metaheuristic search techniques within an unsupervised learning system. There are 10 stimulus inputs into this neural network, which controls a robot, and 3 outputs to control left, right and speed. The 10 input values are brightness levels (0-255) from 10 evenly spaced locations around a 360° camera. Now how does it work?

Now imagine the same table with hundreds of thousands of columns and hundreds of thousands of rows, where the values change all the time as the system learns.

Code:
11164 36318 75061 37674 26320 75100 10431 20418 19228 91792 
21215 91791 76831 58678 87054 31687 93205 43685 19732 08468 
10438 44482 66558 37649 08882 90870 12462 41810 01806 02977 
36792 26236 33266 66583 60881 97395 20461 36742 02852 50564 
73944 04773 12032 51414 82384 38370 00249 80709 72605 67497 
49563 12872 14063 93104 78483 72717 68714 18048 25005 04151 
64208 48237 41701 73117 33242 42314 83049 21933 92813 04763 
51486 72875 38605 29341 80749 80151 33835 52602 79147 08868 
99756 26360 64516 17971 48478 09610 04638 17141 09227 10606 
71325 55217 13015 72907 00431 45117 33827 92873 02953 85474 
65285 97198 12138 53010 94601 15838 16805 61004 43516 17020 
17264 57327 38224 29301 31381 38109 34976 65692 98566 29550 
95639 99754 31199 92558 68368 04985 51092 37780 40261 14479 
61555 76404 86210 11808 12841 45147 97438 60022 12645 62000 
78137 98768 04689
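To make concrete why the table alone tells you nothing, here is a minimal sketch of how such weights would be applied. The topology (a single layer mapping 10 inputs straight to 3 outputs) and the sigmoid activation are my assumptions for illustration; the real network's structure is not given by the table.

```python
# Minimal sketch of a 10-input, 3-output neural layer (topology assumed;
# the actual network's structure is not stated in the table above).
import math

def forward(brightness, weights, thresholds):
    """Map 10 brightness readings (0-255) to 3 motor outputs (left, right, speed)."""
    inputs = [b / 255.0 for b in brightness]          # normalise to 0..1
    outputs = []
    for j in range(3):                                # one row of weights per output
        activation = sum(w * x for w, x in zip(weights[j], inputs))
        # Fire in proportion to how far the weighted sum clears this threshold.
        outputs.append(1.0 / (1 + math.exp(-(activation - thresholds[j]))))
    return outputs

# Arbitrary illustrative numbers -- NOT the values from the table above.
weights = [[0.1 * (i + j) for i in range(10)] for j in range(3)]
thresholds = [1.0, 2.0, 3.0]
left, right, speed = forward([128] * 10, weights, thresholds)
```

Even with the code in front of you, the behaviour lives entirely in the numbers: change the weight table and the same five lines of arithmetic do something completely different.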

On the other hand, the "program" you refer to with a developing brain is neither fully understood, nor fully under our control.
And a very large neural network from a deep learning system is also not fully understood, and there was likewise no control over how it developed or develops.

How can they not be different situations?
Why are they different?
What happens if we do fully understand the human brain down to every single neuron? Will that instantaneously put the human brain in a different class somehow?

Ie, you can scan a human brain all you like, but you can't print off a precise program predicting exactly what we will say and do next.

We can predict very accurately for a lot of events, though. The human brain is very deterministic: you feed it a stimulus and it responds. Display a triangle and we know which neurons in your brain will fire to process those edges. Flash up a photo of a snake and we know which neurons will fire in response. When someone sees a photo of a friend's face, we know exactly which parts of the brain are operating, much of how the brain is recognizing the face, which neurons will likely fire, and which other parts of the brain will become active.
That is about as much as we know about a very large deep learning network.

A computer system with software is entirely predictable.
No it's not, and quite trivially so: it is very easy to provide stochastic behavior. And even without that trick, this is actually one of the biggest weaknesses in a lot of machine learning. You give an AI system a novel stimulus and you can't be sure at all what it will do; over-fitting and under-sampled data are the biggest challenges. Humans are actually much more predictable in these cases, which is partly why we are so intelligent. Humans have a very stable response; AI systems are currently very fragile and unpredictable.

This is the major technical challenge in autonomous driving: under normal conditions the cars drive as well as or better than humans, but push some parameters outside the learned envelope and they may fail catastrophically in completely unpredictable ways.
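The learned-envelope point can be sketched with a toy of my own making (not from the thread): a nearest-neighbour "model" trained only on low speeds must still answer for a speed far outside anything it has seen, and gives no signal that it is extrapolating.

```python
# Toy illustration: a 1-nearest-neighbour "policy" trained only on speeds
# 0-60 still confidently answers for a speed of 500, with no indication
# that the query is far outside its learned envelope.
training = [(10, "cruise"), (30, "cruise"), (55, "brake"), (60, "brake")]

def predict(speed):
    # Always returns the label of the closest training point, however distant.
    return min(training, key=lambda pair: abs(pair[0] - speed))[1]

in_envelope = predict(32)      # nearest training point is 30 -> "cruise"
out_of_envelope = predict(500) # nearest is 60 -> "brake", but really undefined
```

The failure mode is not an error or an exception; the system simply returns its nearest memory as if nothing were wrong.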
 
Thanks for that bit about the TRNG. Doesn't in any way answer my question, however.

I can write code which makes no use of the RNG (most systems that aren't games don't want too much randomness in them, because by and large randomness is not very helpful. Yes, it's good for encryption).

I can run it a million times on a set of data and it performs /exactly/ what I've told it to perform, every time. I control the machine very precisely, and for any given input I can tell you what the output will be, exactly.

The "program" in a living brain is not the same, at all. Neither do I fully understand it, nor am I solely/fully responsible for it, nor can I even be sure what it is at any point in the brain's development.

Why is this so hard?
 
It did, you can build hardware RNG into a system to introduce true randomness if you want.

In computing, a hardware random number generator (TRNG, True Random Number Generator) is a device that generates random numbers from a physical process, rather than a computer program. Such devices are often based on microscopic phenomena that generate low-level, statistically random "noise" signals, such as thermal noise, the photoelectric effect, and other quantum phenomena. These processes are, in theory, completely unpredictable, and the theory's assertions of unpredictability are subject to experimental test.

But the randomness is "physical process, rather than a computer program". Which is why I don't think it answers the question?
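The distinction both posts are circling can be shown with Python's standard library: a seeded pseudo-random generator is a pure program and fully reproducible, while `os.urandom` draws from the operating system's entropy pool, which may be fed by hardware noise sources.

```python
import os
import random

# A pseudo-random generator is just a program: same seed, same sequence, forever.
rng_a = random.Random(42)
rng_b = random.Random(42)
replayable = [rng_a.random() for _ in range(5)] == [rng_b.random() for _ in range(5)]

# os.urandom draws from the OS entropy pool, which may be fed by hardware
# noise -- there is no seed you could replay to reproduce these bytes.
hardware_flavoured = os.urandom(16)
```

So whether randomness "counts" depends on where it comes from: the first kind is entirely a computer program; the second leaks the physical world into the machine.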

Isn't singularity the point at which programs are no longer doing just what we tell them to do?

Also, you can track brain activity, we just don't know enough about it yet, from what I understand.

I seem to recall reading that some automatically generated trading algorithms are so complex that people don't understand them any more.
 
I also can't look at an assembled program and tell you what it does in five mins flat ;)

Is that data not generated by a program written in conventional code? Is that data random or is it an intermediate result, the equivalent of compiled code?
 
Those are neural network weights, much the same as action potential thresholds in a biological neuron.
 
I'm assuming they aren't random, because no system will perform a useful action with purely random data, to my knowledge.

So they must be generated by something.
 
I would disagree, the program can generate a random number by requesting it from a hardware generator.

Thinking about it, can we actually generate a random number ourselves? It'll subconsciously be based on something surely?

I guess it can get quite philosophical; what is random? Is anything actually random?

Does anything actually exist? Are we all actually an experiment of super intelligent AI?

I'm going to bed :o

I quote wiki again:

It is generally accepted that there exist three mechanisms responsible for (apparently) random behavior in systems:

Randomness coming from the environment (for example, Brownian motion, but also hardware random number generators)
Randomness coming from the initial conditions. This aspect is studied by chaos theory and is observed in systems whose behavior is very sensitive to small variations in initial conditions (such as pachinko machines and dice).
Randomness intrinsically generated by the system. This is also called pseudorandomness and is the kind used in pseudo-random number generators. There are many algorithms (based on arithmetics or cellular automaton) to generate pseudorandom numbers. The behavior of the system can be determined by knowing the seed state and the algorithm used. These methods are often quicker than getting "true" randomness from the environment.
The many applications of randomness have led to many different methods for generating random data. These methods may vary as to how unpredictable or statistically random they are, and how quickly they can generate random numbers.

Before the advent of computational random number generators, generating large amounts of sufficiently random numbers (important in statistics) required a lot of work. Results would sometimes be collected and distributed as random number tables.

Measures and tests

There are many practical measures of randomness for a binary sequence. These include measures based on frequency, discrete transforms, and complexity, or a mixture of these. These include tests by Kak, Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman.
 
I'm not sure why we're stuck on random numbers here.

Most likely the number is used to jump/not jump.

In either case, the behaviour of the program is known. Both branches are determined by the programmer, precisely.

If I ask for int(ran()*5), choose branch A for 0-3, and branch B for 4, I still know exactly what code is run in both situations. I have precise control. Exact control.
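That point can be sketched in Python (a minimal, hypothetical example): even when a random draw picks the branch, both branches are code the programmer wrote in full.

```python
import random

def step(rng=random.random):
    # Even with a random input, every branch is written down by the programmer:
    # draw an integer 0-4, take branch A on 0-3 and branch B on 4.
    n = int(rng() * 5)
    if n <= 3:
        return "A"      # branch A: known code, known behaviour
    return "B"          # branch B: known code, known behaviour

# Whatever the generator yields, the result is one of two fully specified paths.
outcomes = {step() for _ in range(1000)}
```

The randomness chooses *between* behaviours; it never creates a behaviour nobody specified.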

In a sufficiently complex organic brainputer, can the same be said? What if, given all the inputs to this brainputer, you aren't sure what it will do? What if it can reason from prior experience, like we do? Apply lessons from a different problem to its current problem, like we do?

That is what I mean by a different set of ethical problems. (The idea of a brainputer of sufficient complexity to reason being stuck in a car is itself problematic).

If I write a conventional program to drive a car, I am solving a specific set of problems with a narrow set of well defined rules.

If a brainputer of sufficient complexity is driving, its behaviour could be entirely unpredictable. It might even be able to justify its decisions, whereas a conventional program has nothing to justify.

And I guess what I'm saying is that we're not going to stop at 50,000 neuron brainputers. We'll be making them bigger and more complex until the ethical issues are so far removed from a conventional computer program that we'll be in entirely uncharted territory.
 
Ethical matters only really matter to the general public; you can rest assured that progress will be made behind closed doors regardless.
 
They are generated by a stochastic search algorithm in an unsupervised manner.
EDIT: Knowing the algorithm that generated the weights doesn't help, because the weights were arrived at by an essentially unknown random interaction with the environment.


The point is: it is absolutely not clear what the program does. You would have to spend a lot of time testing, and this is a tiny neural network. Imagine looking at a table of millions of such weights and trying to work out what the system will do for a given stimulus - you can't, any more than you can predict what the human brain will do with a given stimulus.
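For contrast with conventional code, here is a minimal sketch of how such a weight table might be found. The real system's metaheuristic is not specified in the thread; this random hill-climb and its fitness function are stand-ins of my own.

```python
import random

# Minimal random hill-climb (a stand-in for the unspecified metaheuristic):
# weights are nudged at random and kept only when a fitness score improves,
# so two runs end at different weight tables with similar behaviour.
def fitness(w):
    # Hypothetical objective: respond strongly on the first sensor, stay small elsewhere.
    return w[0] - sum(abs(x) for x in w[1:]) * 0.1

def search(steps=200, seed=None):
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(10)]
    for _ in range(steps):
        candidate = [w + rng.gauss(0, 0.1) for w in weights]
        if fitness(candidate) > fitness(weights):
            weights = candidate           # keep the mutation only if it scored better
    return weights

run1, run2 = search(seed=1), search(seed=2)
# The two final tables differ, yet neither one "reads" as a program:
# the logic lives in the history of accepted random mutations.
```

Nothing in the finished table records *why* any weight has the value it has; that information was thrown away with each accepted mutation.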
 
How confident are you that a grown neural device (brainputer) functions the same way as your designed & created neural net? As far as I understand it (not far), brains are self-modifying, adding new neural pathways as we learn (neuroplasticity).

Neuroplasticity (brain plasticity, cortical plasticity) was introduced by Polish scientist
Jerzy Konorski [1, 2]. It is perceived flexibility of a nervous system - an ability to be changed
(both in functional and structural areas) by processed signals (i. e. through activity, changes in
environment, etc.). Neuronal activity associated with particular function (even: whole location
of this activity within hemispheres) may change. This is a result of the nervous system
reorganization during normal performance (e. g. shaped by environment) or damage.
Underlying mechanisms are complex and widespread anatomical, physiological, and
biochemical changes within the (survived) neural circuits. This phenomenon is perceived
basis of learning, memory, developmental changes, and compensational changes thanks to
neurorehabilitation after nervous system damages, etc

Is that also a feature of designed & created neural nets in hardware/software?
 
Yes, neural plasticity is a feature and design goal of advanced artificial neural networks - things like spiking nets, continuous-time recurrent networks, etc. What that comes down to in practice is normally adapting the network parameters - that table of numbers I posted above; they will change as the network learns and constantly adapts to its new environment.

This is actually somewhere an artificial system could be made much more resilient and plastic than a biological brain. Brain damage, for example, has a very poor prognosis in most cases; biological brains can create new axonal pathways but are weaker at generating new neurons.
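What "adapting the network parameters" looks like can be sketched with a simple delta-rule update (my illustration; spiking and continuous-time recurrent models use richer dynamics): the weight table keeps rewriting itself as stimuli arrive, loosely analogous to synaptic change.

```python
# Sketch of artificial "plasticity": a delta-rule update that keeps adjusting
# the weight table as new stimuli arrive. Illustrative only -- real plastic
# networks (spiking nets, CTRNNs) use far richer update dynamics.
def adapt(weights, stimulus, target, rate=0.1):
    prediction = sum(w * x for w, x in zip(weights, stimulus))
    error = target - prediction
    # Each weight moves in proportion to its input's share of the error.
    return [w + rate * error * x for w, x in zip(weights, stimulus)]

weights = [0.0, 0.0, 0.0]
for _ in range(50):                      # repeated exposure to one stimulus
    weights = adapt(weights, [1.0, 0.5, 0.0], target=1.0)
# The table has rewired itself toward producing the target response.
```

Note that the weight on the silent third input never changes: pathways that carry no signal are left alone, which is the artificial analogue of activity-dependent plasticity.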
 