But in one case we are 100% responsible for the program, and /precisely/ define the outcomes of all branches of that program. The program can be reviewed and analysed by anybody who can read code.
Nope, that is the entire point of AI systems: the developer is not 100% responsible for the program, and the outcome is anything but precisely defined. The program may not be readable by anyone, and analysis can be extremely complex.
Here is an example. The following are the weights and activation thresholds for a very small neural network. All of these numbers were derived through an entirely stochastic process, using metaheuristic search techniques within an unsupervised learning system. There are 10 stimulus inputs into this neural network, which controls a robot, and 3 outputs controlling left, right, and speed. The 10 input values are brightness levels (0–255) from 10 evenly spaced locations around a 360° camera. Now, how does it work?
Now imagine the same table with hundreds of thousands of columns and hundreds of thousands of rows, where the values change all the time as the system learns.
Code:
11164 36318 75061 37674 26320 75100 10431 20418 19228 91792
21215 91791 76831 58678 87054 31687 93205 43685 19732 08468
10438 44482 66558 37649 08882 90870 12462 41810 01806 02977
36792 26236 33266 66583 60881 97395 20461 36742 02852 50564
73944 04773 12032 51414 82384 38370 00249 80709 72605 67497
49563 12872 14063 93104 78483 72717 68714 18048 25005 04151
64208 48237 41701 73117 33242 42314 83049 21933 92813 04763
51486 72875 38605 29341 80749 80151 33835 52602 79147 08868
99756 26360 64516 17971 48478 09610 04638 17141 09227 10606
71325 55217 13015 72907 00431 45117 33827 92873 02953 85474
65285 97198 12138 53010 94601 15838 16805 61004 43516 17020
17264 57327 38224 29301 31381 38109 34976 65692 98566 29550
95639 99754 31199 92558 68368 04985 51092 37780 40261 14479
61555 76404 86210 11808 12841 45147 97438 60022 12645 62000
78137 98768 04689
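The post never says how these 143 numbers map onto the network, but one hypothetical reading that fits the count exactly is 10 inputs → 10 hidden units → 3 outputs: 100 + 30 = 130 weights plus 13 activation thresholds. The sketch below evaluates the table under that assumed layout; the topology, scaling, and threshold placement are all guesses for illustration.

```python
# Hypothetical interpretation: the 143 table values fill a 10 -> 10 -> 3
# feed-forward network (130 weights + 13 thresholds). Values are scaled
# to [0, 1); a unit fires when its weighted input sum exceeds its threshold.

TABLE = """
11164 36318 75061 37674 26320 75100 10431 20418 19228 91792
21215 91791 76831 58678 87054 31687 93205 43685 19732 08468
10438 44482 66558 37649 08882 90870 12462 41810 01806 02977
36792 26236 33266 66583 60881 97395 20461 36742 02852 50564
73944 04773 12032 51414 82384 38370 00249 80709 72605 67497
49563 12872 14063 93104 78483 72717 68714 18048 25005 04151
64208 48237 41701 73117 33242 42314 83049 21933 92813 04763
51486 72875 38605 29341 80749 80151 33835 52602 79147 08868
99756 26360 64516 17971 48478 09610 04638 17141 09227 10606
71325 55217 13015 72907 00431 45117 33827 92873 02953 85474
65285 97198 12138 53010 94601 15838 16805 61004 43516 17020
17264 57327 38224 29301 31381 38109 34976 65692 98566 29550
95639 99754 31199 92558 68368 04985 51092 37780 40261 14479
61555 76404 86210 11808 12841 45147 97438 60022 12645 62000
78137 98768 04689
"""

vals = [int(t) / 100000 for t in TABLE.split()]              # 143 values in [0, 1)
w_hidden = [vals[i * 10:(i + 1) * 10] for i in range(10)]    # 10 x 10 input weights
w_out = [vals[100 + j * 10:110 + j * 10] for j in range(3)]  # 3 x 10 output weights
thresholds = vals[130:]                                      # 13 thresholds

def step(x, t):
    """Binary threshold activation: the unit fires if its input exceeds t."""
    return 1.0 if x > t else 0.0

def forward(pixels):
    """pixels: 10 brightness readings (0-255) from around the camera."""
    x = [p / 255 for p in pixels]
    hidden = [step(sum(w * xi for w, xi in zip(row, x)), thresholds[i])
              for i, row in enumerate(w_hidden)]
    # Three outputs: left, right, speed.
    return [step(sum(w * h for w, h in zip(row, hidden)), thresholds[10 + j])
            for j, row in enumerate(w_out)]

left, right, speed = forward([200, 180, 40, 30, 25, 30, 45, 90, 150, 190])
```

Even with the full table in hand and a working `forward` function, nothing here explains *why* the robot turns left for a given scene; the table is the entire program, yet it answers no "how does it work" question.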
And a very large neural network from a deep learning system is also not fully understood, and there was likewise no control over how it developed or develops.

On the other hand, the "program" you refer to with a developing brain is neither fully understood, nor fully under our control.
Why are they different? How can they not be different situations?
What happens if we do fully understand the human brain, down to every single neuron? Will that instantaneously put the human brain in some different class?
I.e., you can scan a human brain all you like, but you can't print off a precise program predicting exactly what we will say and do next.
We can predict very accurately for a lot of events, though. The human brain is very deterministic: you feed it a stimulus and it responds. Display a triangle and we know what neurons in your brain will fire to process those edges. Flash up a photo of a snake and we know what neurons will fire in response. When someone sees a photo of a friend's face, we know exactly what parts of the brain are operating, much of how the brain recognizes the face, what neurons will likely fire, and what other parts of the brain will become active.
That is about as much as we know about a very large deep-learning network.
No, it's not, and quite trivially so: it is very easy to give software stochastic behavior. And even without that trick, this is actually one of the biggest weaknesses in a lot of machine learning. Give an AI system a novel stimulus and you can't be sure at all what it will do; over-fitting and under-sampled data are the biggest challenges. Humans are actually much more predictable in these cases, which is partly why we are so intelligent. Humans have a very stable response; AI systems are currently very fragile and unpredictable.

A computer system with software is entirely predictable.
This is the major technical challenge in autonomous driving: under normal conditions the cars drive as well as or better than humans, but push some parameters outside the learned envelope and they may fail catastrophically in completely unpredictable ways.
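This failure mode can be sketched with a toy classifier (all data hypothetical): a nearest-neighbour model behaves sensibly inside its training envelope, yet returns an equally confident answer for an input far outside anything it has ever seen, with no notion of "I don't know".

```python
# Toy illustration of fragility outside the learned envelope.
# Training data is hypothetical: 2-D sensor readings labelled "stop"/"go".

train = [((0.1, 0.2), "stop"), ((0.2, 0.1), "stop"),
         ((0.8, 0.9), "go"),   ((0.9, 0.8), "go")]

def classify(x):
    # 1-nearest-neighbour: always produces a label, however novel the
    # input, simply because some training point is nearest.
    nearest = min(train, key=lambda p: sum((a - b) ** 2
                                           for a, b in zip(p[0], x)))
    return nearest[1]

print(classify((0.15, 0.15)))  # in-distribution: sensible answer
print(classify((57.0, -3.2)))  # wildly novel input still gets a label
```

The model never signals that the second input is outside its experience; it just picks whichever training point happens to be least far away.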


