Human Rights For Robots.

Which is exactly what they did.

I'm really not seeing how they did anything other than fulfil the stated goals set by their programming.

They were intended to seek out "food" and "breed", which is what they did.

I'd be impressed if they'd spontaneously developed a new language or something outside the goals their human designers expressly gave them.
 
I'm really not seeing how they did anything other than fulfil the stated goals set by their programming.

They were intended to seek out "food" and "breed", which is what they did.

I'd be impressed if they'd spontaneously developed a new language or something outside the goals their human designers expressly gave them.

This is seen all the time. Some friends published a paper in the prestigious journal Nature describing exactly that.
 
I'm really not seeing how they did anything other than fulfil the stated goals set by their programming.

They were intended to seek out "food" and "breed", which is what they did.

I'd be impressed if they'd spontaneously developed a new language or something outside the goals their human designers expressly gave them.

They were meant to cooperate and help each other.

Instead some decided to be dicks and kill their competition to increase their own "genes'" odds.
 
What is an artificial neural network if not an emulated physical network?

The brain is a physical neural net. Billions of neurons firing in parallel. Billions upon billions of connections and pathways, working independently but coherently - and crucially, simultaneously.

Your software neural net can have thousands of artificial "neural pathways", but they have to be processed in some kind of order by the machine that's emulating them for the data to make any sense. In the worst case, in serial. Even if that machine is a supercomputer, it's still nowhere near capable of the feats of parallel processing that the brain performs every microsecond.

Is it not emulation?


You are clearly out of your depth. An ANN is not emulating anything.

ANNs don't work in serial, wrong again.

Within the brain there has to be a coherent ordering of information, or by definition it is not doing anything intelligent, just something random.
 
http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other

There we go, and this was 8 years ago.

The experiment involved 1,000 robots divided into 10 different groups. Each robot had a sensor, a blue light, and its own 264-bit binary code "genome" that governed how it reacted to different stimuli. The first generation robots were programmed to turn the light on when they found the good resource, helping the other robots in the group find it.
The robots got higher marks for finding and sitting on the good resource, and negative points for hanging around the poisoned resource. The 200 highest-scoring genomes were then randomly "mated" and mutated to produce a new generation of programming. Within nine generations, the robots became excellent at finding the positive resource, and communicating with each other to direct other robots to the good resource.
However, there was a catch. A limited amount of access to the good resource meant that not every robot could benefit when it was found, and overcrowding could drive away the robot that originally found it.
After 500 generations, 60 percent of the robots had evolved to keep their light off when they found the good resource, hogging it all for themselves. Even more telling, a third of the robots evolved to actually look for the liars by developing an aversion to the light; the exact opposite of their original programming.

There, exactly what you said never happened.
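
The loop the article describes is essentially a textbook genetic algorithm: score each genome, keep the top 200, mate and mutate them, repeat. Here is a rough Python sketch of that kind of loop, with a placeholder fitness function and made-up details rather than the researchers' actual code:

```python
# Hypothetical sketch of the evolutionary loop described in the article.
# The encoding and fitness here are placeholders for illustration only;
# in the experiment, fitness came from points for sitting on the good
# resource and penalties for hanging around the poisoned one.
import random

GENOME_BITS = 264       # matches the 264-bit "genome" mentioned in the article
POPULATION = 1000
KEEP = 200              # top scorers that get "mated" each generation
MUTATION_RATE = 0.01

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_BITS)]

def fitness(genome):
    # Placeholder stand-in for the robots' resource-finding score.
    return sum(genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_BITS)
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POPULATION)]
for generation in range(9):  # "within nine generations..."
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:KEEP]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POPULATION)]

print("best fitness:", fitness(max(population, key=fitness)))
```

The point is that nothing in the loop itself says "lie about the resource"; that behaviour falls out of the selection pressure once honesty stops paying.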
 
Oh great, another Glaucus.

Tell me, why should any of us bother to reply to your one-liners? You can't further a discussion with one-line put-downs.

In the world of academia, no one behaves like this and expects to be taken seriously.



I've never seen a piece of software that solved a problem its human creator did not give it the capability and the method to solve. Have you? Have you seen software built for facial recognition that chooses to operate the lights instead?



Have you seen software which shows any "understanding", or that shows signs of developing "understanding", DP? Have you seen software that is "curious" about its environment? Have you seen software attempt to learn from data that it wasn't supposed to be working with?

When you write software, when do you not control the dataset that the software works on? When is that dataset "the universe", unbounded by constraints? When is the software given the dataset with no instructions on what to do with it?

You say I have no understanding of software, and who would know better than you, eh? Actually I've programmed in everything from Pascal to Prolog, assembler, Javascript, C, SQL, Haskell, Python and others*. *(Not commercially, and I make no claims to be any good at it, compared to a professional developer. Nor do I claim to know much about best practice. But I can make a computer say "Hello world" just fine :p)



I have a PhD and a BSc in Artificial Intelligence and 15 years' experience applying machine learning to some of the toughest real-world problems. I work with experts on state-of-the-art machine learning, regularly review papers on artificial intelligence, and attend many conferences to keep up to date with the state of the art. I have been learning about AI for the last 20 years. I have many papers in good peer-reviewed journals, have co-authored dozens more, and have won prizes. I have an extensive professional network of friends and acquaintances within AI, from industry to academia. I have met and talked with some of the most influential academic experts of our time, and I have discussed and pitched ideas and business to head researchers at many of the world's largest IT firms. There is a reason I command a six-figure salary for working on state-of-the-art machine learning.


You are simply out of your depth and it shows.
 
I don't share their enthusiasm either, but I think your reply might well be mistaken. A full understanding of how something works isn't necessary for making it. It's possible to make something with no understanding at all of how it works. For example, humans >15,000 years ago had no understanding of how genes work or even that genes exist but they were still able to genetically engineer animals through selective breeding in order to create dogs and different breeds of dogs. Stone age humans had no understanding of how smelting works or how alloying works but they were able to reliably do both. Bronze age (and possibly stone age) humans had no understanding of bacteria or even any knowledge that bacteria existed or even that any such thing could exist, but they still made and used treatments using penicillin. Etc.

So I think it's possible that humans might at some point make a mind without anything close to an understanding of how a mind works. Or even what a mind really is.


Moreover, the human brain is just one possible configuration of molecules that yields intelligence.
 
I have a PhD and a BSc in Artificial Intelligence and 15 years' experience applying machine learning to some of the toughest real-world problems. I work with experts on state-of-the-art machine learning, regularly review papers on artificial intelligence, and attend many conferences to keep up to date with the state of the art. I have been learning about AI for the last 20 years. I have many papers in good peer-reviewed journals, have co-authored dozens more, and have won prizes. I have an extensive professional network of friends and acquaintances within AI, from industry to academia. I have met and talked with some of the most influential academic experts of our time, and I have discussed and pitched ideas and business to head researchers at many of the world's largest IT firms. There is a reason I command a six-figure salary for working on state-of-the-art machine learning.

You are simply out of your depth and it shows.

I would quite happily bet everything I own and everything in my bank balance - at any point in my life - on the fact that we will not develop sentient, thinking machines in my lifetime (or decades after I'm dead).

The idea is ridiculous.

But, just for the lols, tell us when you personally expect such a thing. How close you think we are.

And also on a social level, if you think human society would allow such a thing to be created, to exist, and to act autonomously. I can't see it. There would be riots, and rightly so.

e: It's also curious how you won't address any question I ask you, but just respond with "I've got a PhD" and more put-downs. Surely with all your expertise you should be able to answer the actual questions put to you. But hey ho.
 
You are clearly out of your depth. An ANN is not emulating anything.

ANNs don't work in serial, wrong again.

Within the brain there has to be a coherent ordering of information, or by definition it is not doing anything intelligent, just something random.

And this is where I start to get suspicious of your claims, D.P.

Here's an extract from a patent filed in 2012, so not that long ago. Indexed by Google. It agrees with what I've said.

Neural networks that have been developed to date are largely software-based. A true neural network (e.g., the human brain) is massively parallel (and therefore very fast computationally) and very adaptable. For example, half of a human brain can suffer a lesion early in its development and not seriously affect its performance. Software simulations are slow because during the learning phase a standard computer must serially calculate connection strengths. When the networks get larger (and therefore more powerful and useful), the computational time becomes enormous. For example, networks with 10,000 connections can easily overwhelm a computer. In comparison, the human brain has about 100 billion neurons, each of which is connected to about 5,000 other neurons. On the other hand, if a network is trained to perform a specific task, perhaps taking many days or months to train, the final useful result can be etched onto a piece of silicon and also mass-produced.
[0012]
A number of software simulations of neural networks have been developed. Because software simulations are performed on conventional sequential computers, however, they do not take advantage of the inherent parallelism of neural network architectures. Consequently, they are relatively slow. One frequently used measurement of the speed of a neural network processor is the number of interconnections it can perform per second. For example, the fastest software simulations available can perform up to about 18 million interconnects per second. Such speeds, however, currently require expensive super computers to achieve. Even so, 18 million interconnects per second is still too slow to perform many classes of pattern classification tasks in real time. These include radar target classifications, sonar target classification, automatic speaker identification, automatic speech recognition, and electro-cardiogram analysis, etc.
[0013]
The implementation of neural network systems has lagged somewhat behind their theoretical potential due to the difficulties in building neural network hardware. This is primarily because of the large numbers of neurons and weighted connections required. The emulation of even of the simplest biological nervous systems would require neurons and connections numbering in the millions. Due to the difficulties in building such highly interconnected processors, the currently available neural network hardware systems have not approached this level of complexity. Another disadvantage of hardware systems is that they typically are often custom designed and built to implement one particular neural network architecture and are not easily, if at all, reconfigurable to implement different architectures. A true physical neural network chip, for example, has not yet been designed and successfully implemented.

Source:
https://www.google.com/patents/US20120150780
 
I answered your question about why you should listen to me by providing my credentials.

The reason I am making short and blunt replies is that I am throwing up every hour, replying on my phone between bouts of vomiting.

You are first going to have to provide your personal definitions of these concepts, because they don't follow academic definitions:
intelligence
thinking
learning
adaptation
sentience


To answer your question, we already have thinking machines. I don't have a personal prediction for sentience, but one of the world's experts on the matter, Dr Holland, was on my doctoral thesis examination committee. He predicted within 20-30 years, and that was already some time back.
 
Eh? Thinking, reasoning, philosophy, curiosity, etc, all require consciousness. A mind.

I'm sure you're not saying we've developed conscious machines with a mind of their own.

So whatever definition of thinking (etc) you're using must be more abstract.

Plus have a look a couple posts up about the emulation point I made earlier.
 
And this is where I start to get suspicious of your claims, D.P.

Here's an extract from a patent filed in 2012, so not that long ago. Indexed by Google. It agrees with what I've said.



Source:
https://www.google.com/patents/US20120150780

I guess you have never heard of deep learning on GPUs, CUDA NNs, etc., computing thousands of calculations in parallel, let alone the dedicated ASICs and FPGA hardware doing tens of thousands in parallel, and then combining thousands of those chips to run millions of calculations in parallel.



Not that it makes a difference at all; only total throughput matters.
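
To illustrate the parallelism point, here is a minimal sketch (assuming PyTorch and an optional CUDA device; the layer sizes are arbitrary, not anyone's real model): the multiply-accumulates for every neuron in a layer are issued as a single matrix product, which the hardware evaluates across many cores at once rather than connection-by-connection in serial.

```python
# Minimal sketch: a dense layer's forward pass as one matrix product.
# On a GPU the ~4 million weights below are applied across thousands of
# cores simultaneously, not one connection at a time.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

inputs = torch.randn(64, 1024, device=device)     # a batch of 64 input vectors
weights = torch.randn(1024, 4096, device=device)  # one layer: 1024 -> 4096 units
bias = torch.zeros(4096, device=device)

activations = torch.relu(inputs @ weights + bias)
print(activations.shape)  # torch.Size([64, 4096])
```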
 
Well the whole point is moot if you assert we already have "thinking" machines.

I'm curious as to what criteria you have for thought.

Because now it seems we have to distinguish between human-level thought processes and whatever "thinking" current machines are able to perform. I'm sure they aren't equivalent.
 
Eh? Thinking, reasoning, philosophy, curiosity, etc, all require consciousness. A mind.

I'm sure you're not saying we've developed conscious machines with a mind of their own.

So whatever definition of thinking (etc) you're using must be more abstract.

Plus have a look a couple posts up about the emulation point I made earlier.

Thinking is merely using intelligence to deduce a rational outcome. Machines that do that are common as muck.

Since when does reasoning require a mind? What does philosophy have to do with thinking?


I explicitly said we don't have sentient machines currently, but we certainly have thinking machines. My Amazon Fire is currently thinking about what TV shows I might want to watch, using reasoning and intelligence to provide rational and thoughtful suggestions.



As for emulation: some researchers try to emulate biological neural nets to understand physical processes. No one working on machine intelligence tries to emulate nature; it's a pointless waste of CPU cycles if your goal is to engineer a viable solution.
 
Well the whole point is moot if you assert we already have "thinking" machines.

I'm curious as to what criteria you have for thought.

Because now it seems we have to distinguish between human-level thought processes and whatever "thinking" current machines are able to perform. I'm sure they aren't equivalent.

I've asked you to give your definitions because they likely don't match academic definitions.
 
I explicitly said we don't have sentient machines currently, but we certainly have thinking machines. My Amazon Fire is currently thinking about what TV shows I might want to watch, using reasoning and intelligence to provide rational and thoughtful suggestions.

I'm sorry but I can't take this seriously.

Your Amazon Kindle Fire is "thinking" and using its "intelligence"? I really don't see the point in continuing when your (and if what you say is true, the whole academic community's) idea of "thought" and "intelligence" is so far removed from the layman's (my) idea of "thought" and "intelligence".

They are clearly not the same, and therefore I can't bring myself to go down this particular rabbit hole.

I guess the word "intelligence" counts for very little if an Amazon Echo can legitimately be called an intelligent, thinking machine.
 
See, your definition of intelligence is completely outside the academic community's.
Again, please define what you think intelligence is.
 
Here is the dictionary definition, for good measure:


Definition of intelligence
1 a (1): the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason
(2): the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)


It is also worthwhile understanding what things like "military intelligence", signals intelligence, business intelligence, intelligence tests, etc. mean.
 
Intelligence includes the following (this list is not exhaustive). Certain things on this list may not technically define intelligence, but arise as a result of intelligence.

Intuition
Reasoning and deduction, problem solving
Posturing and theorising
Using experience (patterns) from a non-related field to solve a problem in a new, hitherto unseen problem field, without being shown or prompted
Understanding, comprehension
Defining and applying context to a situation
Discovery and exploration
Curiosity
The self-driven desire to seek new knowledge and experience
Using tools, inventing new tools
Exploring relationships with others
Having goals, aspirations
Conflict resolution
"Higher-level" thinking. Philosophy.
Ability to invent and work with abstract concepts.

There's many more but this is a start.

I've deliberately tried to frame these in human terms, to narrow down on human intelligence.
 