Human Rights For Robots.

Can confirm this AI stuff is getting to your head. "they are intelligent"?

It, lol.

But then it is not intelligent by your own definition. A computer cannot be "rational" and cannot have "mental" capacity; those things require a psyche, an identification of one's self, self-awareness. They are psychological processes, not algorithmic computations. You go on about not mimicking/emulating human traits, but then you totally forget you're talking about mimicking intelligence lol.

That an algorithm appears to do something which could be considered "intelligent" relative to human mental capability does not mean it is literally intelligent.


I think you need to buy yourself a dictionary. Something which acts intelligently is intelligent; that is the definition of intelligence.
 
D.P. said:
Definition of intelligent
1
a : having or indicating a high or satisfactory degree of intelligence and mental capacity
b : revealing or reflecting good judgment or sound thought : skillful

2
a : possessing intelligence
b : guided or directed by intellect : rational

A machine has no mental capacity ('of or relating to the mind' - machines have no mind).

A machine has no judgement or thinking ability. A machine can evaluate, compare, pattern match - not exercise good judgement (or poor judgement).

A quick look at the word "intellect" has most definitions related to mental capacity, reasoning, understanding, comprehension, thought, judgement, wisdom.

You typically don't describe a machine as "wise" or any of those other adjectives.

Machines are only "intelligent" if you use an extremely lax definition of that word; indeed, you provided a definition earlier that explicitly defined a machine which could compute as "intelligent".

I'm afraid that many of us find such definitions of intelligence contrary to common sense, and wholly unsatisfactory.
 
Do we have "judgement" any more than a well programmed machine? Surely our entire basis for judgement is based on what we've been "programmed" with during our lives?
 
Do we have "judgement" any more than a well programmed machine? Surely our entire basis for judgement is based on what we've been "programmed" with during our lives?

Consider that we have internal struggles, competing and contradictory motivations, etc. We have desires that we can choose to satisfy or ignore. Traits that we can dislike and work on eradicating, or traits that we admire in others and want to replicate.

We have base instinct but we can override it with a conscious effort.

E.g., on learning that a person tends to choose the right door if he's right-handed and the left door if he's left-handed, a person who remembers this fact can consciously decide to choose the opposite door to the one he would be inclined to take. Silly example, but it shows how we have the ability to make high-level decisions which override our "programming".
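The door example can even be sketched in code: a toy decision procedure (all names here are hypothetical, invented purely for illustration) which, once it knows its own bias, deliberately inverts it:

```python
def instinctive_door(handedness):
    """Baseline 'programming': right-handers favour the right door."""
    return "right" if handedness == "right" else "left"

def conscious_door(handedness, knows_the_bias):
    """A higher-level decision that can override the instinctive one."""
    instinct = instinctive_door(handedness)
    if knows_the_bias:
        # Deliberately pick the opposite of the instinctive choice.
        return "left" if instinct == "right" else "right"
    return instinct

print(conscious_door("right", knows_the_bias=False))  # right
print(conscious_door("right", knows_the_bias=True))   # left
```

Of course, the toy version only pushes the question back a level: the override rule is itself just more programming, which is exactly the point being debated.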

Sometimes what we do depends on our mood when we get up in the morning ;)

The idea that we have programming which in any way resembles the controlling logic of a machine is really at odds with reality.

To accept/assert that we are pre-programmed to any significant degree is to reduce every one of our lives to meaninglessness. We would simply be going through the motions, with no real choice at all. Like a machine.
 
What he said was pretty literal. I simply do not understand how you have deciphered what you have just said from his post. "intelligence" is literal, I simply refuse to alter the definition of intelligence just so some students can call their computers "intelligent".

How can you emulate intelligence without emulating how human brains work when intelligence is bound to and is a product of brain matter?

The issue here is that some people do not understand what intelligence is, and continually misinterpret the "intelligence" in "artificial intelligence" to mean literal intelligence. As I have already said, a computer performing programmed algorithmic calculations and yielding useful answers is not evidence of some sort of internal intelligence in the computer lmao.

A rocket and bipedal walking are two completely different things with two completely different goals, and I don't see what that has to do with AI.

Are you telling me the rocket is/is supposed to be "intelligent"?

What's wrong with an unintelligent rocket such as when they flew to the moon?

Also, a car and walking are two separate things, and I don't understand what this has to do with AI either. I can drive better than any autonomous system. Things like the Tesla system are not "intelligent" and never will be. They are only as good as the algorithms programmed into them. They cannot recognise and react to something they haven't been programmed for, whereas I as a human can.




A car or rocket is NOT blanket "better" nor "more efficient" than bipedal walking lmao. Saying a rocket/car is more efficient than walking simply shows that you are failing to recognise all the variables and factors involved in the physical traversal of an obstacle.

The obstacle between the Moon and Earth is gravity, air and space. This is why thrust is required to traverse between the two. Just because a rocket can get to the Moon doesn't simply make it a better mode of transport than two legs :D. Do you realise how much energy a rocket requires just to counter gravity??? Yet I can launch myself into the air with my bipedal system using barely any energy at all!



I got that from what he said, because I understand English and that's what he said.



And lol, seriously, you're now trying to argue that bipedal walking platforms would be better than cars, lorries, trains, etc.?


Wheels are inherently more efficient, as they don't need to pick themselves up each time.
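The "wheels don't pick themselves up" point can be made roughly quantitative for level ground. A back-of-envelope sketch; every number below is an assumption chosen for illustration, not measured data:

```python
# Order-of-magnitude comparison of walking vs rolling on flat ground.
# All constants are illustrative assumptions.
G = 9.81           # gravitational acceleration, m/s^2
MASS = 70.0        # kg, assumed mass being moved
DISTANCE = 1000.0  # metres travelled

# Walking: each step the centre of mass is lifted and dropped,
# so a crude lower bound on work is m * g * rise, per step.
STEP_LENGTH = 0.75  # m per step, assumed
COM_RISE = 0.05     # m the centre of mass rises per step, assumed
steps = DISTANCE / STEP_LENGTH
walking_joules = steps * MASS * G * COM_RISE

# Rolling: on level ground the dominant loss is rolling resistance,
# E = C_rr * m * g * d.
C_RR = 0.01  # rolling-resistance coefficient, assumed (tyre on asphalt)
rolling_joules = C_RR * MASS * G * DISTANCE

print(f"walking ~ {walking_joules:.0f} J over {DISTANCE:.0f} m")
print(f"rolling ~ {rolling_joules:.0f} J over {DISTANCE:.0f} m")
```

Under these assumptions rolling comes out several times cheaper on the flat; the counter-argument in the replies below is precisely that "on the flat" is the hidden assumption.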
 
I got that from what he said, because I understand English and that's what he said.



And lol, seriously, you're now trying to argue that bipedal walking platforms would be better than cars, lorries, trains, etc.?


Wheels are inherently more efficient, as they don't need to pick themselves up each time.

You seem to work on a limited number of factors at a given time; look at the bigger picture. As I said, I can't blanket claim bipedal walking is "better/more efficient", just like you can't say a combustion engine on wheels is "better/more efficient"; there are other factors, such as the actual obstacle which needs to be traversed. Each is adapted to certain tasks. And bipedal walking is humanity's own established adaptation; we need to respect that and keep using our legs.

"Bipedal walking platforms" are better for going up stairs or climbing anything. Try going up stairs or climbing something with your remote control car.

But of course the logical thing for you to say is: why do we need to climb? And then, due to your limited ability to weigh factors, you will not see how this will simply further destroy humanity. I mean, obesity is now established in humanity because of the drive to make personal energy acquisition/replenishment easier than going for a poo.
 
 
I have no idea what drivel this is, but it totally misses the goal posts. I gave an analogy using locomotion. If the task is to move from A to B, then there are lots of ways of designing a machine to do that; they don't have to emulate a human, and in fact it may be preferable not to. I have no idea what the rest of that garbage is about; since when did I state a car is intelligent? You are going to have to provide a citation if you want anyone to believe that the current approach to AI will never rival the brain. What is the evidence or proof for such a strong assertion?


So does the human brain.

Except they can already be used for intelligence, in some domains even exceeding human intelligence.






Because that is the definition of artificial intelligence. You seem to be very confused on the subject.
https://www.merriam-webster.com/dictionary/artificial intelligence




Now you are just going bat **** crazy. :confused:


Your definition of intelligence doesn't match any standard dictionary definition.

When a system possesses those capabilities, it is intelligent.


What is better than weak AI? What limitations does weak AI have? Weak AI can develop systems of far greater intelligence than strong AI. There is absolutely no requirement to understand biological intelligence in order to create artificially intelligent systems. Understanding biological intelligence is a completely different problem. Nature may provide insights; often it doesn't. As I explained before, nature is frequently limited or sub-optimal, since natural systems are developed without a goal.


ehh, that has nothing to do with the topic.




What kind of nonsensical mumbo jumbo is this? Understanding how life formed has absolutely zero relevance to engineering intelligent machines, which already exist in everybody's homes and trouser pockets today.



Are you devoutly religious or spiritual, by any chance? You seem to be suggesting there are some special qualities of humans that go beyond the laws of physics. This is somewhat flawed logic: just as earlier Christians had to realize the Earth was not built for them and was not at the centre of the solar system/universe, some of you need to realize that there is nothing special about humans. We are merely a more complex chimpanzee that evolved from our ancestors in an incredibly short amount of time. That added complexity is evidently extremely quick and easy to develop, occurring in the smallest fraction of evolutionary time.

Unless you believe in some divine intervention, e.g. that humans came from God directly and thus have supernatural capabilities, the difference between humans and animals is fairly irrelevant in the grand scheme of things.

Represented on a 24-hour clock, life on Earth appeared 3bn years ago and modern humans arrived a mere 4 seconds before midnight. Everything we take as advanced expert knowledge and intelligent behaviour manipulating the environment occurs within a small fraction of a second at this scale.

Why so hostile? Bad day? :)

My garbage, bat**** crazy drivel can be summarised as follows:

You said:
you don't need to understand how the mind works to create intelligence.

I believe the only type of intelligence that can be created without understanding how the mind works is weak AI, and the reasons why I believe this are simple. Firstly, all of the algorithms ever written and all of the neural networks or machines ever built are, at best, weak AI. Secondly, attempts to replicate the simplest biological nervous systems have failed, despite using machinery capable of billions of calculations per second and thus having computing power many orders of magnitude larger than that of the nervous system.

Weak AI is limited to the task for which it was built; it's not intelligent in the true sense of the concept, it's just a complex tool.

If the brain is a Turing-complete machine and you want to simulate it on another Turing-complete machine, you must first model things such as self-awareness or emotional intelligence. In other words, you must first understand it and model its characteristics; only then can you successfully simulate it.
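For a sense of what "modelling its characteristics" means at even the lowest level, here is the standard textbook leaky integrate-and-fire neuron model; a sketch with illustrative parameters, not a claim about any particular nervous-system simulation project:

```python
# Minimal leaky integrate-and-fire neuron. All parameters are
# illustrative textbook-style values, not fitted to real data.
TAU = 0.02         # membrane time constant, seconds
V_REST = -0.070    # resting potential, volts
V_THRESH = -0.050  # spike threshold, volts
V_RESET = -0.070   # post-spike reset potential, volts
R = 1e7            # membrane resistance, ohms
DT = 0.0001        # integration timestep, seconds

def simulate(input_current, duration):
    """Integrate dV/dt = (V_rest - V + R*I) / tau; count threshold crossings."""
    v, spikes = V_REST, 0
    for _ in range(int(duration / DT)):
        dv = (V_REST - v + R * input_current) / TAU
        v += dv * DT
        if v >= V_THRESH:
            spikes += 1
            v = V_RESET
    return spikes

# A constant 3 nA input drives repeated spiking over one second,
# while 1 nA keeps the membrane below threshold indefinitely.
print(simulate(3e-9, 1.0), simulate(1e-9, 1.0))
```

Even this caricature of one neuron needs several modelled characteristics (threshold, leak, reset); scaling to a real nervous system is the part that, as argued above, has so far defeated every attempt.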
 
It's an opinion based on the arguments I put forward. Do you think that Strong AI will rise up out of a random Weak-AI without us specifically modelling its characteristics?

I don't care.

As in the Turing test: if a computer can imitate a human, does it matter how it works?
 
It's an opinion based on the arguments I put forward. Do you think that Strong AI will rise up out of a random Weak-AI without us specifically modelling its characteristics?

I'm not sure why those are the only two options and I'm not seeing the reason for your belief that we must first be able to model biological systems before making progress in AI research.
 
I don't care.

As in the Turing test: if a computer can imitate a human, does it matter how it works?

Can someone post some examples of a computer that can imitate a human?

I've seen this turing test being mentioned a zillion times with no actual links/evidence. Unless I've missed it somewhere.


Lmao. So I've just googled Turing test example and this is the first result: http://blog.oxforddictionaries.com/2012/06/turing-test/

ELIZA: How do you do. Please tell me your problem.
Me: I am bored.
ELIZA: Is something bothering you.
Me: No I am just bored.
ELIZA: Please tell me what is bothering you.
Me: Boredom is bothering me.
ELIZA: Is something troubling you ?


Rofl, yes, it's put a space before the punctuation, so it must be human, right? :D Surely there's got to be something better than this; why the heck is that thing the top result? It even says in the article, "ELIZA does not pass a Turing test." Yeah, no ****. lol.
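For context: ELIZA-style programs work by shallow pattern matching and canned reflections, which is exactly why exchanges like the one above go in circles. A minimal sketch of the technique (these rules are invented for illustration; they are not Weizenbaum's originals):

```python
import re

# A few ELIZA-style rules: (regex pattern, response template).
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "Do you often feel {0}?"),
    (r"(\w+) is bothering me", "How long has {0} been bothering you?"),
]

def respond(message):
    """Match the message against each rule in order; fall back to a stock prompt."""
    text = message.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I am bored."))               # Why do you say you are bored?
print(respond("Boredom is bothering me."))  # How long has boredom been bothering you?
print(respond("Hello there"))               # Please tell me more.
```

There is no model of the conversation at all; any input that misses every pattern gets the same stock reply, which is why ELIZA loops when pressed.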
 
Can someone post some examples of a computer that can imitate a human?

I've seen this turing test being mentioned a zillion times with no actual links/evidence. Unless I've missed it somewhere.


Lmao. So I've just googled Turing test example and this is the first result: http://blog.oxforddictionaries.com/2012/06/turing-test/

ELIZA: How do you do. Please tell me your problem.
Me: I am bored.
ELIZA: Is something bothering you.
Me: No I am just bored.
ELIZA: Please tell me what is bothering you.
Me: Boredom is bothering me.
ELIZA: Is something troubling you ?


Rofl, yes, it's put a space before the punctuation, so it must be human, right? :D Surely there's got to be something better than this; why the heck is that thing the top result? It even says in the article, "ELIZA does not pass a Turing test." Yeah, no ****. lol.

I'm not sure how what you have posted above (a random excerpt of an old test attempt) relates to my question, which you also quoted.

Are you suggesting the concept of Turing's test is flawed, or that a computer's ability to imitate or deceive, should it pass Turing's test, is not useful evidence or a measure of artificial intelligence?


The Chinese Room point is interesting, but honestly, if machines appear to do the same as humans, would it matter that we would probably know more about the mechanism by which they do so than about those they imitate?

As for passing the "Turing Test" as outlined by Turing, I honestly don't think the money/companies looking at AI are actually pushing for that goal. All I suggested is that to claim we are in the same place we were with AI in the '60s, '70s, '80s, '90s and even early '00s is kind of silly.

This isn't a game of deception, but will you look at the natural language on that:

 
I'm not sure how what you have posted above (a random excerpt of an old test attempt) relates to my question, which you also quoted.

Are you suggesting the concept of Turing's test is flawed, or that a computer's ability to imitate or deceive, should it pass Turing's test, is not useful evidence or a measure of artificial intelligence?


The Chinese Room point is interesting, but honestly, if machines appear to do the same as humans, would it matter that we would probably know more about the mechanism by which they do so than about those they imitate?

As for passing the "Turing Test" as outlined by Turing, I honestly don't think the money/companies looking at AI are actually pushing for that goal. All I suggested is that to claim we are in the same place we were with AI in the '60s, '70s, '80s, '90s and even early '00s is kind of silly.

This isn't a game of deception, but will you look at the natural language on that:


Sorry, what I'm asking is: has anything ever passed this "Turing test"? It keeps getting mentioned by several people, and I haven't been able to find any demonstration.

Also, that is not natural language at all. In fact there is no language even involved. Just because it's synthesising text-to-speech, doesn't mean it's actively using language or talking.

This is just another tangent linked to the confusion of "intelligence". Just because a machine can perform programmed calculations, doesn't mean it's literally intelligent. In the same sense, just because speech is being synthesised and coming out of a speaker, doesn't mean anything is talking.

Surely, someone inputting "let's finish this column" into a software speech synthesizer doesn't suddenly mean the computer is talking.
 
  • Simultaneous tests as specified by Alan Turing
  • Each judge was involved in five parallel tests - so 10 conversations
  • 30 judges took part
  • In total 300 conversations
  • In each five minutes a judge was communicating with both a human and a machine
  • Each of the five machines took part in 30 tests
  • To ensure accuracy of results, Test was independently adjudicated by Professor John Barnden, University of Birmingham, formerly head of British AI Society

http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx
 
Sorry, what I'm asking is: has anything ever passed this "Turing test"? It keeps getting mentioned by several people, and I haven't been able to find any demonstration.

Also, that is not natural language at all. In fact there is no language even involved. Just because it's synthesising text-to-speech, doesn't mean it's actively using language or talking.

This is just another tangent linked to the confusion of "intelligence". Just because a machine can perform programmed calculations, doesn't mean it's literally intelligent. In the same sense, just because speech is being synthesised and coming out of a speaker, doesn't mean anything is talking.

Surely, someone inputting "let's finish this column" into a software speech synthesizer doesn't suddenly mean the computer is talking.

Hmmm, you seem not to have really understood anything of what IBM's Watson does in that video.

https://en.wikipedia.org/wiki/Jeopardy!

The show features a quiz competition in which contestants are presented with general knowledge clues in the form of answers, and must phrase their responses in the form of questions.

Clearly the question (which comes oddly in the form of a proposed answer) is in Natural Language, to be successful the contestant (human or otherwise) has to decode that natural language and apply general knowledge principles to formulate a suitable response (in the form of a Natural Language question).

Frankly the speech synth tech is not particularly important.

I do know they actually fed the questions to Watson in textual form, which lessens the achievement. But there wasn't a really good human Jeopardy player sitting inside the IBM box typing up answers for Watson to speak out in a Stephen Hawking-style "voice".

In any case, you seem to lack understanding/experience of what constitutes natural language processing.
 